I think I'm slowly coming around to this viewpoint too. I really just couldn't understand how so many people were having such widely different experiences. AI isn't magic; how could I have expected all the people I've worked with who struggle to explain stuff even to team members who have near-perfect context to manage to get anything valuable across to an AI?
I was originally pretty optimistic that AI would allow most engineers to operate at a higher level, but it really seems like instead it's going to massively exacerbate the difference between an OK engineer and a great engineer. Not really sure how I feel about that yet, but at least I understand now why some people think the stuff is useless.
Using search engines is a skill
But then my wife sort of handed me a project that previously I would have just said no to: a particular Android app for the family. I have counterparts of all the various Android technologies under my belt - that is, I've used GUI toolkits, I've used general-purpose programming languages, I've used databases, etc. - but with the possible exception of SQLite (and even that is accessed through an ORM), I don't know any of the specific technologies involved with Android now. I have never used Kotlin; I've got enough experience that I can pretty much piece it together when I'm reading it, but I can't write it. Never used the Android UI toolkit, services, permissions, media APIs, ORMs, build system, etc.
I know from many previous experiences that A: I could definitely learn how to do this, but B: it would be a many-week project, and in the end I wouldn't really be able to leverage any of the Android knowledge I'd gain for much else.
So I figured this was a good chance to take this stuff for a spin in a really hard way.
I'm about eight hours in and nearly done enough for the family; I need about another 2 hours to hit that mark, maybe 4 to really polish it. Probably another 8-12 hours and I'd have it brushed up to a rough commercial product level for a simple, single-purpose app. It's really impressive.
And I'm now convinced it's not just that I'm too old a fogey to pick it up, which is, you know, a bit of a relief.
It's just that it works really well in some domains, and not so much in others. My current work project is working through decades of organically grown cruft owned by five different teams, most of which don't even have a person who understands the cruft in question, and trying to pull it all together into the one system where it belongs. I've been able to use AI here and there for things that are still pretty impressive, like translating some of the code into pseudocode for my reference, and AI-powered autocomplete is definitely impressive when it correctly guesses the next 10 lines I was going to type, effectively letter for letter. But I haven't gotten that large-scale win where I just type a tiny prompt in and see the outsized results from it.
I think that's because I'm working in a domain where the code I'm writing is already roughly the size of the prompt I'd have to give, at least in terms of the "payload" of the work I'm trying to do, because of the level of detail and maturity of the code base. There's no single sentence I can type that an AI can essentially decompress into 250 lines of code, pulling in the correct 4 new libraries, and adding it all to the build system the way that Gemini in Android Studio could decompress "I would like to store user settings with a UI to set the user's name, and then display it on the home page".
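To make the "decompression" concrete, here's a minimal sketch of just the storage piece of that request, assuming Jetpack DataStore as the preferences mechanism (the key name and the SettingsRepository class are illustrative, not the actual generated code):

    import android.content.Context
    import androidx.datastore.preferences.core.edit
    import androidx.datastore.preferences.core.stringPreferencesKey
    import androidx.datastore.preferences.preferencesDataStore
    import kotlinx.coroutines.flow.Flow
    import kotlinx.coroutines.flow.map

    // One DataStore instance for the whole app, created lazily via a property delegate.
    val Context.settingsDataStore by preferencesDataStore(name = "settings")

    // Typed key for the stored user name (hypothetical name).
    private val USER_NAME = stringPreferencesKey("user_name")

    class SettingsRepository(private val context: Context) {
        // Emits the stored name (or "") and re-emits whenever it changes.
        val userName: Flow<String> =
            context.settingsDataStore.data.map { prefs -> prefs[USER_NAME] ?: "" }

        suspend fun setUserName(name: String) {
            context.settingsDataStore.edit { prefs -> prefs[USER_NAME] = name }
        }
    }

And that's before the settings screen, the home-page display that collects userName as state, and the Gradle dependency wiring - which is how one sentence legitimately decompresses into a couple hundred lines.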
I recommend this approach to anyone who wants to give these tools a fair shake - try them in a language and environment you know nothing about, so you aren't tempted to keep taking the wheel. The AI is almost the only tool I have in that environment, certainly the only one for writing code, so I'm forced to really exercise it.
Great Engineer + AI = Great Engineer++ (Where a great engineer isn't just someone who is a great coder, they also are a great communicator & collaborator, and love to learn)
Good Engineer + AI = Good Engineer
OK Engineer + AI = Mediocre Engineer
Now, an "effective engineer" can be a less battle-tested software developer, but they must be good at system design.
(And by system design, I don't just mean architecture diagrams: it's a personal culture of constantly asking "what might go wrong when all these assumptions collide, and what happens if one of them turns out to be incorrect?" Because AI will only suggest those things in cut-and-dry situations where a bug is apparent from a few files' context, and no ambitious idea is fully that cut-and-dry.)
The set of effective engineers is thus shifting - and it's not at all a valid assumption that every formerly good developer will see their productivity skyrocket.
If an OK engineer is still actively trying to learn, making mistakes, memorizing essentials, etc. then there is no issue.
On the other hand, if they're surrendering 100% of their judgment to AI, then they will be mediocre.
He took a couple days doing this, which was shocking to me. Such a waste of time that would have been better spent reading the code and improving any missing documentation - and most importantly asking teammates about necessary context that couldn't just be inferred from the code.
I suspect that well-engineered projects with plenty of test coverage and high-quality documentation will be easier to use AI on, just like they're easier for humans to comprehend. But you need to have somebody with the big picture still who can make sure that you don't just turn things into a giant mess once less disciplined people start using AI on a project.
That's a good insight. It's almost like, to use AI tools effectively, one needs to stop caring about the little things you'd get caught up in if you were already familiar and proficient in a stack: style guidelines, a certain idiomatic way to do things, naming conventions, etc.
A lot like how I've stopped organizing digital files into folders, subfolders, etc. (along with other content) and now just rely on search. Everything is a flat structure; I don't care where it's stored or how it's organized as long as I can just search for it. That's what the computer is for: to keep track for me so I don't have to waste time organizing it myself.
Likewise for the code generative AI produces. I don't need to care about the code itself. As long as it's correct, not insecure, and performant, it's fine.
It's not 100% there yet; I still have to go in and touch the code. But ideally I shouldn't have to, nor should I have to care what the actual code looks like, just the result of it. Let the computer manage that, not me. My role should be the system design and specification, not writing the code.
The people deciding how much OpenAI is worth would probably struggle to run first-time setup on an iPad.
I don't think that it lowers the bar there, if anything the bar is far harsher.
If I'm doing normal coding, I make X choices per time period, with Y total impact.
With AI, X goes up, and the Y/X ratio may ALSO go up - so I'm making more decisions, each with higher leverage!
The reason being that the boilerplate Android stuff is effectively given for free and not part of the context, as it is so heavily represented in the training set, whereas the unique details of your work project are not. But finding a way to provide that context - or better yet, fine-tune the model on your codebase - would put you in the same situation, and there's no reason for it not to deliver the same results.
That it is not working for you now at your complex work projects is a limitation of tooling, not something fundamental about how AI works.
Aside: Your recommendation is right on. It clicked for me when I took a project that I had spent months of full-time work creating in C++, and rewrote it in idiomatic Go, a language I had never used and knew nothing about. It took only a weekend, and at the end of the project I had reviewed and understood every line of generated code & was now competent enough to write my own simple Go projects without AI help. I went from skeptic to convert right then and there.
However, the information-theoretic limitation of expressing what you want and how anyone, AI or otherwise, could turn that into commits, is going to be quite the barrier, because that's fundamental to communication itself. I don't think the skill of "having a very, very precise and detailed understanding of the actual problem" is going anywhere any time soon.
Nassim Taleb is the prophet of our times and he doesn't get enough credit.
(1) The process of creating "a very, very precise and detailed understanding of the actual problem" is something AI is really good at when partnered with a human. My use of AI tools got immensely better when I figured out that I should be prompting the AI to turn my vague short request into a detailed prompt, which I then spend a few iteration cycles fixing up before asking the agent to execute it.
(2) The other problem of managing context is a search and indexing problem, which we are really, really good at and have lots of tools for, but AI is just so new that these tools haven't been adapted or seen wide use yet. If the limitation of the AI was its internal reasoning or training or something, I would be more skeptical. But the limitation seems to be managing, indexing, compressing, searching, and distilling appropriate context. Which is firmly in the domain of solvable, albeit nontrivial problems.
I don't see the information theoretic barrier you refer to. The amount of information an AI can keep in its context window far exceeds what I have easily accessible to my working memory.
The gist being: language (text input) is actually the vehicle you have to use to transfer neural state to the engine. When you are working on a greenfield project or a pure-vibe project, you can get away with most of that neural state being in the "default" probability mode. But in a legacy project, you need significantly more context to constrain the probability distributions a lot closer to the decisions that were made historically.
This makes me a little sad. Part of the joy of writing software is expressing yourself through caring about these little things. Stylistic coherence, adhering to consistent naming conventions, aligning code blocks, consistently applying patterns, respecting the way the language and platform's APIs work together rather than fighting it... heck, even tiny things like alphabetizing header declarations. None of these things make the finished product better/faster/more reliable, but all of these demonstrate something about the author: What he believes in. That he is willing to sand, polish and beautify the back of the cabinet that nobody is going to see. As Steve Jobs said:
"Even though it faces the wall and nobody will ever see it. You’ll know it’s there, so you’re going to use a beautiful piece of wood on the back. For you to sleep well at night, the aesthetic, the quality, has to be carried all the way through."
If you surrender to the AI, you're no longer carrying the aesthetic and quality all the way through. You're abandoning the artistry. You're living with the barf because it works, and because it's much harder to go back and beautify it than it is to build it beautifully from the start.

There are so many unusual or one-off use cases that would have normally required me to spend several hours locating and reading API documentation that I now just hand off to the AI. I am a big believer in their value. I'm getting more done than ever.
Now, I could believe an intern would do such a thing. I've seen a structural engineering intern spend four weeks creating a finite element model of a single concrete vault. He could have treated the top deck as a concrete beam, used conservative assumptions about the loading, and solved it with pen and paper in 30 minutes.
However, that way of working can be exasperating for those who prefer a more deterministic approach, and who may feel frustrated by the sheer amount of slightly incorrect stuff being generated by the machine.
But then I suppose I should learn from my own experiences and not try to make information theoretic arguments on HN, since it is in that most terrible state where everyone thinks they understand it because they use "bits" all the time, but in fact the average HN denizen knows less than nothing about it because even their definition of "bit" actively misleads them and that's about all they know.