- Copilot is qualitatively different from the kinds of automation of programming we've seen before.
- It's barely version 1.0 of this kind of thing. Deep learning has been advancing at an incredible pace for a decade and doesn't seem to be slowing down. Researchers are also working on things like mathematical problem-solving, which would tie into "real work" and not just the boilerplate.
- In past examples of AI going from subhuman to superhuman, e.g. chess and Go, the practitioners did not expect to be overtaken just a few years after the machines had not even felt like real competition. You'd read quotes from them about the ineffability of human thought.
What to do personally, I don't know. Stay flexible?
Put it this way: in 5 years, will there be an AI that's better than 90% of unassisted working programmers at solving new leetcode-type coding interview questions posed in natural language? Arranging an actual bet is too annoying, but that development in that timeframe doesn't seem unlikely to me. It might take more than a scaled-up GPT, but as I said, people are working on those other directions too.
In that future, already, the skills you get hired for are different from now (and not just in the COBOL-versus-C sense). Maybe different people with a quite different mix of talents are the ones doing well.
Yes, and there were people in the 1960s who thought computers of the time were only a decade away from being smarter than humans. The question is one of category -- Go is something a computer could conceivably be better than a human being at, and there were certainly Go programs better than some human beings at that time. "Reading a human-language document, communicating with stakeholders to understand the requirements in human language, understanding the business requirements of a large codebase, and writing human-readable code" is so categorically different from what Copilot does, and something that no computer is currently capable of. If such a thing is even possible, we haven't even begun to tackle it.
> in 5 years will there be an AI that's better than 90% of unassisted working programmers at solving new leetcode-type coding interview questions posed in natural language?
I think that's highly unlikely, but it is within the bounds of possibility given what we know about AI currently (and probably, like GPT, it will only work under specific constraints). But the gap between that and what an engineer does on a daily basis is enormous.
I could see it being huge for the GUI scraping market. Or imagine a browser plugin that watches what you're doing and then offers to 'rewire' the web page to match your habits.
Imagine some sort of Clippy assistant watching where you hover attention as you read HN for example. After a while it says 'say, you seem to be more interested in the time series than the tree structure, would you rather just look at the activity on this thread than the semantics?' Or perhaps you import/include a sentiment analysis library, and it just asks you whether you want the big picture for the whole discussion, or whether you'd rather drill down to the thread/post/sentence level.
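The drill-down idea could be sketched like this, as a toy: score each sentence with a trivial word lexicon (a stand-in for whatever sentiment-analysis library you'd actually import; the word lists and function names here are invented for illustration), then aggregate up to post and thread level so the reader can choose their zoom.

```python
# Toy sketch of sentence -> post -> thread sentiment aggregation.
# A real assistant would swap sentence_score() for an actual
# sentiment-analysis library; this lexicon exists only to make
# the aggregation levels concrete.

POSITIVE = {"love", "elegant", "great", "useful"}
NEGATIVE = {"broken", "awful", "hate", "useless"}

def sentence_score(sentence: str) -> int:
    """Crude score: +1 per positive word, -1 per negative word."""
    words = sentence.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def post_score(post: str) -> int:
    """Sentence-level scores summed into a per-post score."""
    return sum(sentence_score(s) for s in post.split("."))

def thread_score(posts: list[str]) -> int:
    """Per-post scores summed into the thread-level 'big picture'."""
    return sum(post_score(p) for p in posts)

thread = [
    "I love this tool. The UI is elegant.",
    "The plugin API is broken. Docs are awful too.",
]
# Drill down: per-post scores; big picture: one thread total.
print([post_score(p) for p in thread], thread_score(thread))
```

The point isn't the scoring (which is deliberately dumb) but the interface: the same underlying numbers support the whole-discussion summary or the per-sentence view, depending on what the assistant infers you want.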
I notice all the examples I'm thinking of revolve around a simple pattern of someone asking 'Computer, here is a Thing; what can you tell me about it?' and the computer offering simplified schematic representations that allow the person to home in on features of interest and either do things with them or posit relationships to other features. This will probably set off all sorts of arms races, e.g. security people will want to misdirect AI-assisted intruders, and marketers will probably want to start rendering pages as flat graphics to maintain brand differentiation and engagement, versus a 'clean web' movement that wants to get rid of visual cruft and emphasize the basic underlying similarities.
It will lead to quite bitter arguments about how things should be: you'll have self-appointed champions of aesthetics saying that AI is decomposing the rich variety of human creativity into some sort of borg-like formalism that reflects its autistic creators, and information liberators accusing the first group of being obscurantist tyrants trying to profit off making everything more difficult and confusing than it needs to be.
A programmer's job bridges the informal and the formal. Previous automation was practically always about helping you work with the formal end; a tool that can bridge the informal and the formal on its own is new. That was my first point, and it's the most basic reason I'm suspicious of dismissals. These developments don't have to be a 100% substitute for a current human programmer to change the economics of which talents are rewarded.