Is it irrational that this makes me a little anxious about job security over the long term? I don't know why, but that was my initial reaction when I learned about this.
In a scenario where Copilot and its like see widespread use, could it be argued that this improves productivity but stifles innovation?
I'm pretty early in my career, but the rate at which things could soon change doesn't sit well with me.
In the '50s, we programmed computers with punch cards. Who does now? How many web developers today could tell the difference between `malloc` and `calloc`? Probably not that many.
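(For anyone who's forgotten, a minimal C sketch of that difference: `malloc` takes a single byte count and leaves the memory uninitialized, while `calloc` takes a count and an element size, zero-fills the block, and, in any sane implementation, checks the multiplication for overflow.)

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* malloc: one size argument; contents are indeterminate */
    int *a = malloc(10 * sizeof *a);

    /* calloc: count + element size; contents are zero-filled,
       and the count*size multiplication is checked for overflow */
    int *b = calloc(10, sizeof *b);

    if (a == NULL || b == NULL) {
        free(a);
        free(b);
        return 1;
    }

    printf("b[0] = %d\n", b[0]);   /* always prints 0 */
    /* reading a[0] here would be undefined behavior: it was never set */

    free(a);
    free(b);
    return 0;
}
```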
For a lot of developers, programming today bears little resemblance to programming decades ago. Copilot is like any other innovation: it obsoletes some skills and introduces new ones.
I doubt Copilot will reduce the need for engineers, but it may change the work they do. That's no different from any other industry.
I've been programming since the '80s. It's my opinion that the age of humans writing code is coming to a close. Perhaps another 20 years or so, with the peak in ~10; but I'm less certain about the timeline than the destination. There will still be a long tail, but most of the human work will shift to design and wrangling algorithms. The remnant will be hobbyists, such as Commodore 64 programmers today.
Future developers might be more like architects guiding AIs and then occasionally jumping in to hand-hold or correct the result.
- Copilot is qualitatively different from the kinds of automation of programming we've seen before.
- It's barely version 1.0 of this kind of thing. Deep learning has been advancing incredibly for a decade and doesn't seem to be slowing down. Researchers are also working on things like mathematical problem-solving, which would tie into "real work" and not just boilerplate.
- In past examples of AI going from subhuman to superhuman, e.g. chess and Go, the practitioners did not expect to be overtaken just a few years after the machines hadn't even felt like real competition. You'd read quotes from them about the ineffability of human thought.
What to do personally, I don't know. Stay flexible?
It would be more like if we still wrote asm, but had editors that let you write a little C code and then spit out a paragraph of 'reasonable' asm that still has to be maintained.
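Roughly like this, say: a few lines of C become a paragraph of assembly that, in this hypothetical workflow, you'd then keep and maintain by hand. The asm in the comment below is only roughly what a typical x86-64 compiler might emit; exact output varies by compiler and flags.

```c
/* The "little C code" you'd write in the editor... */
int sum(const int *xs, int n) {
    int total = 0;
    for (int i = 0; i < n; i++)
        total += xs[i];
    return total;
}

/* ...and the 'reasonable' asm it spits out for you to maintain;
   roughly what an x86-64 compiler might emit (xs in rdi, n in esi):

   sum:
           xor     eax, eax          ; total = 0
           xor     ecx, ecx          ; i = 0
   .Lloop:
           cmp     ecx, esi          ; i < n ?
           jge     .Ldone
           movsxd  rdx, ecx
           add     eax, [rdi+rdx*4]  ; total += xs[i]
           inc     ecx
           jmp     .Lloop
   .Ldone:
           ret
*/
```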
I expect most human intellectual activities of today (from coding to scriptwriting to medicine) can be performed by machines if the current trend continues.
Put it this way: in 5 years, will there be an AI that's better than 90% of unassisted working programmers at solving new leetcode-type coding interview questions posed in natural language? Arranging an actual bet is too annoying, but such a development in that timeframe doesn't seem unlikely. It might take more than a scaled-up GPT, but as I said, people are working on those other directions too.
Already in that future, the skills you get hired for are different from today's (and not just in the COBOL-versus-C sense). Maybe different people, with a quite different mix of talents, are the ones doing well.
Yes, and there were people in the 1960s who thought computers of the time were only a decade away from being smarter than humans. The question is one of category: Go is something that a computer could conceivably be better than a human being at, and there were certainly Go programs better than some human beings at that time. "Reading a human language document, communicating with stakeholders to understand the requirements in human language, understanding the business requirements of a large codebase, and writing human-readable code" is categorically different from what Copilot does, and something that no computer is currently capable of. If such a thing is even possible, we haven't even begun to tackle it.
> in 5 years will there be an AI that's better than 90% of unassisted working programmers at solving new leetcode-type coding interview questions posed in natural language?
I think that's highly unlikely, but it is within the bounds of possibility given what we know about AI currently (and probably, like GPT, it will only work under specific constraints). But the gap between that and what an engineer does on a daily basis is enormous.
I could see it being huge for the GUI scraping market. Or imagine a browser plugin that watches what you're doing and then offers to 'rewire' the web page to match your habits.
Imagine some sort of Clippy assistant watching where your attention hovers as you read HN, for example. After a while it says, 'Say, you seem to be more interested in the time series than the tree structure; would you rather just look at the activity on this thread than the semantics?' Or perhaps you import/include a sentiment analysis library, and it just asks you whether you want the big picture for the whole discussion, or whether you'd rather drill down to the thread/post/sentence level.
I notice all the examples I'm thinking of revolve around a simple pattern of someone asking 'Computer, here is a Thing; what can you tell me about it?' and the computer offering simplified schematic representations that allow the person to home in on features of interest and either do things with them or posit relationships to other features. This will probably set off all sorts of arms races, e.g. security people will want to misdirect AI-assisted intruders, and marketers will probably want to start rendering pages as flat graphics to maintain brand differentiation and engagement, versus a 'clean web' movement that wants to get rid of visual cruft and emphasize the basic underlying similarities.
It will lead to quite bitter arguments about how things should be; you'll have self-appointed champions of aesthetics saying that AI is decomposing the rich variety of human creativity into some sort of borg-like formalism that's a reflection of its autistic creators, and information liberators accusing the first group of being obscurantist tyrants trying to profit off making everything more difficult and confusing than it needs to be.
A programmer's job bridges the informal and the formal. Previous automation was practically always about helping you work with the formal end. A tool that can bridge the informal and the formal on its own is new. That was my first point, and most basically why I'm suspicious of dismissals. These developments don't have to substitute 100% for a current human programmer to change the economics of what talents are rewarded.
Every time this happens, everyone just shifts the goalposts and wants more features, faster. The majority of software out there sucks. If programmers are now 2x faster, users will demand that some random CRUD app be at Google software quality. And Google's software will be unimaginable by today's standards.
All of this will increase the value delivered by software, which will bring in greater revenue, which will be reinvested in more developers.
No one knows what the future holds, so some anxiety is just fuel for adaptation.
For example, should Copilot see widespread use, the number and scale of projects that have to be maintained expands too. Moreover, making sense of a patchwork quilt of code written, I'd guess, across many iterations/versions of Copilot's prowess is very much a secure, if soul-killing, job for many. Not much different from what maintenance jobs are now. You're lucky when a project retains some clear overarching architecture/style.
Anybody remember the joys of GUI wizards, with tons of auto-generated code that "just works, just now"? Remember that desire to suggest a healthy rewrite? Well, now you could probably also promise that it would be an even quicker rewrite!
But even then, the final responsibility for the code is on the programmer. One could perhaps forge the code quicker, but code review is still supposed to be a human's job. Hopefully.