I know what you mean, but thinking about it critically, this is just wrong. All software has bugs in it. Small bugs, big bugs, critical bugs, security bugs, everything. No code is immune. The largest software used by millions every day has bugs. Library code that has existed and been in use for 30 years has bugs.
I don't think you were actually thinking of this in your comparison, but I think it's actually a great analogy - code, like art, can be 95% complete, and that's usually enough. (For art, "looks good and is what I wanted" is enough; for code, "does what I want right now, never mind edge cases" is enough.)
To me, code that is 95% correct will either fail catastrophically or give very wrong results. Imagine if the code you wrote was off by 5% for every number it was supposed to generate. Code that is 99.99% correct will introduce subtle bugs.
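To make that concrete, here is a toy sketch (mine, not anything ChatGPT wrote; the function name and numbers are made up) of what "99.99% correct" tends to look like in practice - code that passes casual testing and is only wrong on one family of inputs:

```typescript
// Hypothetical example: page count for a paginated list.
// It gives the right answer for almost every input, and is off by one exactly
// when the item count is a multiple of the page size, the kind of subtle bug
// that survives a quick test run and quietly corrupts results later.
function pageCount(totalItems: number, pageSize: number): number {
  return Math.floor(totalItems / pageSize) + 1; // bug: should be Math.ceil(totalItems / pageSize)
}

pageCount(95, 10);  // 10 (correct)
pageCount(100, 10); // 11 (wrong: there are exactly 10 pages)
```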
* No shade to ChatGPT - writing a function that calculates SHAP values is tough, lol; I just wanted to see what it could do. I do think that, given time, it'll be able to write a day's worth of high-quality code in a few seconds.
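(For anyone curious why it's tough: the exact Shapley value of a feature is a weighted average of its marginal contribution over every coalition of the other features, which is exponential in the number of features. A rough brute-force sketch of just that core formula, where shapleyValues and valueOf are made-up names and valueOf stands in for "model output given only these features" - the part real SHAP libraries spend most of their effort approximating:)

```typescript
// Brute-force Shapley values for a small number of features.
// phi[i] = sum over coalitions S not containing i of
//          |S|! (n - |S| - 1)! / n! * (valueOf(S + {i}) - valueOf(S))
function shapleyValues(
  nFeatures: number,
  valueOf: (coalition: number[]) => number
): number[] {
  const factorial = (n: number): number => (n <= 1 ? 1 : n * factorial(n - 1));
  const phi = new Array<number>(nFeatures).fill(0);

  // Enumerate all 2^n coalitions as bitmasks.
  for (let mask = 0; mask < 1 << nFeatures; mask++) {
    const coalition = [...Array(nFeatures).keys()].filter(
      i => (mask & (1 << i)) !== 0
    );
    for (let i = 0; i < nFeatures; i++) {
      if (mask & (1 << i)) continue; // feature i is already in this coalition
      const weight =
        (factorial(coalition.length) *
          factorial(nFeatures - coalition.length - 1)) /
        factorial(nFeatures);
      phi[i] += weight * (valueOf([...coalition, i]) - valueOf(coalition));
    }
  }
  return phi;
}

// Toy check: for the additive game valueOf(S) = |S|, every feature gets credit 1.
shapleyValues(3, coalition => coalition.length); // [1, 1, 1]
```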
This is the main reason I haven't actually incorporated any AI tools into my daily programming yet - I'm mindful that I might end up spending more time tracking down issues in the auto-generated code than I saved using it in the first place.
[0] You can see the results here https://twitter.com/NickFisherAU/status/1601838829882986496
Clearly ChatGPT is going to improve, and AI development is moving at a breakneck pace and accelerating. Dinging it for totally fumbling 5% or 10% of written code is completely missing the forest for the trees.
It can't? I could've sworn I've seen (cherry-picked) examples of it doing exactly that, when prompted. It even explains what the bug is and why the fix works.
I have no idea if the AI that's getting code 80% right today will get it 95% right in two years, but given current progress, I wouldn't bet against it. I don't think there's any fundamental reason it can't produce better code than I can, at least not at the "write a function that does X" level.
Whole systems are a much harder problem, one I wouldn't even try to make guesses about.
I'm underwhelmed by ChatGPT because the hype around it is largely the same as it was for GitHub Copilot, and Copilot fizzled badly. (Full disclosure: I pay for Copilot because it is somewhat useful.)
Both answers were orders of magnitude wrong, and vastly different from each other.
JS code suggested for a simple database connection had glaring SQL injection vulnerabilities.
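To give a sense of the kind of thing I mean, here is the general shape of it (a reconstruction for illustration, not the exact suggestion; sqlite3 is an arbitrary driver choice and the function names are made up):

```typescript
import sqlite3 from "sqlite3"; // arbitrary driver picked for the example

const db = new sqlite3.Database("app.db");

// The vulnerable shape: user input interpolated straight into the SQL string.
// An email like  ' OR '1'='1  turns this into a query matching every row,
// and nastier payloads can do far worse.
function findUserUnsafe(email: string) {
  db.all(`SELECT * FROM users WHERE email = '${email}'`, (err, rows) => {
    /* ... */
  });
}

// What it should have suggested: a parameterized query, so the driver treats
// the input as data rather than as SQL.
function findUserSafe(email: string) {
  db.all("SELECT * FROM users WHERE email = ?", [email], (err, rows) => {
    /* ... */
  });
}
```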
I think it's an ok tool for discovering new libraries and getting oriented quickly to languages and coding domains you're unfamiliar with. But it's more like a forum post from a novice who read a tutorial and otherwise has little experience.
There's no fundamental reason it can't be the world expert at everything, but that's not a reason to assume we know how to get there from here.
The fundamental design of the transformer architecture isn't capable of what you think it is.
There are still radical, fundamental breakthroughs needed. It's not a matter of incremental improvement over time.
Whether it's 95% or 99.9% correct, when there is a serious bug you're still going to need people who can close the gap between almost correct and actually correct.
Sure, you can't stick an entire project in there, but if you know the problem is in class Baz, just toss in the relevant code and it does a pretty damn good job.
> I know what you mean, but thinking about it critically, this is just wrong. All software has bugs in it. Small bugs, big bugs, critical bugs, security bugs, everything. No code is immune. The largest software used by millions every day has bugs. Library code that has existed and been in use for 30 years has bugs.
All software has bugs, but it's usually far better than "95% right." Code that's only 95% right probably wouldn't survive half-assed testing or a couple of days of actual use.
The problem of a vengeful god who demands the slaughter of infidels lies not in his existence or nonexistence, but in people's belief in such a god.
Similarly, it does not matter whether AI works or not. It's irrelevant how good it actually is. What matters is whether people "believe" in it.
AI is not a technology, it's an ideology.
Given time it will fulfil its own prophecy, as "we who believe" steer the world toward it.
That's what's changing now. It's in the air.
The ruling classes (those who own capital and industry) are looking at this. The workers are looking too. Both of them see a new world approaching, and actually everyone is worried. What is under attack is not the jobs of the current generation, but the value of human skill itself, for all generations to come. And, yes, it's the tail of a trajectory we have been on for a long time.
It isn't the only way computers can be. There is IA (intelligence amplification) instead of AI. But intelligence amplification goes against the principles of capital at this stage. Our trajectory has been to make people dumber in service of profit.
I'm a bit surprised that it got a lookup wrong, but for any other domain, describing it as a "novice" is understating the situation a lot.
Current SOTA: https://openai.com/blog/vpt/
It's obvious how an expert at regurgitating StackOverflow would be able to correct an NPE or an off-by-one error when given the exact line of code that error is on. Going any deeper, and actually being able to find a bug, requires understanding of the codebase as a whole and the ability to map the code to what the code actually does in real life. GPT has shown none of this.
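To illustrate the level I'm talking about, a toy example (mine, not GPT output; the function is made up):

```typescript
// Hand it this exact line and the bug is mechanical to spot and explain.
function lastItem<T>(items: T[]): T | undefined {
  return items[items.length]; // off by one: indexes past the end, always undefined
  // the purely local fix: items[items.length - 1]
}
```

Explaining that fix is pattern-matching. Working out why items is unexpectedly empty because of something another module did three calls earlier is the part that needs a model of the whole system, and that's what I haven't seen.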
"But it will get better over time" arguments fail for this because the thing that's needed is a fundamentally new ability, not just "the same but better." Understanding a codebase is a different thing from regurgitating StackOverflow. It's the same thing as saying in 1980, "We have bipedal robots that can hobble, so if we just improve on that enough we'll eventually have bipedal robots that beat humans at football."
Wow, yes. This is exactly what I've been thinking but you summed it up more eloquently.
The political and social factors that apply to the life-and-death decisions made while driving a car don't apply to whether one of the websites I work on works perfectly.
I'm 35, and I've been paid to write code for about 15 years. To be honest, ChatGPT probably writes better code than I did at my first paid internship. It's got a ways to go to catch up with even a junior developer, in my opinion, but it's only a matter of time.
And how much time? The expectation in the US is that my career will last until I'm 65ish. That's 30 years from now. Tesla has only been around 19 years and now makes self-driving cars.
So yeah, I'm not immediately worried that I'm going to lose my job to ChatGPT in the next year, but I am quite confident that my role will either cease existing or drastically change because of AI before the end of my career. The idea that we won't see AI replacing professional coders in the next 30 years strains credulity.
Luckily for me, I've already considered some career changes I'd want to make even if I weren't forced to by AI. But if folks my age were planning to finish out their careers in this field, they should come up with an alternative plan. And people just starting out in this field are already in direct competition to stay ahead of AI.
Roads are extremely regular, as things go, and as soon as you are off the beaten path those AIs start having trouble too.
It seems that, in general, the long tail will be problematic for a while yet.
It may be a significant chunk of the butt-in-seat time under our archaic 40-hour-week paradigm, but it's not a significant chunk of the programmer's actual mental effort. You're not going to get people to work 5x more intensely by automating the boring stuff; that was never the limiting factor.
Other architectures exist, but you can tell from how little people talk about them that they don't produce output anywhere near as developed as the ChatGPT kind. They will get there eventually, but that's not what we are seeing here.
What is that?
But of all the examples of cheap and convenient beating quality (photography, film, music, and the many other industries digital technology has disrupted), newspapers are more analogous than builders. Software companies are publishers, like newspapers. And newspapers had entire building floors occupied by highly skilled mechanical typesetters, who have long since been replaced. A handful of employees on a couple of computers could do the job faster, more easily, and at good enough quality.
Software has already disrupted everything else; eventually it was bound to disrupt the process of making software.
In what sense did Copilot fizzle badly? It's a tool that you incorporated into your workflow and that you pay money for.
Does it solve all of programming? No, of course not, and it's far from there. I think even if it improves a lot, it will not come close to replacing a programmer.
But a tool that lets you write code 10x or 100x faster is a big deal. I don't think we're far away from a world in which every programmer has to use AI to be even somewhat proficient at their job.
I think the majority won't know what hit them when the time comes. My experience with ChatGPT has been highly positive, changing me from a skeptic into a believer. It takes a bit of skill to tune the prompts, but I got it to write frontend code, backend code, unit tests, automation tests, and test data flawlessly. I have seen and worked with developers much worse than this current iteration.
To get good output at larger scales, we're going to need a model that is hierarchical, with longer-term self-attention.
The other kinds of AI that seem able to arrive at novel solutions basically use either a brute-force approach of predicting every outcome when they have perfect information, or a brute-force process of trying everything until they find the thing that "works". Both of those approaches seem problematic in the "real world". (Though I would find convincing the argument that billions of people all trying things act as a de facto brute-force approach in practice.)
For someone to come up with a novel implementation in a field dominated by AI might be impossible, because humans can't develop the core foundational skills anymore that would let them reach heights the AI hasn't reached yet. We are now stuck: things can't really get "better"; we just get, maybe, iterative improvements on how the AI implements the solutions it has already arrived at.
TL;DR: let's sic the AI on making a new JavaScript framework and see what happens :)
It makes sense. My own experience, driving a non-Tesla car at the speed limit nearly all the time, is that other drivers will try to pressure you into doing dangerous stuff so they can get where they're going a few seconds faster. I sometimes give in to that pressure, but the AI doesn't feel it at all. So if you're paying attention and see the AI not giving in, the tendency is to take manual control so you can. But that's not safer; quite the opposite. That's an example of the AI driving better than the human.
On the opposite end of the social anxiety spectrum, there's a genre of pornography where people are having sex in the driver's seats of Teslas while the AI is driving. They certainly aren't intervening 3 times in 20 minutes, and so far I don't know of any of these people getting in car accidents.