This is because most people on HN who say they are skeptical about AI mean that they are skeptical of AI capabilities. This is usually paired with statements that AI is "hitting a wall." See e.g.
> I'm very skeptical. I see all the hype, listen to people say it's 2 more years until coding is fully automated but it's hard for me to believe seeing how the current models get stuck and have severe limitations despite a lot of impressive things it can do. [>>44015865 ]
> As someone who is mildly skeptical of the current wave of LLM hype and thinks it's hitting a wall... [>>43634169 ]
(That was what I found with about 30 seconds of searching; I could probably find dozens of examples with more time.)
I think software developers urgently need to think about the consequences of what you're saying: what happens if the capabilities AI companies claim are coming actually do materialize soon? What would that mean for society? Would that be good, or bad? Would it be catastrophic? How crazy do things get?
Or, to put it more bluntly: "if AI really goes crazy, what kind of future do you want to fight for?"
Pushing back on the wave because you take AI capabilities seriously is exactly what more developers should be doing. But dismissing AI as a skeptic of its capabilities is a great way to cede the ground on actually shaping where things go for the better.
I’m definitely not skeptical of its abilities; I’m concerned by them.
I’m also skeptical that the AI hype is going to pan out in the manner people say it will. If most engineers produce average or crappy code, how are they going to know whether the code they’re using is a disaster waiting to happen?
Verifying that an output is safe depends on expertise, and that expertise is gained through the creation of average or bad code. There's a conflict there: the process that builds the expertise is the very one these tools replace, and it will have to be resolved.
These LLMs may not be inherently evil, but their impact on society could be destabilising.
I'm not saying there is no evil here, but that particular argument carries little weight.
These systems (LLMs, diffusion) yield imitative results that are just powerful enough to eventually threaten the jobs of most non-manual laborers, yet not powerful enough (in terms of their capability to reason, predict, and simulate) to solve the hard problems AI was promised to solve, like accelerating cancer research.
To put it another way: in their present form, even with significant improvement, how many years of life expectancy can we expect these systems to add? My guess is zero. But I can already see them making a huge chunk of graphic designers, artists, actors, programmers, and other office workers redundant.