Does no one on HN believe in this anymore? Isn't this tech startup community meant to be the tip of the spear? We'll find out by 2030 either way.
Have we not seen enough of these people to know their character? They're predators who, by all accounts, sacrifice every potentially meaningful personal relationship for money, long after they have more than most people could ever dream of. If we legalized gladiatorial blood sport and it became a billion-dollar business, they'd be doing that. If monkey torture porn were a billion-dollar business, they'd be doing that.
Whatever the promise of actual AI (and not just performative LLM garbage), if created they will lock the IP down so hard that most of the population will not be able to afford it. Rich people get Ozempic, poor people get body positivity.
For example, it's no longer about what you can build tinkering in your home or garage, or what algorithm you can crack through sheer talent to open up new use cases and possibilities; it's about capital, relationships, hardware, and politics. A recent article that went around, along with many others, argues that capital and wealth will matter more and make "talent" obsolete in the world of AI; the enormous figure in that article just adds weight to the hypothesis.
All this means the big get bigger. It isn't about startups, grinding, working hard, or being smarter, which means it isn't really meritocratic. This creates an uneven playing field, quite different from previous software technology phases, where the gains (and access to them) were more distributed and democratized, and mostly reachable by the talented and hard-working (e.g. the risk-taking startup entrepreneur with coding skills and a love of tech).
In some ways it is kind of the opposite of the indie hacker stereotype, who ironically is probably one of the biggest losers in the new AI world. In the new world, what matters is wealth and ownership of capital, relationships, politics, land, resources, and other physical and social assets. In the new AI world, scammers, PR people, salespeople, politicians, and the ultra-wealthy with power thrive, and nepotism and connections are the main advantage. You don't just see this in AI, btw (e.g. recent meme coins seen as a better path to wealth than working, thanks to a loose link to a powerful figure), but AI, like any tech, amplifies the capability of people with power, especially since, by definition, the powerful no longer need to be smart, or need other smart people, to wield it, unlike with past technologies.
They needed smart people in the past; we may be approaching a world where the smart people make themselves as a whole redundant. I can understand why a place like this doesn't want that to succeed, even if the world's resources are being channeled to that end. Time will tell.
The average person's utility from AI is marginal. But to a psychopath like Elon Musk who is interested in deceiving the internet about Twitter engagement or juicing his crypto scam, it's a necessary tool to create seas of fake personas.
I joined in 2012, and have been reading since 2010 or so. The community definitely has changed since then, but the way I look at it is that it actually became more reasoned as the wide-eyed and naive teenagers/twenty-somethings of that era gained experience in life and work, learned how the world actually works, and perhaps even got burned a few times. As a result, today they approach these types of news with far more skepticism than their younger selves would. You might argue that the pendulum has swung too far towards the cynical end of the spectrum, but I think that's subjective.
Look at who is president, or who is in charge of the biggest companies today. It is extremely clear that intelligence is not a part of the reason why they are there. And with all their power and money, these people have essentially zero concern for any of the topics you listed.
There is absolutely no reason to believe that if artificial superintelligence is ever created, all of a sudden the capitalist structure of society will get thrown away. The AIs will be put to work enriching the megalomaniacs, just like many of the most intelligent humans are.
At that point, it's not technology, it's religion (or even borders on cult-like thinking).
One time I bought a can of what I clearly thought was human food. Turns out it was just well dressed cat food.
> to unlimited energy, curing disease, farming innovations to feed billions,
Aw, they missed their favorite hobby horse: "the children." Then again, you might have to ask why we'd even bother educating children if there are going to be "superintelligent" computers.
Anyways... all this stuff will then be free, right? Is someone going to "own" the superintelligent computer? That's an interesting question that gets entirely left out of the futurism fantasy.
No.
I mean, I had some faith in these things 15 years ago, when I was young and naive, and my heroes were too. But I've seen nearly all those heroes turn to the dark side. There's only so much faith you can have.
The lesson of everything that has happened in tech over the past 20 years is that what tech can do and what tech will do are miles apart. Yes, AGI could give everyone a free therapist to maximize their human well-being and guide us to the stars. Just like social media could have brought humanity closer together and been an unprecedented tool for communication, understanding, and democracy. How'd that work out?
At some point, optimism becomes willfully blinding yourself to the terrible danger humanity is in right now. Of course founders paint the rosy version of their product's future. That's how PR works. They're lying - maybe to themselves, and definitely to you.
I continue to be amazed at how desperate some of us are to live in Disney's Tomorrowland: we worship non-technical guys with lots of money who simply tell us that's what they're building, despite all their actions to the contrary, sometimes bald-faced statements to the contrary (though always dressed up in faux-optimistic tones), and the negative anecdotes of pretty much anyone who gets close to them.
A lot of us became engineers because we were inspired by media, NASA, and the pretty pictures in Popular Science. And it sucks to realize that most, if not all, of that stuff isn't going to happen in our lifetimes, if at all. But you know what guarantees it won't happen? Guys like Sam Altman and Larry Ellison at the helm, and blind faith that just because they have money and speak passionately, they somehow share your interests.
Or are you that guy who asks the car salesman for advice on which car he should buy? I could forgive that a little more, because the car salesman hasn't personally gone on the record about how he plans to use his business to fuck you.
AGI can go wrong in innumerable ways, most of which we cannot even imagine now, because we are limited by our 1x human intelligence.
The liftoff conditions literally have to be near perfect.
So the question is, can humanity trust the power hungry billionaire CEOs to understand the danger and choose a path for maximum safety? Looking at how it is going so far, I would say absolutely not.
I don't consider models suddenly lifting off and acquiring 1000 times human intelligence to be a realistic outcome. To my understanding, that belief is usually based around the idea that if you have a model that can refine its own architecture, say by 20%, then the next iteration can use that increased capacity to refine even further, say an additional 20%, leading to exponential growth. But that ignores diminishing returns; after obvious inefficiencies and low-hanging fruit are taken care of, squeezing out even an extra 10% is likely beyond what the slightly-better model is capable of.
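The diminishing-returns argument can be made concrete with a toy model. This is purely illustrative (the decay rate and gain numbers are assumptions, not anything from the thread): if each generation's relative improvement shrinks geometrically, the compounded capability converges to a finite ceiling instead of exploding, whereas a constant per-generation gain compounds exponentially.

```python
# Toy model of recursive self-improvement.
# Assumptions (illustrative only): capability starts at 1.0 (human baseline),
# generation k multiplies capability by (1 + gain_k), and with diminishing
# returns each gain is a fraction `decay` of the previous one.
def capability_after(generations, first_gain=0.20, decay=0.5):
    capability = 1.0
    gain = first_gain
    for _ in range(generations):
        capability *= 1.0 + gain
        gain *= decay  # each refinement is harder than the last
    return capability

# Diminishing returns: gains of 20%, 10%, 5%, ... compound to a bounded
# total, since the infinite product of (1 + 0.2 * 0.5**k) converges.
plateau = capability_after(1000)

# Constant returns (decay=1.0): 20% every generation is exponential takeoff.
takeoff = capability_after(20, decay=1.0)  # 1.2**20, roughly 38x

print(f"with diminishing returns: {plateau:.3f}x")
print(f"with constant returns:    {takeoff:.1f}x")
```

The contrast is the whole point of the comment above: whether self-improvement "lifts off" depends entirely on whether each iteration's gain stays constant or shrinks, and the shrinking case flattens out well short of 1000x.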
I do think it's possible to fight against diminishing returns and chip away towards/past human-level intelligence, but it'll be through concerted effort (longer training runs of improved architectures with more data on larger clusters of better GPUs) and not an overnight explosion just from one researcher somewhere letting an LLM modify its own code.
> can humanity trust the power hungry billionaire CEOs to understand the danger and choose a path for maximum safety
Those power-hungry billionaire CEOs who shall remain nameless, such as Altman and Musk, are fear-mongering about such a doomsday. The goal seems to be regulatory capture and diverting attention away from more realistic issues, like use for employee surveillance[0].