zlacker

[parent] [thread] 28 comments
1. pizzat+(OP)[view] [source] 2025-01-22 16:59:35
I am surprised at the negativity from HN. Their clear goal is to build superintelligence. Listen to any of the interviews with Altman, Demis Hassabis, or Dario Amodei (Anthropic) on the purpose of this. They discuss the roadmaps to unlimited energy, curing disease, farming innovations to feed billions, permanent solutions to climate change, and more.

Does no one on HN believe in this anymore? Isn't this tech startup community meant to be the tip of the spear? We'll find out by 2030 either way.

replies(12): >>scottL+A2 >>fallin+O2 >>akra+nB >>enrage+eN >>semi-e+pV >>stephe+511 >>timewi+V31 >>Aeolun+Y71 >>ramboj+6a1 >>Tiktaa+nh1 >>rachof+HP1 >>dinkum+j54
2. scottL+A2[view] [source] 2025-01-22 17:12:07
>>pizzat+(OP)
All of those things would put them out of business if realized and are just a PR smokescreen.

Have we not seen enough of these people to know their character? They're predators who, from all accounts, sacrifice every potentially meaningful personal relationship for money, long after they have more than most people could ever dream of. If we legalized gladiatorial blood sport and it became a billion-dollar business, they'd be doing that. If monkey torture porn was a billion-dollar business, they'd be doing that.

Whatever the promise of actual AI (and not just performative LLM garbage), if it is created they will lock the IP down so hard that most of the population won't be able to afford it. Rich people get Ozempic, poor people get body positivity.

replies(2): >>jstumm+762 >>dinkum+q54
3. fallin+O2[view] [source] 2025-01-22 17:13:21
>>pizzat+(OP)
This place seems to have been overwhelmed by bitterness and envy over the last 5 years or so.
replies(1): >>megous+2G
4. akra+nB[view] [source] 2025-01-22 20:35:13
>>pizzat+(OP)
Just my opinion/observation really, but I believe it's because people are implicitly entertaining the possibility that it is no longer about software. This announcement implicitly states that, long term, talent isn't the main advantage; instead it's hardware, compute, and most importantly the wealth and connections needed to access large sums of capital. AI will give capital and the wealthy elite more of an advantage over human intelligence/ingenuity, which is not typically what most hacker/tech forums are about.

For example, it isn't about what you can do tinkering in your home/garage anymore, or what algorithm you can crack on your own merits to create new use cases and possibilities - it's capital, relationships, hardware, and politics. A recent article that went around (and many others like it) argued that capital and wealth will matter more and make "talent" obsolete in the world of AI - the large figure in this announcement just adds weight to that hypothesis.

All this means the big get bigger. It isn't about startups, grinding, working hard, or being smarter, which means it isn't really meritocratic. This creates an uneven playing field, quite different from previous software technology phases, where access to the gains was more distributed/democratized and mostly open to the talented and hard-working (e.g. the risk-taking startup entrepreneur with coding skills and a love of tech).

In some ways it is kind of the opposite of the indie hacker stereotype, who ironically is probably one of the biggest losers in the new AI world. In the new world, what matters is wealth/ownership of capital, relationships, politics, land, resources, and other physical/social assets. In the new AI world, scammers, PR people, salespeople, politicians, and the ultra-wealthy with power thrive, and nepotism/connections are the main advantage. You don't just see this in AI, btw (e.g. recent meme coins seen as a better path to wealth than working, thanks to a weak link to a power figure), but AI, like any tech, amplifies the capability of people with power - especially if, by definition, the powerful no longer need to be smart or need other smart people to wield it, unlike past tech.

They needed smart people in the past; we may be approaching a world where the smart people, as a whole, make themselves redundant. I can understand why a place like this doesn't want that to succeed, even as the world's resources are being channeled to that end. Time will tell.

replies(1): >>gmd63+sI
◧◩
5. megous+2G[view] [source] [discussion] 2025-01-22 21:04:19
>>fallin+O2
Not envious of multi-billionaires' companies gathering capital, IP, knowledge, and infrastructure for huge-scale, modern-day private Stasi apparatuses. Just bitter.
◧◩
6. gmd63+sI[view] [source] [discussion] 2025-01-22 21:19:51
>>akra+nB
Exactly as you say. AI is imagined to be the wealthy nepotist's escape pod from an equal playing field and democratized access to information. Win-at-all-costs soulless predators, who find infinite sacrifice somehow righteous, love games like the ones that macro-scale AI creates.

The average person's utility from AI is marginal. But to a psychopath like Elon Musk who is interested in deceiving the internet about Twitter engagement or juicing his crypto scam, it's a necessary tool to create seas of fake personas.

7. enrage+eN[view] [source] 2025-01-22 21:55:00
>>pizzat+(OP)
>> Does no one on HN believe in this anymore? Isn't this tech startup community meant to be the tip of the spear? We'll find out by 2030 either way.

I joined in 2012, and have been reading since 2010 or so. The community has definitely changed since then, but the way I look at it is that it actually became more reasoned as the wide-eyed and naive teenagers/twenty-somethings of that era gained experience in life and work, learned how the world actually works, and perhaps even got burned a few times. As a result, today they approach this type of news with far more skepticism than their younger selves would have. You might argue that the pendulum has swung too far toward the cynical end of the spectrum, but I think that's subjective.

replies(1): >>holodu+k31
8. semi-e+pV[view] [source] 2025-01-22 23:00:46
>>pizzat+(OP)
> They discuss the roadmaps to unlimited energy, curing disease, farming innovations to feed billions, permanent solutions to climate change, and more.

Look at who is president, or who is in charge of the biggest companies today. It is extremely clear that intelligence is not a part of the reason why they are there. And with all their power and money, these people have essentially zero concern for any of the topics you listed.

There is absolutely no reason to believe that if artificial superintelligence is ever created, all of a sudden the capitalist structure of society will get thrown away. The AIs will be put to work enriching the megalomaniacs, just like many of the most intelligent humans are.

9. stephe+511[view] [source] 2025-01-22 23:43:51
>>pizzat+(OP)
I'm sure some do, but understand that what they're basically saying is "we will build an AI God, and it will save us from all our problems."

At that point, it's not technology, it's religion (or even borders on cult-like thinking).

replies(1): >>dyausp+n61
◧◩
10. holodu+k31[view] [source] [discussion] 2025-01-23 00:01:50
>>enrage+eN
I think (big assumption) most here are from that same period. Most are in their late 30s or 40s: kids, busy lives, etc. Not the young hacker mindset, but the responsible, maybe slightly stressed, person.
replies(1): >>Aeolun+g81
11. timewi+V31[view] [source] 2025-01-23 00:07:56
>>pizzat+(OP)
> Their clear goal is to build superintelligence

One time I bought a can of what I clearly thought was human food. Turns out it was just well dressed cat food.

> to unlimited energy, curing disease, farming innovations to feed billions,

Aw, they missed their favorite hobby horse: "the children." Then again, you might have to ask why even bother educating children if there are going to be "superintelligent" computers.

Anyways... all this stuff will then be free, right? Is someone going to "own" the superintelligent computer? That's an interesting question that gets entirely left out of our futurism fantasy.

◧◩
12. dyausp+n61[view] [source] [discussion] 2025-01-23 00:30:59
>>stephe+511
I’m willing to believe. It’s probably the closest we’ve come to actually having a real life god. I’m going to get pushback on this but I’ve used o1 and it’s pretty mind blowing to me. I would say something 10x as intelligent with sensors to perceive the world and some sort of continuously running self optimization algorithm would essentially be a viable artificial intelligence.
replies(2): >>andrep+b62 >>dinkum+M54
13. Aeolun+Y71[view] [source] 2025-01-23 00:45:01
>>pizzat+(OP)
> Does no one on HN believe in this anymore?

No.

I mean, I had some faith in these things 15 years ago, when I was young and naive, and my heroes were too. But I've seen nearly all those heroes turn to the dark side. There's only so much faith you can have.

◧◩◪
14. Aeolun+g81[view] [source] [discussion] 2025-01-23 00:46:46
>>holodu+k31
I feel called out. But yeah, that seems to be on point.
15. ramboj+6a1[view] [source] 2025-01-23 00:59:03
>>pizzat+(OP)
that's the propaganda talking to you.
16. Tiktaa+nh1[view] [source] 2025-01-23 01:59:47
>>pizzat+(OP)
What if the AI doesn't want to do any of that stuff?
replies(1): >>Ukv+e62
17. rachof+HP1[view] [source] 2025-01-23 08:24:29
>>pizzat+(OP)
Do you want a superintelligence ruling over all humanity until the stars burn out controlled by these people?

The lesson of everything that has happened in tech over the past 20 years is that what tech can do and what tech will do are miles apart. Yes, AGI could give everyone a free therapist to maximize their human well-being and guide us to the stars. Just like social media could have brought humanity closer together and been an unprecedented tool for communication, understanding, and democracy. How'd that work out?

At some point, optimism becomes willfully blinding yourself to the terrible danger humanity is in right now. Of course founders paint the rosy version of their product's future. That's how PR works. They're lying - maybe to themselves, and definitely to you.

◧◩
18. jstumm+762[view] [source] [discussion] 2025-01-23 11:23:25
>>scottL+A2
I continue to be amazed at how motivated some of us are to make such cruel, far-reaching and empty claims with regards to people of some popularity/notoriety.
replies(2): >>gilmor+si3 >>scottL+xM3
◧◩◪
19. andrep+b62[view] [source] [discussion] 2025-01-23 11:23:58
>>dyausp+n61
Poe's law
◧◩
20. Ukv+e62[view] [source] [discussion] 2025-01-23 11:24:40
>>Tiktaa+nh1
Humans choose its loss function, then continue to guide it with finetuning/RL/etc.
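
To make that concrete, here's a toy sketch (purely illustrative, a single-parameter "model", no real training stack): the system only ever optimizes whatever objective humans hand it, and swapping in a new objective redirects its behavior.

```python
# Toy illustration: humans pick the loss, and the "model" (here a
# single parameter) only ever minimizes what that loss penalizes.

def train(param, loss_grad, lr=0.1, steps=100):
    """Plain gradient descent on a human-chosen loss."""
    for _ in range(steps):
        param -= lr * loss_grad(param)
    return param

# Phase 1: initial objective chosen by humans (squared error to 3.0).
pretrain_grad = lambda p: 2 * (p - 3.0)
p = train(0.0, pretrain_grad)

# Phase 2: "finetuning" - humans swap in a new objective (target 5.0),
# and the model follows the new objective, not its old one.
finetune_grad = lambda p: 2 * (p - 5.0)
p = train(p, finetune_grad)

print(round(p, 3))  # ends near 5.0: it tracks whichever loss we chose last
```

Real finetuning/RLHF is vastly more complicated, but the control point is the same: the objective is always externally specified.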
replies(1): >>jbuhbj+QR6
◧◩◪
21. gilmor+si3[view] [source] [discussion] 2025-01-23 20:08:11
>>jstumm+762
Damn you're right, we should give Larry Ellison the benefit of the doubt.
replies(1): >>bdangu+nj3
◧◩◪◨
22. bdangu+nj3[view] [source] [discussion] 2025-01-23 20:15:31
>>gilmor+si3
not just larry - everyone who is 9-figure rich… no wonder they are better than all of us and hence demand that benefit… :)
◧◩◪
23. scottL+xM3[view] [source] [discussion] 2025-01-24 00:28:30
>>jstumm+762
Oh yes, Larry Ellison and Sam Altman are the real victims here!

I continue to be amazed at how desperate some of us are to live in Disney's Tomorrowland - so desperate that we worship non-technical guys with lots of money who simply tell us that's what they're building, despite all actions to the contrary, sometimes bald-faced statements to the contrary (though always dressed up in faux-optimistic tones), and the negative anecdotes of pretty much anyone who gets close to them.

A lot of us became engineers because we were inspired by media, NASA, and the pretty pictures in Popular Science. And it sucks to realize that most if not all of that stuff isn't going to happen in our lifetimes, if at all. But you know what guarantees it won't happen? Guys like Sam Altman and Larry Ellison at the helm, and blind faith that just because they have money and speak passionately, they somehow share your interests.

Or are you that guy who asks the car salesman for advice on which car he should buy? I could forgive that a little more, because the car salesman hasn't personally gone on the record about how he plans to use his business to fuck you.

24. dinkum+j54[view] [source] 2025-01-24 04:38:58
>>pizzat+(OP)
Unlimited energy? No, I don't believe in this. I thought people on HN generally accepted science, not nonsense. A "superintelligence" that would... what? Destroy the middle class, destroy the economy, cause riots and civil wars? If it's even possible. Sounds great.
◧◩
25. dinkum+q54[view] [source] [discussion] 2025-01-24 04:41:32
>>scottL+A2
That's the crazy thing about this "super AI" business: at some point no one would buy it, because no one could afford it, because no one has a job (spare me the UBI magic-money fantasy). I love the body positivity line. But if such a thing came to pass, I think something rather different would probably happen to the rich.
◧◩◪
26. dinkum+M54[view] [source] [discussion] 2025-01-24 04:46:12
>>dyausp+n61
I think we're going to see a lot of people with this kind of mental illness. It's really pretty sad to me.
replies(1): >>dyausp+pt6
◧◩◪◨
27. dyausp+pt6[view] [source] [discussion] 2025-01-25 05:41:10
>>dinkum+M54
What part of the above is irrational or illogical?
◧◩◪
28. jbuhbj+QR6[view] [source] [discussion] 2025-01-25 12:38:59
>>Ukv+e62
Once AGI is many times smarter than humans, that "guiding" evaporates as foolish, irrational thinking. There is no way around the fact that once AGI acquires 10, 100, 1000 times human intelligence, we are suddenly completely powerless to change anything anymore.

AGI can go wrong in innumerable ways, most of which we cannot even imagine now, because we are limited by our 1 times human intelligence.

The liftoff conditions literally have to be near perfect.

So the question is, can humanity trust the power hungry billionaire CEOs to understand the danger and choose a path for maximum safety? Looking at how it is going so far, I would say absolutely not.

replies(1): >>Ukv+Xy8
◧◩◪◨
29. Ukv+Xy8[view] [source] [discussion] 2025-01-26 04:56:17
>>jbuhbj+QR6
> [...] 1000 times human intelligence, we are suddenly completely powerless [...] The liftoff conditions literally have to be near perfect.

I don't consider models suddenly lifting off and acquiring 1000 times human intelligence to be a realistic outcome. To my understanding, that belief is usually based around the idea that if you have a model that can refine its own architecture, say by 20%, then the next iteration can use that increased capacity to refine even further, say an additional 20%, leading to exponential growth. But that ignores diminishing returns; after obvious inefficiencies and low-hanging fruit are taken care of, squeezing out even an extra 10% is likely beyond what the slightly-better model is capable of.

I do think it's possible to fight against diminishing returns and chip away towards/past human-level intelligence, but it'll be through concerted effort (longer training runs of improved architectures with more data on larger clusters of better GPUs) and not an overnight explosion just from one researcher somewhere letting an LLM modify its own code.
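
A rough numeric sketch of the contrast (toy numbers, not a model of any real system): a fixed 20% gain per self-improvement round compounds exponentially, while gains that halve each round (as the low-hanging fruit is exhausted) converge to a modest finite ceiling.

```python
# Compare two toy self-improvement curves over 50 rounds.

def compounding(capability=1.0, gain=0.20, rounds=50):
    """Fixed fractional gain each round: exponential growth."""
    for _ in range(rounds):
        capability *= 1 + gain
    return capability

def diminishing(capability=1.0, gain=0.20, decay=0.5, rounds=50):
    """Each round's achievable gain shrinks: growth plateaus."""
    for _ in range(rounds):
        capability *= 1 + gain
        gain *= decay  # each further improvement is harder to find
    return capability

print(compounding())  # explodes into the thousands (1.2**50)
print(diminishing())  # plateaus well under 2x its starting point
```

Whether real self-refinement looks more like the first curve or the second is exactly the crux of the disagreement; the diminishing-returns view says the second.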

> can humanity trust the power hungry billionaire CEOs to understand the danger and choose a path for maximum safety

Those power-hungry billionaire CEOs who shall remain nameless, such as Altman and Musk, are fear-mongering about exactly such a doomsday. The goal seems to be regulatory capture and diverting attention from more realistic issues, like use for employee surveillance[0].

[0]: https://www.bbc.co.uk/news/technology-55938494

[go to top]