zlacker

[parent] [thread] 33 comments
1. arisAl+(OP)[view] [source] 2025-01-22 08:57:52
This has cosmological significance if it leads to superintelligence
replies(3): >>iLoveO+e2 >>Cthulh+E6 >>comput+C7
2. iLoveO+e2[view] [source] 2025-01-22 09:18:29
>>arisAl+(OP)
Don't worry, it'll only lead to superstupidity.
replies(2): >>bluesc+d3 >>_heimd+4z
3. bluesc+d3[view] [source] [discussion] 2025-01-22 09:27:24
>>iLoveO+e2
And superplagiarism of human-created content
replies(1): >>Xenoph+c7
4. Cthulh+E6[view] [source] 2025-01-22 09:56:46
>>arisAl+(OP)
It won't unless there's another (r)evolution in the underlying technology / science / algorithms. At this point, scaling up just means bigger datasets or more iterations; it's more fine-tuning and improving the existing output than coming up with a next generation / superintelligence.
replies(3): >>iLoveO+47 >>Fillig+x9 >>miki12+dy
5. iLoveO+47[view] [source] [discussion] 2025-01-22 10:00:46
>>Cthulh+E6
> bigger datasets

Not even, they already ran out of data.

replies(1): >>nick__+Cl
6. Xenoph+c7[view] [source] [discussion] 2025-01-22 10:01:49
>>bluesc+d3
I'm sure this will age well.
7. comput+C7[view] [source] 2025-01-22 10:05:02
>>arisAl+(OP)
"this generation shall not pass"... to me that's about as credible as wanting to "preserve human consciousness" by going to Mars.

Setting the world on fire and disrupting societies gleefully, while basically building bunkers (figuratively more than literally) and consolidating surveillance and propaganda to ride out the cataclysm, that's what I'm seeing.

And the stories meant to sell people on continuing to put up with that aren't even good, IMO. Because the people who use the story to consolidate wealth and control are excited about it, we're somehow expected to be excited too, about the promise of a pair of socks made from barbed wire they gave us for Christmas. It's the narcissistic experience: "this is shit. this benefits you, not me. this hurts me."

One thing is sure: actual intelligence, however you define it, something able to reason and speak freely, is NOT what the people who fire engineers for correcting them want. This isn't about some oracle that just speaks "truth" for humanity to enjoy and benefit from.

8. Fillig+x9[view] [source] [discussion] 2025-01-22 10:24:10
>>Cthulh+E6
Okay, but let’s be pessimistic for a moment. What can we do if that revolution does happen, and they’re close to AGI?

I don’t believe the control problem is solved, but I’m not sure it would matter if it were.

replies(1): >>ForHac+3a
9. ForHac+3a[view] [source] [discussion] 2025-01-22 10:29:33
>>Fillig+x9
Being pessimistic, how come no human supergeniuses ever took over the world? Why didn't Leibniz make everyone else into his slaves?

I don't even understand what the proposed mechanism for "rogue AI enslaves humanity" is. It's scifi (and not hard scifi) as far as I can see.

replies(4): >>Philpa+tb >>z3phyr+kf >>Heatra+3h >>arisAl+jJ1
10. Philpa+tb[view] [source] [discussion] 2025-01-22 10:43:31
>>ForHac+3a
Once you have one AGI, you can scale it to many AGI as long as you have the necessary compute. An AGI never needs to take breaks, can work non-stop on a problem, has access to all of the world's information simultaneously, and can interact with any system it's connected to.

To put it simply, it could outcompete humanity on every metric that matters, especially given recent advancements in robotics.

replies(1): >>ForHac+oe
11. ForHac+oe[view] [source] [discussion] 2025-01-22 11:18:25
>>Philpa+tb
...so it can think really hard all the time and come up with lots of great, devious evil ideas?

Again, I wonder why no group of smart people with brilliant ideas has unilaterally imposed those ideas on the rest of humanity through sheer force of genius.

replies(3): >>Philpa+Df >>lupire+3x >>jprete+6B
12. z3phyr+kf[view] [source] [discussion] 2025-01-22 11:27:00
>>ForHac+3a
I consider many successful military leaders and politicians to be geniuses as well. In my book, Caesar is as much a genius as Newton!

Having said that, we do not need to understand the world to exploit it for ourselves. And what better way to understand and exploit the universe than science? It's an endearment.

13. Philpa+Df[view] [source] [discussion] 2025-01-22 11:29:57
>>ForHac+oe
An equivalent advance in autonomous robotics would solve the force projection issue, if that's what you're getting at.

I don't know if this will happen with any certainty, but the general idea of commoditising intelligence very much has the ability to tip the world order: every problem that can be tackled by throwing brainpower at it will be, and those advances will compound.

Also, the question you're posing did happen: it was called the Manhattan Project.

replies(2): >>redser+ov >>ForHac+Rw5
14. Heatra+3h[view] [source] [discussion] 2025-01-22 11:44:08
>>ForHac+3a
> Being pessimistic, how come no human supergeniuses ever took over the world? Why didn't Leibniz make everyone else into his slaves?

We already did. Look at the state of animals today vs. <1 mya: bovines grown in unprecedented numbers to live short lives before slaughter; wolves bred into an all-new animal, friendly and helpful to the dominant species; formerly apex predators, with claws, teeth, speed and strength, rendered extinct.

replies(1): >>adalac+jl
15. adalac+jl[view] [source] [discussion] 2025-01-22 12:19:12
>>Heatra+3h
Sometimes I wonder if we are going to be the unkillable plague that takes over the universe. Or maybe we will disappear in a blink. It's hard to know; we don't have any reference point except ourselves.
replies(1): >>lupire+Jw
16. nick__+Cl[view] [source] [discussion] 2025-01-22 12:21:08
>>iLoveO+47
I am sure that the M.I.C. has a ton of classified data that could be used to train a military AI.
17. redser+ov[view] [source] [discussion] 2025-01-22 13:29:57
>>Philpa+Df
And if this whole exercise turns out to be a flop and gets us absolutely nowhere closer to AGI?

“AGI” has proven to be today’s hot marketing stunt for when you need to raise another round of cash and your only viable product is optimism.

Flying cars were just around the corner in the 60s, too.

replies(2): >>anon84+wJ >>arisAl+uJ1
18. lupire+Jw[view] [source] [discussion] 2025-01-22 13:38:39
>>adalac+jl
Destroying human life on Earth (the only habitable place in the solar system) is far, far easier than reaching anything outside the solar system.
19. lupire+3x[view] [source] [discussion] 2025-01-22 13:40:09
>>ForHac+oe
Look at any corporation or government to understand how a large group of humans can be driven to do specific things none of them individually want.
20. miki12+dy[view] [source] [discussion] 2025-01-22 13:46:08
>>Cthulh+E6
> It won't unless there's another (r)evolution in the underlying technology / science

I think reinforcement learning with little to no human feedback, o1 / R1 style, might be that revolution.

replies(2): >>nkings+k21 >>tallda+Xk2
21. _heimd+4z[view] [source] [discussion] 2025-01-22 13:52:20
>>iLoveO+e2
Is that the prequel to Idiocracy?
22. jprete+6B[view] [source] [discussion] 2025-01-22 14:04:08
>>ForHac+oe
Quite a few have succeeded in conquering large fractions of the Earth's population: Napoleon, Hitler, Genghis Khan, the Roman emperors, Alexander the Great, Mao Zedong. America and Britain as systems did so for long periods of time.

All of these entities would have been enormously more powerful with access to an AGI's immortality, sleeplessness, and ability to clone itself.

replies(2): >>Sketch+EG >>anon84+SJ
23. Sketch+EG[view] [source] [discussion] 2025-01-22 14:37:01
>>jprete+6B
I can see what you're trying to say, but I cannot for the life of me figure out how an AGI would have helped Alexander the Great.
replies(1): >>jprete+dL
24. anon84+wJ[view] [source] [discussion] 2025-01-22 14:54:52
>>redser+ov
This thread started from a deliberately pessimistic hypothetical of what happens if AGI actually manifests, so your comment is misplaced.
25. anon84+SJ[view] [source] [discussion] 2025-01-22 14:56:54
>>jprete+6B
And of course the more society is wired up and controlled by computer systems, the more the AGI could directly manage it.
26. jprete+dL[view] [source] [discussion] 2025-01-22 15:05:13
>>Sketch+EG
Alexander the Great made his conquests by building a really good reputation for war, then leveraging it to get tribute agreements while leaving the local governments intact. This is a good way to do it when communication lines are slow and unreliable, because the emperor just needs to check tribute once a year to enforce the agreements, but it's weak control.

If Alexander could have left perfectly aligned copies of himself in every city he passed, he could have gotten much more control and authority, and still avoided a fight by agreeing to maintain the local power structure with himself as the new head of state.

replies(1): >>Sketch+qM
27. Sketch+qM[view] [source] [discussion] 2025-01-22 15:11:00
>>jprete+dL
Oh, you're assuming an entire networking infrastructure as well. That makes way more sense, but then the miracle isn't AGI: without networking they'd lose alignment over time. Honestly, I feel like it would devolve into a patchwork of different kingdoms, each run by an Alexander figurehead... where have I seen this before?

The problem you're proposing could be solved with a high-quality cellular network.

28. nkings+k21[view] [source] [discussion] 2025-01-22 16:37:46
>>miki12+dy
There is lots of human feedback. This isn’t a game with an end state that it can easily play against itself. It needs problems with known solutions, or realistic simulations. This is why people wonder if our own universe is a simulation for training an ASI.
29. arisAl+jJ1[view] [source] [discussion] 2025-01-22 20:44:00
>>ForHac+3a
This is a profoundly and disturbingly bad argument.

1) Leibniz wasn't superhuman. 2) Leibniz couldn't work 24/7. 3) He could not increase the speed of his own hardware (his body). 4) He could not spawn a trillion copies of himself to work 24/7.

Like, how much time did you think before writing this?

replies(1): >>ForHac+Lw5
30. arisAl+uJ1[view] [source] [discussion] 2025-01-22 20:45:04
>>redser+ov
You really haven't used any LLM seriously, eh?
31. tallda+Xk2[view] [source] [discussion] 2025-01-23 01:32:12
>>miki12+dy
I think gluing wings to a pig will make it fly. Show me examples or stop the conjecture.
32. ForHac+Lw5[view] [source] [discussion] 2025-01-24 10:15:52
>>arisAl+jJ1
Again, my reaction is... so what?

A trillion hyperintelligent demons might be cogitating right now on the head of a pin. You can't prove they aren't thinking up all sorts of genius evil schemes. My point is that "intelligence" has never been a sufficient - or even necessary - component of imposing one's will on humans.

I feel like HN/EA/"Grey Tribe" people fail to see this because they so worship intellect. I'm much more likely to fall victim to a big dumb man than smart computers.

replies(1): >>arisAl+1Z5
33. ForHac+Rw5[view] [source] [discussion] 2025-01-24 10:18:27
>>Philpa+Df
So don't plug the smart evil computer into the strong robots? Great, AI apocalypse averted.

The Manhattan Project would be a cute example if the Los Alamos scientists had gone rogue and declared themselves emperors of mankind, but no, in fact the people in charge remained the people in charge - mostly not supergeniuses.

34. arisAl+1Z5[view] [source] [discussion] 2025-01-24 15:02:09
>>ForHac+Lw5
Huh, what? This is a whole new level of flat-earther thinking. You actually believe that apex predators are not the most intelligent? Like, humans became the apex because of something other than intelligence? AI and COVID showed humanity what levels of wacky stuff people believe. I am not trying to convince you; thank you for showing me this perspective :)