zlacker

[parent] [thread] 11 comments
1. skepti+(OP)[view] [source] 2023-07-05 17:55:07
Why are they starting to sound more and more cult-like? This is an incredibly unscientific blog post. I get that they are a private company now, but why even release something like this without further details?
replies(3): >>batman+69 >>seydor+1i >>cubefo+lY
2. batman+69[view] [source] 2023-07-05 18:26:25
>>skepti+(OP)
Because the "AGI" pursuit is at least as much a faith movement as it is a rational engineering program. If you examine it more deeply, the faith object isn't even the conjectured inevitable AGI, it's exponential growth curves. (That is of course true for startup culture more generally, from which the current AI boom is an outgrowth.) For my money, The Singularity is Near still counts as the ur-text that the true believers will never let go, even though Kurzweil was summarizing earlier belief trends.

It's just a pity that the creepy doomer weirdos so thoroughly squatted the term "rationalist." It would be interesting to see the perspective on these people 100 years hence, or even 50. I don't doubt there will still be remnant believers who end up moderating and sanitizing their beliefs, much like the Seventh Day Adventists or the Mormons.

replies(2): >>zzzzzz+Iq >>dontpa+Mb1
3. seydor+1i[view] [source] 2023-07-05 18:59:19
>>skepti+(OP)
they have been doing that the entire year
4. zzzzzz+Iq[view] [source] [discussion] 2023-07-05 19:36:20
>>batman+69
You don't need to believe in exponential growth per se; all you really need to believe is that humans aren't that capable relative to what could in principle be built. It's entirely possible that logistic growth is more than enough to get us very far past human ability once the right paradigm is discovered.
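A toy curve makes the point concrete (a minimal sketch; the ceiling, rate, and units are all made up for illustration, not estimates of anything): an S-curve saturates, but nothing pins its plateau at the human baseline.

```python
import math

def logistic(t, ceiling=100.0, rate=1.0, midpoint=5.0):
    """S-curve that flattens out at `ceiling` instead of growing forever."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

human_level = 1.0  # arbitrary units; the ceiling above is 100x this

for t in range(0, 11, 2):
    print(f"t={t:2d}  capability={logistic(t):7.3f}")
# Growth slows as the curve nears its ceiling, yet that ceiling can sit
# arbitrarily far above human_level -- saturation != stopping at humans.
```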
replies(1): >>trasht+n01
5. cubefo+lY[view] [source] 2023-07-05 22:22:31
>>skepti+(OP)
It's the other way round: Just accusing people of being in a cult is unscientific. There are plenty of arguments that AI x-risk is real.

E.g. by Yoshua Bengio: https://yoshuabengio.org/2023/06/24/faq-on-catastrophic-ai-r...

6. trasht+n01[view] [source] [discussion] 2023-07-05 22:33:36
>>zzzzzz+Iq
Exactly. All that is needed for AGI to eventually be developed is that humans do NOT have some magical or divine essence that sets us apart from the material world.

Now the _timeline_ of AGI could be anything from a few years to millennia, at least if evaluated 40 years ago. Now, though, it really doesn't seem very distant.

replies(1): >>akomtu+Ca2
7. dontpa+Mb1[view] [source] [discussion] 2023-07-05 23:43:16
>>batman+69
It absolutely has become a new faith. A lot of the cryptocurrency faith healers moved into the space as that grift began to collapse, shifting from copying the prosperity gospel to apocalyptic "the end is nigh, repent, for the second coming of Christ is at hand" style preaching. The LLMs these people envision are not about intelligence. They're about creating a God you can pray to that answers back, wrapped in a veil of scientism. It's a slightly more advanced version of 4chan users worshiping Inglip.
8. akomtu+Ca2[view] [source] [discussion] 2023-07-06 07:48:46
>>trasht+n01
That's going to be the key principle of the new religion invented by AI: that humans are just like machines, and AI is the supreme machine.
replies(1): >>trasht+ud2
9. trasht+ud2[view] [source] [discussion] 2023-07-06 08:13:44
>>akomtu+Ca2
Well, assuming superintelligence emerges AND we don't find any evidence of anything supernatural inside human brains, what does "machine" even mean at that point?

Anyway, when/if AIs start to create the narrative to the extent that they can author our religion, they already have full control. And if some AI decides to become a God with us as worshippers, at least we stay around a bit longer.

Probably a better outcome than if it decides that it has better uses for the atoms in our bodies.

In fact, this may even be a solution to the alignment problem: an all-powerful but relatively harmless (non-mutating, non-evolving, non-reproducing) single AI that creates and enforces a religion that prevents us from creating dangerous AIs, weapons of mass destruction, or gray goo, or from destroying the planet, while promoting pro-social behaviour and otherwise leaving us free to do mostly what we want to do.

replies(2): >>akomtu+a64 >>ben_w+jl4
10. akomtu+a64[view] [source] [discussion] 2023-07-06 17:53:54
>>trasht+ud2
A distinct property of a machine is determinism. Machines are made of electrons that obey the wave equation: it states that all electrons evolve as one. However, we skillfully limit this waviness and squash electrons into super-deterministic 0/1 transistors. We need this because our computing paradigm revolves around determinism. A more general theory that works with probabilities and electrons as they are hasn't been developed yet. And here lies the danger: if a super-deterministic AI becomes the ruler of our society, it will prevent further growth. The same phenomenon on the individual level is called arrested development: the individual over-develops a minor skill, becomes obsessed with it, and it blinds him to exploring anything else.

A still-possible future is one where science makes a breakthrough in quantum computing and finds a way for humans to steer AI with their minds: it would be the Neuralink in reverse. This would force science to research the true nature of that connection, which would help avert the AI doom.

replies(1): >>trasht+Ya5
11. ben_w+jl4[view] [source] [discussion] 2023-07-06 18:54:45
>>trasht+ud2
Rings a bell: "Thou Shalt Not Travel Into Thine Own Past Lightcone". Unfortunately the search results are now overwhelmed by "NovelAI" and actual physics, so I can't find the name of the story I'm thinking of (and have yet to actually read).
12. trasht+Ya5[view] [source] [discussion] 2023-07-06 22:35:30
>>akomtu+a64
I'm not aware of any evidence that human brains involve any quantum computing. In fact, I'm pretty sure there's enough noise in the brain that almost instant wavefunction collapse/decoherence is guaranteed.

Anyway, given the combination of noisy data entering from the outside world and the chaotic mathematics in the equations of most computer systems, I don't see any risk that AIs would get stuck in an infinitely repeating pattern.
