zlacker

[parent] [thread] 8 comments
1. ilaksh+(OP)[view] [source] 2023-07-05 21:39:26
I just think it's much easier to convince people that existing types of AIs will get somewhat smarter and significantly faster. And that's dangerous enough.

My own belief is that regardless of what we do in terms of the most immediate dangers, within one or two centuries (maximum) we will enter the posthuman era where digital intelligent life has taken control of the planet. I don't mean "posthuman" as in all of the humans have been killed (necessarily), just that what humans 1.0 do won't be very important or interesting relative to what the superintelligent AIs are doing.

I don't think there is anything that prevents people from giving AI all of the characteristics of animals (such as humans). I think it's foolish, but researchers seem determined to do it.

But this is fairly speculative and much harder to convince people of.

replies(1): >>c_cran+m
2. c_cran+m[view] [source] 2023-07-05 21:41:35
>>ilaksh+(OP)
If the value of superintelligence is to lead to an Age of Em scenario where AIs (or Ems) do most of the intellectual labor, the reality is still that they would be doing this labor in service of humans. I could see a scenario where it is done in service of the AIs instead, but it would look nothing like the existential risk stuff bandied about by these weenies.
replies(2): >>Footke+MQ >>flagra+GS
◧◩
3. Footke+MQ[view] [source] [discussion] 2023-07-06 03:34:25
>>c_cran+m
There is no example in our knowledge of any lifeform prioritizing (writ large) the well-being of a different lifeform over its own.
replies(1): >>c_cran+oP1
◧◩
4. flagra+GS[view] [source] [discussion] 2023-07-06 03:48:08
>>c_cran+m
How do you jump to this? What is it that would inherently lead an intelligent species dramatically smarter than us to stay focused on servicing us?

We humans sure didn't do this. We're genetically extremely similar to other primates and yet we destroy their habitats, throw them in zoos, and use them for lab experiments.

replies(1): >>c_cran+GO1
◧◩◪
5. c_cran+GO1[view] [source] [discussion] 2023-07-06 12:03:36
>>flagra+GS
Currently, LLMs seem to prioritize their current goal, so if the goal is solving math puzzles or genetic problems, they would probably keep doing that too.
replies(1): >>flagra+Ss3
◧◩◪
6. c_cran+oP1[view] [source] [discussion] 2023-07-06 12:09:07
>>Footke+MQ
Why call AIs a life form? They aren't like cellular life.
replies(1): >>ilaksh+vQ2
◧◩◪◨
7. ilaksh+vQ2[view] [source] [discussion] 2023-07-06 16:28:28
>>c_cran+oP1
I think the assumption they were making was that rather than an LLM this was a type of AI that has animal-like characteristics. Which sounds fanciful but at least at a functional level you could get some main aspects just by removing guardrails from a large multimodal model and instructing it to work on its own goals, self preservation, etc. And researchers are working hard to create more lifelike systems that wouldn't necessarily be very similar to LLMs.
replies(1): >>c_cran+RU2
◧◩◪◨⬒
8. c_cran+RU2[view] [source] [discussion] 2023-07-06 16:42:22
>>ilaksh+vQ2
The animal like systems might be interesting to observe, but it doesn't sound like they would be useful for doing much work. I am not sure where the reliance on them would come in.
◧◩◪◨
9. flagra+Ss3[view] [source] [discussion] 2023-07-06 18:47:06
>>c_cran+GO1
I'd love to be able to see more about how the main LLMs are really trained and limited with regards to their goals and scoring algorithms.

It seems reasonable that they wouldn't deviate, but that depends on how specifically and wholly the original goals were defined. We'd basically be attempting to outwit the LLMs, I'm not sure if that's realistic or not.

[go to top]