zlacker

[return to "Introducing Superalignment"]
1. ilaksh+nk[view] [source] 2023-07-05 18:14:37
>>tim_sw+(OP)
You have to give them credit for putting their money where their mouth is here.

But it's also easy to parody this. I am just imagining Ilya and Jan coming out on stage wearing red capes.

I think George Hotz made sense when he pointed out that the best defense will be having the technology available to everyone rather than to a small group. We can at least try to create a collective "digital immune system", where our majority of aligned agents keeps unaligned ones in check.

But I also believe that there isn't any really effective mitigation against superintelligence superseding human decision-making aside from just not deploying it. And it doesn't need to be alive or anything to be dangerous. All it takes is for a large amount of decision-making for critical systems to be handed over to hyperspeed AI; that creates a brittle situation where things like computer viruses become existential risks. It's similar to the danger of nuclear weapons.

Even if you just make GPT-4, say, 33% smarter and 50 or 100 times faster and more efficient, that can lead to control of industrial and military assets being handed over to these AI agents. Because the agents are so much faster, humans cannot possibly compete, and if you interrupt them to give them new instructions, your competitor's AIs race ahead by the equivalent of days or weeks of work. This, again, is a precarious situation to be in.

There is huge promise and benefit from making the systems faster, smarter, and more efficient, but in the next few years we may be walking a fine line. We should agree to place some limitation on the performance level of AI hardware that we will design and manufacture.

◧◩
2. Jimthe+Yk[view] [source] 2023-07-05 18:16:28
>>ilaksh+nk
"Even if you just make GPT-4 say 33% smarter and 50 or 100 times faster and more efficient, that can lead to control of industrial and military assets being handed over to these AI agents."

I call BS on this... it's an LLM...

◧◩◪
3. chaxor+ip[view] [source] 2023-07-05 18:33:02
>>Jimthe+Yk
It's important to recognize that the model is fully capable of operating in open world environments, with visual stimuli and motor output, to achieve high-level tasks. This has been demonstrated in proof-of-concept systems several times now, such as Voyager. So while there are certainly important details to work out, many of them are the annoyances that we devs deal with all the time (how to connect the various parts of a system properly, etc.); the fundamental expressive capabilities of these models are not that limited. They are certainly limited in some sense (as seen in the several papers applying category-theoretic arguments to transformers), but for many engineering applications in the world, these models are very capable and useful.

Guarantees of correctness and safety are obviously of huge concern, hence the main article. But it's absolutely not unreasonable to see these models enabling humanoid robots capable of various day-to-day activities and work.
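For a concrete (if toy) picture of what "operating in an open world" means here: these systems are usually wired up as an observe -> plan -> act loop around the model. Below is a minimal sketch in Python; Environment, llm_plan, and the rest are hypothetical stand-ins I made up for illustration, not Voyager's actual API, and the LLM call is stubbed out with a rule.

    # Sketch of an open-world agent loop (observe -> plan -> act).
    # Every name here (Environment, llm_plan, ...) is a hypothetical
    # stand-in for illustration, not any real system's API.
    from dataclasses import dataclass, field

    @dataclass
    class Environment:
        # Toy world: real systems return camera frames and accept motor commands.
        state: dict = field(default_factory=lambda: {"position": 0, "goal": 3})

        def observe(self) -> dict:       # the "visual stimuli" side
            return dict(self.state)

        def execute(self, action: str):  # the "motor output" side
            if action == "move_forward":
                self.state["position"] += 1

    def llm_plan(observation: dict, task: str) -> str:
        # Stand-in for an LLM call: a real version would serialize the
        # observation into a prompt and parse the reply into an action.
        if observation["position"] < observation["goal"]:
            return "move_forward"
        return "done"

    def run_agent(task: str, max_steps: int = 10):
        env = Environment()
        for step in range(max_steps):
            action = llm_plan(env.observe(), task)
            if action == "done":
                print(f"'{task}' complete after {step} steps")
                return
            env.execute(action)

    run_agent("reach the goal")

The point is that all the open-world machinery lives in observe/execute; the model only has to map observations to high-level actions, which is exactly the part the Voyager-style demos show working.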

◧◩◪◨
4. jgalt2+Ot1[view] [source] 2023-07-06 00:07:20
>>chaxor+ip
> It's important to recognize that the model is fully capable of operating in open world environments

How so, when they cannot drive a car?

◧◩◪◨⬒
5. chaxor+3W9[view] [source] 2023-07-08 05:24:34
>>jgalt2+Ot1
What evidence do you have for the assertion that they 'cannot drive a car'?