zlacker

[parent] [thread] 6 comments
1. nopins+(OP)[view] [source] 2023-11-22 07:53:41
Both sides of the rift in fact care a great deal about AI Safety. Sam himself helped draft the OpenAI charter and structure its governance, both of which focus on AI Safety and benefits to humanity. The disagreement is mainly over which approach they deem best:

* Sam and Greg appear to believe OpenAI should move toward AGI as fast as possible, because the longer they wait, the larger the GPU overhang grows and the more likely powerful AGI systems are to proliferate. Why? With more computational power at one's disposal, it's easier to find an algorithm, even a suboptimal one, to train an AGI.

As a glimpse of how an AI can be harmful, this paper explores how LLMs could be used to aid in Large-Scale Biological Attacks: https://www.rand.org/pubs/research_reports/RRA2977-1.html

What if dozens of other groups were armed with the means to carry out an attack like this one? https://en.wikipedia.org/wiki/Tokyo_subway_sarin_attack

We know there are quite a few malicious groups who would use any means necessary to destroy another group, even at serious cost to themselves. So the widespread availability of unmonitored AGI would be quite troublesome.

* Helen and Ilya might believe it's better to slow down AGI development until we first find technical means to deeply align an AGI with humanity. This July, OpenAI started the Superalignment team with Ilya as a co-lead:

https://openai.com/blog/introducing-superalignment

But no one anywhere has found a good technique to ensure alignment yet, and it appears OpenAI's newest internal model represents a significant capability leap, which could have led Ilya to make the decision he did. (Sam revealed during the APEC Summit that he had observed the advance just a couple of weeks earlier and that it was only the fourth time he had seen that kind of leap.)

replies(3): >>concor+A3 >>gorbyp+a7 >>zeroha+BD2
2. concor+A3[view] [source] 2023-11-22 08:20:24
>>nopins+(OP)
So Sam wants to make AGI without working to be sure it doesn't have goals it places above the preservation of human values?!

I can't believe that

replies(1): >>nopins+C4
3. nopins+C4[view] [source] [discussion] 2023-11-22 08:29:52
>>concor+A3
No, I didn't say that. They formed the Superalignment team, with Ilya as a co-lead (and with Sam's approval), for exactly that purpose.

https://openai.com/blog/introducing-superalignment

I presume the current alignment approach is sufficient for the AI they make available to others, and in any event, GPT-n remains within OpenAI's control.

4. gorbyp+a7[view] [source] 2023-11-22 08:47:43
>>nopins+(OP)
Honest question, but in your example above of Sam and Greg racing toward AGI as fast as possible in order to head off proliferation, what's the end goal once they get there? Short of capturing the entire world's economy with an ASI, thus preventing anyone else from developing one, I don't see how this works. Just because OpenAI (or whoever) wins the initial race, it doesn't seem obvious to me that all development on other AGIs stops.
replies(2): >>nopins+uc >>effica+lF
5. nopins+uc[view] [source] [discussion] 2023-11-22 09:33:50
>>gorbyp+a7
I do not know exactly what they plan to do. But here's my thought...

Using a near-AGI to help align an ASI, then using that ASI to help prevent the development of unaligned AGI/ASI, could be a means to a safer world.

6. effica+lF[view] [source] [discussion] 2023-11-22 13:27:18
>>gorbyp+a7
Part of the fanaticism here is that the first one to get an AGI wins, because they can use its powerful intelligence to overcome every competitor and shut them down. They're living in their own sci-fi novel.
7. zeroha+BD2[view] [source] 2023-11-22 23:00:00
>>nopins+(OP)
> Both sides of the rift in fact care a great deal about AI Safety.

I disagree. Yes, Sam may have cared when OpenAI was founded (unless it was just a ploy), but it's certainly clear now that the big companies are in a race to the top and that safety and guardrails are mostly irrelevant.

The primary reason the Anthropic team left OpenAI was safety concerns.
