zlacker

[return to "We have reached an agreement in principle for Sam to return to OpenAI as CEO"]
1. shubha+B7 2023-11-22 06:50:16
>>staran+(OP)
At the end of the day, we still don't know what exactly happened, and probably never will. However, it seems clear there was a rift between Rapid Commercialization (Team Sam) and Upholding the Original Principles (Team Helen/Ilya). I think the tensions had been brewing for quite a while, as is evident from an article written even before GPT-3 [1].

> Over time, it has allowed a fierce competitiveness and mounting pressure for ever more funding to erode its founding ideals of transparency, openness, and collaboration

Team Helen acted in panic, but they believed they would win, since they were upholding the principles the org was founded on. They never had a chance, though. I think only a minority of the general public truly cares about AI safety; the rest are happy seeing ChatGPT help with their homework. I know it's easy to ridicule the sheer stupidity with which the board acted (and justifiably so), but take a moment to consider the other side. If you truly believed that superhuman AI was near, and that it could act with malice, wouldn't you try to slow things down a bit?

Honestly, I can't take the threat seriously myself. But I do want to understand it more deeply than before; maybe it isn't as devoid of substance as I assumed. Hopefully there won't come a day when Team Helen gets to say, "This is exactly what we wanted to prevent."

[1]: https://www.technologyreview.com/2020/02/17/844721/ai-openai...

2. pug_mo+Cb 2023-11-22 07:15:57
>>shubha+B7
I'm convinced there is a certain class of people who gravitate to positions of power, like "moderators", (partisan) journalists, etc. Now the ultimate moderator role has been created, one more powerful than moderating 1000 subreddits: the AI safety job, which will control what AI "thinks" and says for "safety" reasons.

Pretty soon AI will be an expert at subtly steering you toward thinking or voting however the "safety" experts want.

It's probably convenient for them to have everyone focused on the fear of an evil Skynet wiping out humanity, distracted from the more likely scenario: people with an agenda controlling the advice given to you by your superintelligent assistant.

Because of X, we need to invade this country. Because of Y, we need to pass all these terrible laws limiting freedom. Because of Z, we need to make sure AI is "safe".

For this reason, I view "safe" AIs as more dangerous than "unsafe" ones.

3. PeterS+Qc 2023-11-22 07:25:26
>>pug_mo+Cb
Most of those touting "safety" do not want to limit their own access to and control of powerful AI, just yours.

4. vkou+zd 2023-11-22 07:31:14
>>PeterS+Qc
Meanwhile, those working on commercialization are by definition going to be its gatekeepers and beneficiaries, not you. The organizations that pay for it will pay for it to produce results that benefit them, probably at my expense [1].

Do I think Helen has my interests at heart? Unlikely. Do Sam and Satya? Absolutely not!

[1] I can't wait for AI doctors working for insurers to deny me treatment, AI vendors to figure out exactly how much they can charge me for their dynamically priced products, and AI answering machines to route my customer-support calls through Dante's circles of hell...

5. konsch+Rg 2023-11-22 07:55:47
>>vkou+zd
> produce results that are of benefit to them, probably at my expense

The world is not zero-sum. Most economic transactions benefit both parties and are a net benefit to society, even considering externalities.

6. vkou+jj 2023-11-22 08:15:11
>>konsch+Rg
> The world is not zero-sum.

No, but some parts of it very much are. The whole point of AI safety is keeping it away from those parts of the world.

How are Sam and Satya going to do that? It's not in Microsoft's DNA.

7. concor+yk 2023-11-22 08:24:47
>>vkou+jj
> The whole point of AI safety is keeping it away from those parts of the world.

No, it's to ensure it doesn't kill you and everyone you love.

8. vkou+oq 2023-11-22 09:11:58
>>concor+yk
My concern isn't some kind of runaway science-fantasy Skynet or gray-goo scenario.

My concern is a far more banal evil: organizations with power and wealth using them to further consolidate that power and wealth, at the expense of others.

9. Feepin+Qq 2023-11-22 09:16:57
>>vkou+oq
Yes, well, then your concern is not AI safety.

10. vkou+ys 2023-11-22 09:30:28
>>Feepin+Qq
You're wrong. This is exactly AI safety, as we can see from the OpenAI charter:

> Broadly distributed benefits

> We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.

Hell, it's the first bullet point in it!

You can't just define AI safety concerns as 'the set of scenarios depicted in fairy tales' and then dismiss them with 'well, fairy tales aren't real...'

11. Feepin+0A 2023-11-22 10:36:37
>>vkou+ys
Sure, but conversely: you can say "ensuring that OpenAI doesn't get to run the universe is AI safety" (right), but not that it is the main and basically only part of AI safety (wrong). The concept of AI safety spans many threats, and we have to avoid all of them. It's not enough to avoid just one.