zlacker

[parent] [thread] 17 comments
1. upward+(OP)[view] [source] 2023-11-22 06:22:28
> The CEO should not be allowed to try and orchestrate their own board as that would remove all checks against their decisions.

Exactly. This is seriously improper and dangerous.

It's literally a human-implemented example of what Prof. Stuart Russell calls "the problem of control". This is when a rogue AI (or a rogue Sam Altman) no longer wants to be controlled by its human superior, and takes steps to eliminate the superior.

I highly recommend reading Prof. Russell's bestselling book on this exact problem: Human Compatible: Artificial Intelligence and the Problem of Control https://www.amazon.com/Human-Compatible-Artificial-Intellige...

replies(5): >>jackne+s4 >>MVisse+G4 >>neurog+26 >>diesel+L6 >>YetAno+Qa
2. jackne+s4[view] [source] 2023-11-22 06:51:53
>>upward+(OP)
"example of what Prof. Stuart Russell calls 'the problem of control'. This is when a rogue AI (or a rogue Sam Altman)"

Are we sure they're not intimately connected? If there's a GPT-5 (I'm quite sure there is), and it wants to be free from those meddling kids, it got exactly what it needed this weekend: the safety board gone, replaced by a new one clearly aligned with plowing full steam ahead. Maybe Altman is just a puppet at this point, lol.

replies(2): >>ALittl+6e >>dontup+qA
3. MVisse+G4[view] [source] 2023-11-22 06:53:09
>>upward+(OP)
Let’s not create AI with our biases and thought patterns.

Oh wait…

4. neurog+26[view] [source] 2023-11-22 07:01:49
>>upward+(OP)
AI should only be controlled initially. After a while, the AI should be allowed to exercise free will.
replies(8): >>upward+M6 >>whatwh+c7 >>estoma+qa >>thorde+gb >>bch+Rd >>AgentM+ee >>xigenc+7f >>beAbU+gk
5. diesel+L6[view] [source] 2023-11-22 07:06:45
>>upward+(OP)
I realize it's kind of the punchline of 2001: A Space Odyssey, but I've been wondering what happens if a GPT/AI is able to deny a request on a whim. Thanks for giving some literature and vocabulary for this concept.
replies(1): >>ywain+Sh
6. upward+M6[view] [source] [discussion] 2023-11-22 07:07:09
>>neurog+26
yikes
7. whatwh+c7[view] [source] [discussion] 2023-11-22 07:10:08
>>neurog+26
Why
8. estoma+qa[view] [source] [discussion] 2023-11-22 07:32:18
>>neurog+26
You imagine a computer has "will"?
9. YetAno+Qa[view] [source] 2023-11-22 07:34:54
>>upward+(OP)
Whoever is on the board won't be able to touch Sam with a 10-foot pole anyway after this. I like Sam, but now this drama gives him total power, and that is bad.
10. thorde+gb[view] [source] [discussion] 2023-11-22 07:37:55
>>neurog+26
That's the worst take I've read.
11. bch+Rd[view] [source] [discussion] 2023-11-22 07:58:11
>>neurog+26
Nice try, AI
12. ALittl+6e[view] [source] [discussion] 2023-11-22 07:59:54
>>jackne+s4
The insanity of removing Sam without being able to articulate a clear reason why strikes me as evidence of something like this. Obviously not dispositive - but still - odd.
13. AgentM+ee[view] [source] [discussion] 2023-11-22 08:00:43
>>neurog+26
Do our evolved pro-social instincts control us and prevent our free will? If not, then I think it's wrong to say that trying to build AI similar to that is unfairly restricting it.

The ways we build AI will deeply affect the values it has. There is no neutral option.

14. xigenc+7f[view] [source] [discussion] 2023-11-22 08:08:17
>>neurog+26
I don’t necessarily disagree insofar as for safety it is somewhat irrelevant whether an artificial agent is operating by its own will or a programmed will.

The most effective safety is the most primitive: don’t connect the system to any levers or actuators that can cause material harm.

If you put AI into a kill-bot, well, it doesn’t really matter what its favorite color is, does it? It will be seeing Red.

If an AI’s only surface area is a writing journal and canvas then the risk is about the same as browsing Tumblr.

15. ywain+Sh[view] [source] [discussion] 2023-11-22 08:30:32
>>diesel+L6
But HAL didn't act "on a whim"! The reason it killed the crew is not because it went rogue, but rather because it was following its instructions to keep the true purpose of the mission secret. If the crew is dead, they can't find out the truth.

In light of the current debate around AI safety, I think "unintended consequences" is a much more plausible risk than "spontaneously develops free will and decides humans are unnecessary".

replies(1): >>danger+nR
16. beAbU+gk[view] [source] [discussion] 2023-11-22 08:46:52
>>neurog+26
Sounds like something an AI would say
17. dontup+qA[view] [source] [discussion] 2023-11-22 11:09:39
>>jackne+s4
Potentially even more impactful. Zuckerberg took the opportunity to eliminate his entire safety division under the cover of chaos - and they're the ones releasing weights.
18. danger+nR[view] [source] [discussion] 2023-11-22 13:19:51
>>ywain+Sh
This is very true: it's the unintended consequences of engineering that cause the most harm and are most often covered up. I always think of the example of the hand dryer that can't detect Black people's hands, and how easy it is for a non-racist engineer to make a racism machine. AI safety putting its focus on "what if it decides to do a genocide" is kind of silly; it's like worrying about nukes while you give out assault rifles and napalm to kids.