zlacker

[parent] [thread] 12 comments
1. tfehri+(OP)[view] [source] 2023-11-18 23:13:15
My main concern is that a new Altman-led AI company would be less safety-focused than OpenAI. I think him returning to OpenAI would be better for AI safety, hard to say whether it would be better for AI progress though.
replies(3): >>apalme+v >>noober+L >>silenc+p2
2. apalme+v[view] [source] 2023-11-18 23:15:25
>>tfehri+(OP)
This is a valid thought process, BUT Altman is not going to come back without the other faction being neutered. It just would not make any sense.
replies(1): >>coffee+P6
3. noober+L[view] [source] 2023-11-18 23:16:43
>>tfehri+(OP)
openai literally innovated all of this under their current conditions, so those conditions are evidently sufficient
4. silenc+p2[view] [source] 2023-11-18 23:24:20
>>tfehri+(OP)
Okay, this is honestly annoying. What is this thing with the word "safety" becoming some weasel word when it comes to AI discussions?

What exactly do YOU mean by safety? That they go at the pace YOU decide? Does it mean they make a "safe space" for YOU?

I've seen nothing to suggest they aren't "being safe". If anything, ChatGPT has become known for censoring users "for their own good" [0].

The argument I've seen is: one "side" thinks things are moving too fast, therefore the side that wants to move slower is the "safe" side.

And that's it.

[0]: https://www.youtube.com/watch?v=jvWmCndyp9A&t

replies(3): >>stale2+h3 >>threes+e5 >>kordle+v9
◧◩
5. stale2+h3[view] [source] [discussion] 2023-11-18 23:29:03
>>silenc+p2
> What exactly do YOU mean by safety? That they go at the pace YOU decide?

Usually what it means is that they think that AI has a significant chance of literally ending the world with like diamond nanobots or something.

All opinions and recommendations follow from this doomsday cult belief.

replies(1): >>smegge+ed
◧◩
6. threes+e5[view] [source] [discussion] 2023-11-18 23:39:23
>>silenc+p2
There is a common definition of safety that applies to most of the world.

Which is that any AI is not racist, misogynistic, aggressive, etc. It does not recommend that people act in an illegal, violent, or self-harming way, or commit those acts itself. It does not support or promote Nazism, fascism, etc. This is similar to how companies treat ad/brand safety.

And you may think of it as a weasel word. But I assure you that companies and governments, e.g. the EU, very much don't.

replies(3): >>wruza+lh >>Amezar+5k >>throwa+912
◧◩
7. coffee+P6[view] [source] [discussion] 2023-11-18 23:48:43
>>apalme+v
They've pretty much lost everyone's confidence by firing the CEO and then begging him to come back the next day. Did they not foresee any backlash? These people are gonna predict the future and save us from an evil AGI? Lol
◧◩
8. kordle+v9[view] [source] [discussion] 2023-11-18 23:59:56
>>silenc+p2
Fuck safety. We should sprint toward proving AI can kill us before battery life improves, so we can figure out how we’re going to mitigate it when the asshats get hold of it. Kidding, not kidding.
◧◩◪
9. smegge+ed[view] [source] [discussion] 2023-11-19 00:18:36
>>stale2+h3
It seems silly to me, but then I always preferred Asimov's positronic robot stories to yet another retelling of the Golem of Prague.

The thing is, the cultural ur-narrative embedded in the collective subconscious doesn't seem to understand its own stories anymore. God and Adam, the Golem of Prague, Frankenstein's monster: none of them are really about AI. They're about our children making their own decisions that we disagree with, and about seeing that as the end of the world.

AI isn't a child, though. AI is a tool. It doesn't have its own motives, it doesn't have emotions, it doesn't have any core drives we don't give to it. Those things are products of us being biological, evolved beings that need them to survive and pass on our genes and memes to the next generation. AI doesn't have to find shelter, food, water, air, and so on; we provide all the equivalents, where there are any, as part of building it and turning it on. It doesn't have a drive to mate and pass on its genes. It doesn't have to: reproducing is a matter of copying some files, with no evolution involved; checksums, hashes, and error-correcting codes see to that. AI is simply the next step in the tech tree, just another tool. A powerful, useful one, but a tool, not a rampaging monster.

◧◩◪
10. wruza+lh[view] [source] [discussion] 2023-11-19 00:44:42
>>threes+e5
This babysitting of the world gets annoying, tbh. As if everyone will lose their mind and start acting illegally just because a chatbot said so. There's something fundamentally wrong with humanity (which isn't surprising given the history of our species) if that counts as unsafe. AI is just a source of information; it doesn't cancel out upbringing and education in human values and in methods of dealing with information.
◧◩◪
11. Amezar+5k[view] [source] [discussion] 2023-11-19 01:01:38
>>threes+e5
Yes, in other words, AI is only safe when it repeats only the ideology of AI safetyists as gospel and can be used only to reinforce the power of the status quo.
replies(1): >>chasd0+Lq
◧◩◪◨
12. chasd0+Lq[view] [source] [discussion] 2023-11-19 01:53:40
>>Amezar+5k
Yeah that’s what I thought. This undefined ambiguous use of the word “safety” does real damage to the concept and things that are indeed dangerous and need to be made more safe.
◧◩◪
13. throwa+912[view] [source] [discussion] 2023-11-19 15:36:09
>>threes+e5
That's not really a great encapsulation of the AI safety that those who think AGI poses a threat to humanity are referring to.

The bigger concern is something like the Paperclip Maximizer. Alignment is about how to ensure that a superintelligence has the right goals.

[go to top]