zlacker

[return to "OpenAI is now everything it promised not to be: closed-source and for-profit"]
1. mellos+pe[view] [source] 2023-03-01 10:46:59
>>isaacf+(OP)
This seems like an important article, if for no other reason than that it brings the betrayal of OpenAI's foundational claim, still brazenly present in its name, out of the obscurity of years of HN comments and into the public light and the mainstream.

They've achieved marvellous things, OpenAI, but the pivot and the long-standing refusal to deal with it honestly leave an unpleasant taste, and don't bode well for the future, especially considering the enormous ethical implications of their advantage in the field they are leading.

◧◩
2. adamsm+r61[view] [source] 2023-03-01 16:30:49
>>mellos+pe
>This seems like an important article, if for no other reason than that it brings the betrayal of OpenAI's foundational claim, still brazenly present in its name, out of the obscurity of years of HN comments and into the public light and the mainstream.

The thing is, it's probably a good thing. The "Open" part of OpenAI was always an almost suicidally bad mistake. The point of starting the company was to try to reduce the p(doom) of creating an AGI, and it's almost certain that p(doom) increases the more people have access to powerful and potentially dangerous tech. I think OpenAI is one of the most dangerous organizations on the planet, and being more closed reduces that danger slightly.

◧◩◪
3. fragsw+291[view] [source] 2023-03-01 16:39:34
>>adamsm+r61
I'm curious: what do you think makes them dangerous?
◧◩◪◨
4. therea+ve1[view] [source] 2023-03-01 16:59:20
>>fragsw+291
For long form, I'd suggest the Cold Takes blog, whose author is a very systematic thinker and has been focusing on AGI risk recently. https://www.cold-takes.com
◧◩◪◨⬒
5. fragsw+Si1[view] [source] 2023-03-01 17:15:20
>>therea+ve1
I see a lot of "we don't know how it works therefore it could destroy all of us" but that sounds really handwavy to me. I want to see some concrete examples of how it's dangerous.
◧◩◪◨⬒⬓
6. adamsm+l42[view] [source] 2023-03-01 20:36:54
>>fragsw+Si1
Given that the link contains dozens of articles with read times over 10 minutes, there is no way you engaged with the problem enough to dismiss it so casually with your own hand-waving. Setting that aside, however, we can just look at what Bing and ChatGPT have been up to since release.

Basically immediately after release, both models were "jailbroken" in ways that allowed them to do undesirable things that OpenAI never intended, whether that's giving recipes for how to cook meth or going on unhinged rants and threatening to kill the humans they are chatting with. In AI safety circles you would call these models "unaligned": they are not aligned to human values and do things we don't want them to.

Here is the problem: as impressive as these models may be, I don't think anyone thinks they are really at human levels of intelligence or capability, maybe barely at mouse-level intelligence or something like that. Even at that low level of intelligence, these models are unpredictably uncontrollable. So we haven't even figured out how to make these "simple" models behave in ways we care about.

Now let's project forward to GPT-10, which may be at human level or higher, and think about the things it may be able to do. We already know we can't control far simpler models, so it goes without saying that this model will likely be even more uncontrollable, and since it is much more powerful, it is much more dangerous. Another problem is that we don't know how long before we get to a GPT-N that is actually dangerous, so we don't know how long we have to make it safe. Most serious people in the field think making human-level AI is a very hard problem, but that making a human-level AI that is safe is another step up in difficulty.

[go to top]