zlacker

[parent] [thread] 9 comments
1. the_mi+(OP)[view] [source] 2024-05-15 14:27:56
And entirely predictable from the first one: https://openai.com/index/introducing-superalignment/
replies(2): >>uLogMi+x >>btown+13
2. uLogMi+x[view] [source] 2024-05-15 14:30:12
>>the_mi+(OP)
Imagine trying to keep something so far above us in intelligence, caged. Scary stuff...
replies(1): >>binary+Pf
3. btown+13[view] [source] 2024-05-15 14:42:51
>>the_mi+(OP)
Makes me wonder if that 20% compute commitment to superalignment research was walked back (or redesigned so as to be distant from the original mission). Or, perhaps the two deemed that even more commitment was necessary, and were dissatisfied with Altman's response.

Either way, if it's enough to cause them both to think it's better to research outside of the opportunities and access to data that OpenAI provides, I don't see a scenario where this doesn't indicate a significant shift in OpenAI's commitment to superalignment research and safety. One hopes that, at the very least, Microsoft's interest in brand integrity incentivizes some modicum of continued commitment to safety research.

replies(1): >>dontup+B4
4. dontup+B4[view] [source] [discussion] 2024-05-15 14:50:06
>>btown+13
Ironically, Microsoft is the one that's notoriously terrible at checking its "AI" products before releasing them.

Besides the infamous Tay, there was that apparently un-aligned Wizard-2 [or something like that] model from them, which got released by mistake for about 12 hours.

replies(3): >>buildb+qg >>fzzzy+IJ >>jerjer+SH1
5. binary+Pf[view] [source] [discussion] 2024-05-15 15:41:16
>>uLogMi+x
I’m genuinely curious - do you actually believe that GPT is a super intelligence? Because I have the opposite experience. It consistently fails to correctly follow even the most basic instructions. For a little while I thought maybe I’m doing it wrong and need better prompts, but then I realized that its zero-shot and few-shot capabilities are really hit and miss. Furthermore, a superior intelligence shouldn’t need us to conform to its persnickety requirements, and it should be able to adapt far better than it actually does.
replies(1): >>uLogMi+Rx
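
(Aside: a minimal sketch of the zero-shot vs. few-shot distinction mentioned above, using the OpenAI Python client. The model name, prompts, and labels are illustrative placeholders, not anything from this thread.)

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Zero-shot: state the task with no examples.
    zero_shot = "Label the sentiment of: 'The update broke everything.'"

    # Few-shot: prepend a couple of worked examples to the same query.
    few_shot = (
        "Label the sentiment.\n"
        "'Great battery life.' -> positive\n"
        "'Screen died in a week.' -> negative\n"
        "'The update broke everything.' ->"
    )

    for prompt in (zero_shot, few_shot):
        resp = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        print(resp.choices[0].message.content)
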
6. buildb+qg[view] [source] [discussion] 2024-05-15 15:43:58
>>dontup+B4
As an MS employee working on LLMs, I find that entire saga super weird. We need approval for everything! Releasing anything without approval is highly unusual.

We can’t just drop papers on arXiv. There is no way running your own Twitter, GitHub, etc. as a separate group would be allowed.

I checked fairly recently to see if the model was ever actually re-released; it doesn’t seem to have been, which I find telling.

7. uLogMi+Rx[view] [source] [discussion] 2024-05-15 17:00:21
>>binary+Pf
GPT does not need superalignment. That term refers to aligning artificial general intelligence and superintelligence.
8. fzzzy+IJ[view] [source] [discussion] 2024-05-15 17:57:48
>>dontup+B4
I was able to download a copy of that before they took it down. Silly.
replies(1): >>dontup+BU2
9. jerjer+SH1[view] [source] [discussion] 2024-05-16 00:30:36
>>dontup+B4
Sydney was their best "let's just release it without guardrails" bot.

Tay was trivially racist, but boy was Sydney a wacko.

10. dontup+BU2[view] [source] [discussion] 2024-05-16 14:03:44
>>fzzzy+IJ
Yeah, it was mirrored pretty quickly. I expect enough people are now running cronjobs that watch whitelists of HF pages and auto-clone anything that gets pushed out.
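
(Aside: a hypothetical sketch of the kind of cronjob described above. The repo id and watchlist are made-up examples; snapshot_download is huggingface_hub's real mirroring call.)

    from huggingface_hub import snapshot_download

    # Hypothetical whitelist; repo ids are illustrative examples only.
    WATCHLIST = ["microsoft/WizardLM-2-7B"]

    for repo_id in WATCHLIST:
        try:
            # Mirrors the repo into the local HF cache; re-running only
            # downloads files that changed since the last snapshot.
            path = snapshot_download(repo_id=repo_id)
            print(f"mirrored {repo_id} -> {path}")
        except Exception as exc:  # repo deleted/gated, or network error
            print(f"skipped {repo_id}: {exc}")

Run from cron every few minutes, something like this would capture anything that was public even briefly, which is why a pulled release rarely stays gone.
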