zlacker

[return to "OpenAI's board has fired Sam Altman"]
1. nikcub+Gj[view] [source] 2023-11-17 21:40:38
>>davidb+(OP)
Put the pieces together:

Nov 6 - OpenAI devday, with new features of build-your-own ChatGPT and more

Nov 9 - Microsoft cuts employees off from ChatGPT due to "security concerns" [0]

Nov 9 - OpenAI experiences severe downtime, which the company attributes to a "DDoS" (not the correct term for excess legitimate usage) [3]

Nov 15 - OpenAI announces no new ChatGPT Plus upgrades [1] but still allows regular signups (and still does)

Nov 17 - OpenAI fires Altman

Put the threads together - one theory: the new release had a serious security issue, leaked a bunch of data, and it wasn't disclosed, but Microsoft knew about it.

This wouldn't be the first time - in March there was an incident where users were seeing the private chats of other users [2]

Further extending the theory: prioritizing getting to market overrode security/privacy testing, and this most recent release caused something much, much larger.

Further: CTO Mira and others internally were concerned about the launch but were overruled by the CEO. She kicked the issue up to the board, hence their trust in her taking over as interim CEO.

edit: added note on DDoS (thanks kristjansson below) - and despite the downtime it was only upgrades to ChatGPT Plus with the new features that were disabled. Note on why CTO would take over.

[0] https://www.cnbc.com/2023/11/09/microsoft-restricts-employee...

[1] https://twitter.com/sama/status/1724626002595471740

[2] https://www.theverge.com/2023/3/21/23649806/chatgpt-chat-his...

[3] https://techcrunch.com/2023/11/09/openai-blames-ddos-attack-...

2. elfly+pL[view] [source] 2023-11-17 23:58:06
>>nikcub+Gj
If there is an incident where people can see other people's chats, there are two possibilities:

-It's a server issue, meaning someone fucked up their JavaScript and cached a session key or something. It's a minor thing: it could get the specific dev fired in the worst case, and it's embarrassing, but it's solvable.

-It's inherent to how the AI works, and thus it is impossible to share a ChatGPT server with someone else without sooner or later leaking knowledge. That would mean the company cannot scale at all, because they'd need to give each client their own separate server instance.
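The first possibility above can be sketched concretely. A minimal, hypothetical illustration (all names invented for the sketch; this is not OpenAI's actual code): a server-side cache whose key omits the requesting user, so whoever requests a conversation first populates an entry that is then served to everyone else.

```python
# Hypothetical sketch of the "server issue" failure mode: a cache keyed
# only by conversation ID, with the user's identity accidentally dropped.

cache = {}

def get_chat_history(user_id, conversation_id):
    # BUG: the cache key ignores user_id, so whichever user requests a
    # conversation_id first populates the entry served to everyone after.
    key = conversation_id
    if key not in cache:
        cache[key] = f"history of {user_id}/{conversation_id}"
    return cache[key]

def get_chat_history_fixed(user_id, conversation_id):
    # Fix: scope the cache key to the requesting user as well.
    key = (user_id, conversation_id)
    if key not in cache:
        cache[key] = f"history of {user_id}/{conversation_id}"
    return cache[key]

# Alice's request warms the cache; Bob's request then leaks Alice's data.
assert get_chat_history("alice", "c1") == "history of alice/c1"
assert get_chat_history("bob", "c1") == "history of alice/c1"   # leak!
assert get_chat_history_fixed("bob", "c1") == "history of bob/c1"
```

The point is that this class of bug lives entirely in ordinary serving code, which is why it's embarrassing but fixable; it implies nothing broken about the model itself.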

If this was something Sam knew and kept from the board, that'd be fireable. And it'd be catastrophic, because it'd mean no usable product until a solution is found.

I somewhat doubt it's something like this, but if security issues and leaked private chats keep appearing, it's a possibility.

3. vvndom+v02[view] [source] 2023-11-18 09:23:09
>>elfly+pL
It's inherent to how it works. It is known, and has always been known, that nothing you type into these chats is private: there is nothing whatsoever fundamentally stopping the AI from handing your chats to somebody else or dumping them out to the internet. Nobody has even theoretically described a mechanism that could provide a kind of memory protection for these models. And of course we have already seen real examples of this. It's only a matter of time before the completely and totally insurmountable problems of scaling AI become clear. Sam is, and has always been, a conman in my view.