
[return to "OpenAI's board has fired Sam Altman"]
1. nikcub+Gj 2023-11-17 21:40:38
>>davidb+(OP)
Put the pieces together:

Nov 6 - OpenAI DevDay, with new features including build-your-own ChatGPT and more

Nov 9 - Microsoft cuts employees off from ChatGPT due to "security concerns" [0]

Nov 9 - OpenAI experiences severe downtime that the company attributes to a "DDoS" (not the correct term if it was merely excess usage) [3]

Nov 15 - OpenAI announces no new ChatGPT Plus upgrades [1] but still allows regular signups (and still does)

Nov 17 - OpenAI fires Altman

Putting these threads together, one theory: the new release had a serious security issue that leaked a bunch of data; it wasn't disclosed, but Microsoft knew about it.

This wouldn't be the first time - in March there was an incident where users were seeing other users' private chats [2].

Extending the theory further: prioritizing getting to market overrode security/privacy testing, and this most recent release caused something much, much larger.

Further: CTO Mira and others were internally concerned about the launch but were overruled by the CEO. She kicked the issue up to the board, hence their trust in her taking over as interim CEO.

edit: added a note on the DDoS (thanks kristjansson below) - despite the downtime, it was only upgrades to ChatGPT Plus with the new features that were disabled. Also added a note on why the CTO would take over.

[0] https://www.cnbc.com/2023/11/09/microsoft-restricts-employee...

[1] https://twitter.com/sama/status/1724626002595471740

[2] https://www.theverge.com/2023/3/21/23649806/chatgpt-chat-his...

[3] https://techcrunch.com/2023/11/09/openai-blames-ddos-attack-...

2. trunne+Xu 2023-11-17 22:38:13
>>nikcub+Gj
Wait, no, Microsoft said the action was a temporary mistake. From the article you linked:

  In a statement to CNBC, Microsoft said the ChatGPT temporary blockage was a mistake resulting from a test of systems for large language models.

  “We were testing endpoint control systems for LLMs and inadvertently turned them on for all employees,” a spokesperson said. “We restored service shortly after we identified our error. As we have said previously, we encourage employees and customers to use services like Bing Chat Enterprise and ChatGPT Enterprise that come with greater levels of privacy and security protections.”