zlacker

[parent] [thread] 2 comments
1. trunne+(OP)[view] [source] 2023-11-17 22:38:13
Wait, no, Microsoft said the action was a temporary mistake. From the article you linked:

  In a statement to CNBC, Microsoft said the ChatGPT temporary blockage was a mistake resulting from a test of systems for large language models.

  “We were testing endpoint control systems for LLMs and inadvertently turned them on for all employees,” a spokesperson said. “We restored service shortly after we identified our error. As we have said previously, we encourage employees and customers to use services like Bing Chat Enterprise and ChatGPT Enterprise that come with greater levels of privacy and security protections.”
replies(1): >>nikcub+c1
2. nikcub+c1[view] [source] 2023-11-17 22:43:21
>>trunne+(OP)
That is Microsoft's PR statement to the press in response to a leaked story. They're a major investor in OpenAI - it's in their interest to downplay the story and respond this way.
replies(1): >>egeozc+f4
3. egeozc+f4[view] [source] [discussion] 2023-11-17 22:57:06
>>nikcub+c1
Downplaying is one thing, but attributing a policy decision to a fabricated technical error would be outright lying to the public. At a company as large as Microsoft, with so many potential sources of leaks, that approach is likely infeasible.