zlacker

[return to "OpenAI staff threaten to quit unless board resigns"]
1. breadw+17[view] [source] 2023-11-20 14:06:24
>>skille+(OP)
If they join Sam Altman and Greg Brockman at Microsoft they will not need to start from scratch because Microsoft has full rights [1] to ChatGPT IP. They can just fork ChatGPT.

Also keep in mind that Microsoft hasn't actually given OpenAI $13 billion in cash; much of that figure is in the form of Azure credits.

So this could end up being the cheapest acquisition Microsoft ever makes: they get a $90 billion company for peanuts.

[1] https://stratechery.com/2023/openais-misalignment-and-micros...

◧◩
2. himara+Tu[view] [source] 2023-11-20 16:00:46
>>breadw+17
This is wrong. Microsoft has no such rights; per the primary source cited below, its license comes with restrictions, so a fork would require a very careful approach.

https://www.wsj.com/articles/microsoft-and-openai-forge-awkw...

◧◩◪
3. alasda+hY[view] [source] 2023-11-20 17:57:33
>>himara+Tu
They could make ChatGPT++

https://en.wikipedia.org/wiki/Visual_J%2B%2B

◧◩◪◨
4. prepen+I01[view] [source] 2023-11-20 18:06:16
>>alasda+hY
“Microsoft Chat 365”

Although it would be beautiful if they named it Clippy and finally made Clippy into the all-powerful AGI it was destined to be.

◧◩◪◨⬒
5. kylebe+D21[view] [source] 2023-11-20 18:13:21
>>prepen+I01
At least in this forum, can we please stop calling something that is not even close to AGI "AGI"? It's just dumb at this point. We are LIGHT-YEARS away from AGI; even calling an LLM "AI" only makes sense for a lay audience. For developers and anyone in the know, LLMs are called machine learning.

◧◩◪◨⬒⬓
6. hackin+vf1[view] [source] 2023-11-20 18:57:59
>>kylebe+D21
And how do you know LLMs are not "close" to AGI (close meaning, say, a decade of development that builds on the success of LLMs)?

◧◩◪◨⬒⬓⬔
7. DrSiem+Um1[view] [source] 2023-11-20 19:26:28
>>hackin+vf1
Because LLMs just mimic human communication based on massive amounts of human-generated data and have no actual intelligence at all.

It could be a first step, sure, but we need many, many more breakthroughs to actually get to AGI.

◧◩◪◨⬒⬓⬔⧯
8. astran+ld2[view] [source] 2023-11-20 23:17:32
>>DrSiem+Um1
There is room for intelligence in all three steps: wherever the original data came from, the training on it, and the inference on it. So just asserting that the third step has none isn't good enough.

Especially since you have to explain how "just mimicking" works so well.
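
To make those three steps concrete, here is a minimal sketch in Python of the data → training → inference pipeline, using a toy bigram model; the corpus and the generate() helper are purely illustrative, and real LLMs learn a vastly richer neural next-token predictor, but the shape of the pipeline is the same:

    # Toy "mimicry" in three steps. Hypothetical bigram sketch, not how
    # GPT-style models are actually built (those train a neural network),
    # but the data -> training -> inference pipeline has the same shape.
    import random
    from collections import Counter, defaultdict

    # (1) Data: text written by humans.
    corpus = "the cat sat on the mat . the dog sat on the rug .".split()

    # (2) Training: estimate P(next word | current word) by counting pairs.
    counts = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        counts[current][nxt] += 1

    # (3) Inference: generate text by repeatedly sampling a likely next word.
    def generate(start, length=8):
        word, out = start, [start]
        for _ in range(length):
            options = counts.get(word)
            if not options:
                break
            words, weights = zip(*options.items())
            word = random.choices(words, weights=weights)[0]
            out.append(word)
        return " ".join(out)

    print(generate("the"))  # e.g. "the cat sat on the rug . the dog"

Scale up the data and replace the counting with a trained neural network and you get, roughly, the recipe behind an LLM; whether anything in that pipeline deserves the word "intelligence" is exactly what's being argued above.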
