zlacker

[return to "OpenAI staff threaten to quit unless board resigns"]
1. breadw+17[view] [source] 2023-11-20 14:06:24
>>skille+(OP)
If they join Sam Altman and Greg Brockman at Microsoft they will not need to start from scratch because Microsoft has full rights [1] to ChatGPT IP. They can just fork ChatGPT.

Also keep in mind that Microsoft hasn't actually given OpenAI $13 billion, because much of that is in the form of Azure credits.

So this could end up being the cheapest acquisition ever for Microsoft: they get a $90 billion company for peanuts.

[1] https://stratechery.com/2023/openais-misalignment-and-micros...

◧◩
2. himara+Tu[view] [source] 2023-11-20 16:00:46
>>breadw+17
This is wrong. Microsoft has no such rights and its license comes with restrictions, per the cited primary source, meaning a fork would require a very careful approach.

https://www.wsj.com/articles/microsoft-and-openai-forge-awkw...

◧◩◪
3. alasda+hY[view] [source] 2023-11-20 17:57:33
>>himara+Tu
They could make ChatGPT++

https://en.wikipedia.org/wiki/Visual_J%2B%2B

◧◩◪◨
4. prepen+I01[view] [source] 2023-11-20 18:06:16
>>alasda+hY
“Microsoft Chat 365”

Although it would be beautiful if they name it Clippy and finally make Clippy into the all-powerful AGI it was destined to be.

◧◩◪◨⬒
5. kylebe+D21[view] [source] 2023-11-20 18:13:21
>>prepen+I01
At least in this forum, can we please stop calling something that is not even close to AGI, AGI? It's just dumb at this point. We are light-years away from AGI; even calling an LLM "AI" only makes sense for a lay audience. Developers and anyone in the know call LLMs machine learning.
◧◩◪◨⬒⬓
6. hackin+vf1[view] [source] 2023-11-20 18:57:59
>>kylebe+D21
And how do you know LLMs are not "close" to AGI (close meaning, say, a decade of development that builds on the success of LLMs)?
◧◩◪◨⬒⬓⬔
7. DrSiem+Um1[view] [source] 2023-11-20 19:26:28
>>hackin+vf1
Because LLMs just mimic human communication based on massive amounts of human-generated data and have no actual intelligence at all.

It could be a first step, sure, but we need many many more breakthroughs to actually get to AGI.

◧◩◪◨⬒⬓⬔⧯
8. tempes+Gt1[view] [source] 2023-11-20 19:52:53
>>DrSiem+Um1
One might argue that humans do a similar thing. And that the structure that allows the LLM to realistically "mimic" human communication is its intelligence.
◧◩◪◨⬒⬓⬔⧯▣
9. westur+RL1[view] [source] 2023-11-20 21:00:55
>>tempes+Gt1
Q: Is this a valid argument? "The structure that allows the LLM to realistically 'mimic' human communication is its intelligence." https://g.co/bard/share/a8c674cfa5f4 :

> [...]

> Premise 1: LLMs can realistically "mimic" human communication.

> Premise 2: LLMs are trained on massive amounts of text data.

> Conclusion: The structure that allows LLMs to realistically "mimic" human communication is its intelligence.

"If P then Q" is the Material conditional: https://en.wikipedia.org/wiki/Material_conditional
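To make the point concrete (a minimal sketch, not from the thread): the material conditional "P → Q" is false only when P is true and Q is false, which a quick truth table shows:

```python
# Truth table for the material conditional P -> Q,
# equivalent to (not P) or Q.
for p in (True, False):
    for q in (True, False):
        implies = (not p) or q
        print(f"P={p!s:5} Q={q!s:5}  P->Q={implies}")
```

Note that the Bard argument above doesn't even have this form: neither premise mentions intelligence, so the conclusion doesn't follow from them by any rule of inference.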

Does it do logical reasoning or inference before presenting text to the user?

That's a lot of waste heat.

(Edit) With next-word prediction, that just is it:

"LLMs cannot find reasoning errors, but can correct them" >>38353285

"Misalignment and Deception by an autonomous stock trading LLM agent" https://news.ycombinator.com/item?id=38353880#38354486

[go to top]