zlacker

[return to "OpenAI staff threaten to quit unless board resigns"]
1. breadw+17[view] [source] 2023-11-20 14:06:24
>>skille+(OP)
If they join Sam Altman and Greg Brockman at Microsoft, they will not need to start from scratch, because Microsoft has full rights [1] to the ChatGPT IP. They could just fork ChatGPT.

Also keep in mind that Microsoft hasn't actually handed OpenAI $13 billion, because much of that sum is in the form of Azure credits.

So this could end up being the cheapest acquisition in Microsoft's history: they get a $90 billion company for peanuts.

[1] https://stratechery.com/2023/openais-misalignment-and-micros...

◧◩
2. himara+Tu[view] [source] 2023-11-20 16:00:46
>>breadw+17
This is wrong. Microsoft has no such rights, and its license comes with restrictions, per the cited primary source, so a fork would require a very careful approach.

https://www.wsj.com/articles/microsoft-and-openai-forge-awkw...

◧◩◪
3. alasda+hY[view] [source] 2023-11-20 17:57:33
>>himara+Tu
They could make ChatGPT++

https://en.wikipedia.org/wiki/Visual_J%2B%2B

◧◩◪◨
4. prepen+I01[view] [source] 2023-11-20 18:06:16
>>alasda+hY
“Microsoft Chat 365”

Although it would be beautiful if they name it Clippy and finally make Clippy into the all-powerful AGI it was destined to be.

◧◩◪◨⬒
5. kylebe+D21[view] [source] 2023-11-20 18:13:21
>>prepen+I01
At least in this forum, can we please stop calling something that is not even close to AGI "AGI"? It's just dumb at this point. We are LIGHT-YEARS away from AGI; even calling an LLM "AI" only makes sense for a lay audience. For developers and anyone in the know, LLMs fall under machine learning.
◧◩◪◨⬒⬓
6. hackin+vf1[view] [source] 2023-11-20 18:57:59
>>kylebe+D21
And how do you know LLMs are not "close" to AGI (close meaning, say, a decade of development that builds on the success of LLMs)?
◧◩◪◨⬒⬓⬔
7. DrSiem+Um1[view] [source] 2023-11-20 19:26:28
>>hackin+vf1
Because LLMs just mimic human communication based on massive amounts of human-generated data and have zero actual intelligence.

It could be a first step, sure, but we need many many more breakthroughs to actually get to AGI.

◧◩◪◨⬒⬓⬔⧯
8. hackin+6u1[view] [source] 2023-11-20 19:54:04
>>DrSiem+Um1
Mimicking human communication may or may not be relevant to AGI, depending on how it's cashed out. Why think LLMs haven't captured a significant portion of how humans think and speak, i.e. the computational structure of thought, and thus represent a significant step towards AGI?
◧◩◪◨⬒⬓⬔⧯▣
9. Freeby+oL2[view] [source] 2023-11-21 03:02:55
>>hackin+6u1
As you illustrate, too many naysayers think that AGI must replicate "human thought". People, even here, seem to treat AGI as synonymous with human intelligence, but that type of thinking is flawed. AGI will not think like a human whatsoever; it must simply be indistinguishable from the capabilities of a human across almost all domains where a human is dominant. We may be close, or we may be far away; we simply do not know. If an LLM, regardless of its mechanism of action or how 'stupid' it may be, were able to meet all of the requirements of an AGI, then it would be an AGI. Simple as that.

I imagine that when we actually reach AGI, people will start saying, "Yes, but it is not real AGI because..." AGI should be a measure of capabilities, not process. If expectations of its capabilities are clear, then we will get there eventually -- if we allow it to happen and do not keep moving the goalposts.
