1. rrrrrr+(OP) 2023-11-20 08:08:49
Presumably Sam and Greg now get to pick up where they left off and keep productizing GPT-4 since Microsoft has the IP and is hosting their own GPT-4 models on Azure, right?

The more interesting thing is whether or not they'll be able to build and release something equivalent to GPT-5, using Microsoft's immense resources, before OpenAI is able to.

replies(1): >>famous+x1
2. famous+x1 2023-11-20 08:15:13
>>rrrrrr+(OP)
GPT-5 is almost certainly already done. But considering they sat on GPT-4 for 8 months with Altman at the helm, who knows if it'll see the light of day.
replies(2): >>rrrrrr+63 >>mighmi+R4
3. rrrrrr+63 2023-11-20 08:21:30
>>famous+x1
Do you think Microsoft gets access to it?
replies(1): >>famous+y4
4. famous+y4 2023-11-20 08:29:02
>>rrrrrr+63
They will unless the board declares it AGI. I'm not joking lol. That was part of the agreement.
replies(1): >>user_n+q7
5. mighmi+R4 2023-11-20 08:30:23
>>famous+x1
5 months ago they said they hadn't started training GPT-5: https://techcrunch.com/2023/06/07/openai-gpt5-sam-altman/ and had no intention of doing so within the next 6 months: https://the-decoder.com/gpt-5-is-nowhere-close-says-openai-c...

They just started development in the last week or so: https://decrypt.co/206044/gpt-5-openai-development-roadmap-g...

replies(1): >>Chamix+68
6. user_n+q7 2023-11-20 08:41:05
>>famous+y4
Have fun defining and proving AGI lol
replies(1): >>famous+28
7. famous+28 2023-11-20 08:43:39
>>user_n+q7
They already defined it: a highly autonomous system that outperforms humans at most economically valuable work.

There's some vagueness there, sure, but if they can demonstrate something to that effect, fair play to them, I guess.

8. Chamix+68 2023-11-20 08:43:50
>>mighmi+R4
The little secret is that the training run for GPT-5 (meaning the creation of the raw autocompleting multimodal token weights) ran in parallel with GPT-4's.