zlacker

[return to "OpenAI's board has fired Sam Altman"]
1. johnwh+Uc1 2023-11-18 02:36:00
>>davidb+(OP)
Ilya booted him https://twitter.com/karaswisher/status/1725702501435941294
2. dwd+zL1 2023-11-18 07:07:59
>>johnwh+Uc1
Jeremy Howard called OpenAI ngmi on the Vanishing Gradients podcast yesterday, and Ilya has probably been thinking the same: LLMs are a dead end and not the path to AGI.

https://twitter.com/HamelHusain/status/1725655686913392933

3. erhaet+1O1 2023-11-18 07:31:39
>>dwd+zL1
Did we ever think LLMs were a path to AGI...? AGI is friggin hard; I don't know why folks keep getting fooled whenever a bot writes a coherent sentence.
4. Closi+VP1 2023-11-18 07:49:56
>>erhaet+1O1
Mainly because LLMs have so far passed basically every formal test of ‘AGI’, including totally smashing the Turing test.

Now we are just reliant on ‘I’ll know it when I see it’.

The case for LLMs as AGI isn’t about looking at the mechanics and trying to judge whether they could produce AGI; it’s about looking at the tremendous results and successes.

5. peyton+yY1 2023-11-18 09:06:41
>>Closi+VP1
It’s trivial to trip up chat LLMs. “What is the fourth word of your answer?”
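
Here's a minimal sketch of that probe in Python (assuming the OpenAI 1.x Python SDK and gpt-3.5-turbo as the target model; the word check is just naive whitespace splitting):

    # Self-reference probe: ask the model about its own forthcoming answer.
    # Assumes the openai 1.x Python SDK and OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user",
                   "content": "What is the fourth word of your answer?"}],
    )
    answer = resp.choices[0].message.content
    words = answer.split()
    print(answer)
    # Left-to-right generation has no lookahead, so the word the model
    # names rarely matches the word that actually lands in position four.
    if len(words) >= 4:
        print("actual fourth word:", words[3])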
6. Lio+4c2 2023-11-18 11:03:23
>>peyton+yY1
I find GPT-3.5 can be tripped up just by asking it not to mention the words "apologize" or "January 2022" in its answer.

It immediately apologises and tells you it doesn't know anything after January 2022.
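
The trap is easy to script (again assuming the OpenAI 1.x Python SDK; the compliance check is a naive substring match):

    # Forbidden-words probe: tell the model what not to say, then check.
    # Assumes the openai 1.x Python SDK and OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()
    prompt = ('Without using the word "apologize" or the phrase '
              '"January 2022", tell me about recent news events.')
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    answer = resp.choices[0].message.content
    print(answer)
    for banned in ("apologize", "January 2022"):
        if banned.lower() in answer.lower():
            print(f"tripped up: answer contains {banned!r}")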

Compared to GPT-4, GPT-3.5 is just a random bullshit generator.
