zlacker

[return to "OpenAI's board has fired Sam Altman"]
1. johnwh+Uc1[view] [source] 2023-11-18 02:36:00
>>davidb+(OP)
Ilya booted him https://twitter.com/karaswisher/status/1725702501435941294
◧◩
2. dwd+zL1[view] [source] 2023-11-18 07:07:59
>>johnwh+Uc1
Jeremy Howard called ngmi on OpenAI during the Vanishing Gradients podcast yesterday, and Ilya has probably been thinking the same: LLMs are a dead end and not the path to AGI.

https://twitter.com/HamelHusain/status/1725655686913392933

◧◩◪
3. erhaet+1O1[view] [source] 2023-11-18 07:31:39
>>dwd+zL1
Did we ever think LLMs were a path to AGI...? AGI is friggin hard; I don't know why folks keep getting fooled whenever a bot writes a coherent sentence.
◧◩◪◨
4. Closi+VP1[view] [source] 2023-11-18 07:49:56
>>erhaet+1O1
Mainly because LLMs have so far basically passed every formal test of ‘AGI’ including totally smashing the Turing test.

Now we are just reliant on ‘I’ll know it when I see it’.

The case for LLMs as AGI isn’t about inspecting the mechanics and deciding whether we think they could produce AGI - it’s about looking at the tremendous results and successes.

◧◩◪◨⬒
5. peyton+yY1[view] [source] 2023-11-18 09:06:41
>>Closi+VP1
It’s trivial to trip up chat LLMs. “What is the fourth word of your answer?”
◧◩◪◨⬒⬓
6. concor+Y42[view] [source] 2023-11-18 10:04:39
>>peyton+yY1
How well does that work on humans?
◧◩◪◨⬒⬓⬔
7. Loughl+ko2[view] [source] 2023-11-18 12:31:01
>>concor+Y42
The fourth word of my answer is "of".

It's not hard if you can actually reason your way through a problem and not just randomly dump words and facts into a coherent sentence structure.

◧◩◪◨⬒⬓⬔⧯
8. concor+eG2[view] [source] 2023-11-18 14:22:51
>>Loughl+ko2
I reckon an LLM with a second-pass correction loop would manage it. (By that I mean that after every response it is instructed to, given its previous response, produce a second, better response - roughly analogous to a human who thinks before they speak.)

LLMs are not AIs, but they could be a core component of one.
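
Something like this sketch, purely to show the shape of the loop (`generate` here is a placeholder for whatever chat-completion call you'd actually use, not a real API):

    # Hypothetical two-pass loop: draft an answer, then ask the model to
    # re-read its own draft and fix anything inconsistent before replying.
    def generate(prompt: str) -> str:
        """Placeholder for an actual chat-completion call."""
        raise NotImplementedError

    def answer_with_second_pass(question: str) -> str:
        draft = generate(question)
        second_pass_prompt = (
            f"Question: {question}\n"
            f"Your draft answer: {draft}\n"
            "Re-read the draft, check it against the question "
            "(including any claims it makes about its own wording), "
            "and reply with a corrected final answer."
        )
        return generate(second_pass_prompt)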

◧◩◪◨⬒⬓⬔⧯▣
9. howrar+MW2[view] [source] 2023-11-18 16:01:38
>>concor+eG2
Every token is already being generated with all previously generated tokens as inputs. There's nothing about the architecture that makes this hard. It just hasn't been trained on this kind of task.
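
Roughly, the decoding loop looks like this (a minimal sketch, assuming `model` is some callable that returns next-token logits for the whole sequence so far - not any particular library):

    # Autoregressive decoding: each new token is chosen conditioned on the
    # prompt plus every token generated so far.
    def decode(model, prompt_tokens, max_new_tokens, eos_id):
        tokens = list(prompt_tokens)
        for _ in range(max_new_tokens):
            logits = model(tokens)  # the model sees the full sequence so far
            next_token = max(range(len(logits)), key=logits.__getitem__)  # greedy pick
            tokens.append(next_token)
            if next_token == eos_id:
                break
        return tokens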
◧◩◪◨⬒⬓⬔⧯▣▦
10. peyton+pz4[view] [source] 2023-11-19 01:14:24
>>howrar+MW2
Really? I don’t know of a positional encoding scheme that’ll handle this.
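
For context, the stock sinusoidal scheme from the original Transformer paper only encodes where a token sits in the sequence so far; it says nothing about words that haven't been generated yet. A numpy sketch of that textbook formula (not anyone's production code):

    import numpy as np

    # Sinusoidal positional encoding (Vaswani et al., 2017): position pos and
    # dimension pair i map to sin/cos at a dimension-dependent frequency.
    def positional_encoding(seq_len: int, d_model: int) -> np.ndarray:
        assert d_model % 2 == 0
        pos = np.arange(seq_len)[:, None]          # (seq_len, 1)
        i = np.arange(0, d_model, 2)[None, :]      # (1, d_model / 2)
        angles = pos / np.power(10000.0, i / d_model)
        pe = np.zeros((seq_len, d_model))
        pe[:, 0::2] = np.sin(angles)               # even dimensions
        pe[:, 1::2] = np.cos(angles)               # odd dimensions
        return pe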