zlacker

[return to "OpenAI's board has fired Sam Altman"]
1. johnwh+Uc1[view] [source] 2023-11-18 02:36:00
>>davidb+(OP)
Ilya booted him https://twitter.com/karaswisher/status/1725702501435941294
◧◩
2. dwd+zL1[view] [source] 2023-11-18 07:07:59
>>johnwh+Uc1
Jeremy Howard called ngmi on OpenAI during the Vanishing Gradients podcast yesterday, and Ilya has probably been thinking the same: LLMs are a dead end and not the path to AGI.

https://twitter.com/HamelHusain/status/1725655686913392933

◧◩◪
3. erhaet+1O1[view] [source] 2023-11-18 07:31:39
>>dwd+zL1
Did we ever think LLMs were a path to AGI...? AGI is friggin hard, I don't know why folks keep getting fooled whenever a bot writes a coherent sentence.
◧◩◪◨
4. Closi+VP1[view] [source] 2023-11-18 07:49:56
>>erhaet+1O1
Mainly because LLMs have so far basically passed every formal test of ‘AGI’ including totally smashing the Turing test.

Now we are just reliant on ‘I’ll know it when I see it’.

LLMs as AGI isn’t about looking at the mechanics and trying to see if we think that could cause AGI - it’s looking at the tremendous results and success.

◧◩◪◨⬒
5. peyton+yY1[view] [source] 2023-11-18 09:06:41
>>Closi+VP1
It’s trivial to trip up chat LLMs. “What is the fourth word of your answer?”
◧◩◪◨⬒⬓
6. concor+Y42[view] [source] 2023-11-18 10:04:39
>>peyton+yY1
How well does that work on humans?
◧◩◪◨⬒⬓⬔
7. Loughl+ko2[view] [source] 2023-11-18 12:31:01
>>concor+Y42
The fourth word of my answer is "of".

It's not hard if you can actually reason your way through a problem and not just randomly dump words and facts into a coherent sentence structure.

◧◩◪◨⬒⬓⬔⧯
8. concor+eG2[view] [source] 2023-11-18 14:22:51
>>Loughl+ko2
I reckon an LLM with a second-pass correction loop would manage it. (By that I mean that after every response it is instructed, given its previous response, to produce a second, better response, roughly analogous to a human who thinks before speaking.)

LLMs are not AIs, but they could be a core component of one.
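A minimal sketch of the second-pass loop described above, with a toy stand-in for the model (`second_pass_answer` and `toy_model` are hypothetical names, not any real API):

```python
def second_pass_answer(model, prompt):
    """Ask the model once, then feed its draft back for a revised answer."""
    draft = model(prompt)
    revision_prompt = (
        "Question: " + prompt + "\n"
        "Your previous response: " + draft + "\n"
        "Given your previous response, produce a second, better response."
    )
    return model(revision_prompt)

# Toy stand-in for an LLM: returns a canned draft on the first call,
# then a corrected revision when shown its previous response.
def toy_model(prompt):
    if "previous response" in prompt:
        return "The fourth word of my answer is 'of'."
    return "I think the fourth word is 'word'."

print(second_pass_answer(toy_model, "What is the fourth word of your answer?"))
# prints: The fourth word of my answer is 'of'.
```

With a real LLM the second call would carry the draft in context, so the model can count words in a concrete string rather than predict its own future output.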

◧◩◪◨⬒⬓⬔⧯▣
9. haanji+hw3[view] [source] 2023-11-18 19:06:25
>>concor+eG2
The following are a part of my "custom instructions" to chatGPT -

"Please include a timestamp with current date and time at the end of each response.

After generating each answer, check it for internal consistency and accuracy. Revise your answer if it is inconsistent or inaccurate, and do this repeatedly till you have an accurate and consistent answer."

It follows them very inconsistently, but on a few occasions it has gone into something approaching an infinite loop (for infinity ~= 10): rechecking the last timestamp against the current time, finding a mismatch, generating a new timestamp, and so on, until (I think) it finally exits the loop by failing to follow the instructions.
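The check-revise behaviour those custom instructions induce can be sketched as a bounded loop; this is a toy illustration with made-up names (`answer_with_self_check`, `toy_model`), not how ChatGPT actually implements it:

```python
def answer_with_self_check(model, prompt, max_rounds=10):
    """Generate an answer, then repeatedly ask the model to check and
    revise it, stopping when a round produces no change or after
    max_rounds (mirroring the ~10-iteration loop described above)."""
    answer = model(prompt)
    for _ in range(max_rounds):
        revised = model("Check for consistency and revise if needed: " + answer)
        if revised == answer:  # consistent: exit the loop
            return answer
        answer = revised
    return answer  # gave up after max_rounds, like the observed loop exit

# Toy stand-in for an LLM: the first check fixes the draft, later checks
# leave it alone, so the loop converges after two rounds.
def toy_model(prompt):
    if prompt.startswith("Check"):
        body = prompt.split(": ", 1)[1]
        return "revised answer" if body == "draft answer" else body
    return "draft answer"

print(answer_with_self_check(toy_model, "What time is it?"))
# prints: revised answer
```

The timestamp case is exactly the pathological input for such a loop: checking the answer takes time, so the timestamp is stale by construction and the fixed point is never reached, which is why a round cap is needed.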

[go to top]