zlacker

[parent] [thread] 3 comments
1. lyu072+(OP)[view] [source] 2023-11-20 03:28:05
I suppose if this widely believed perception is true, there can't possibly be any danger in scaling the model up a few thousand times. Except, of course, all those logical reasoning datasets the model is acing — but I guess it just memorized them all. It's only autocomplete on steroids; why are all these people smarter than me worried about it? I don't understand.
replies(1): >>tovej+q61
2. tovej+q61[view] [source] 2023-11-20 10:54:24
>>lyu072+(OP)
There was just a post on this website where GPT-4 failed to perform basic reasoning tasks better than minimum-paid Mechanical Turk "microworkers".
replies(1): >>andyba+RY3
3. andyba+RY3[view] [source] [discussion] 2023-11-21 00:48:35
>>tovej+q61
And the comments section pointed out multiple flaws in that article.
replies(1): >>tovej+aU4
4. tovej+aU4[view] [source] [discussion] 2023-11-21 08:07:24
>>andyba+RY3
It didn't, really. There were no fundamental flaws that I could see.

Perhaps the only salient critique was the textual representation of the problem, but I think it was presented in a way where the model was given all the help it could get.

You forget that the paper's actual result was that even after improving the model's performance, it still failed to get anywhere near decent results.
