zlacker

[parent] [thread] 6 comments
1. tovej+(OP)[view] [source] 2023-11-20 02:54:12
Easy: you recognize the generative model's ability to copy facets of data from its corpus as useful but boring in the AI sense, and wonder why people are even talking about sentience and AGI.
replies(3): >>lyu072+a4 >>hotnfr+Id4 >>sensan+Rp5
2. lyu072+a4[view] [source] 2023-11-20 03:28:05
>>tovej+(OP)
I suppose if this widely believed perception is true, there can't possibly be any danger in scaling the model up a few thousand times. Except, of course, all those logical reasoning datasets the model is acing, but I guess it just memorized them all. It's only autocomplete on steroids, so why are all these people smarter than me worried about it? I don't understand.
replies(1): >>tovej+Aa1
3. tovej+Aa1[view] [source] [discussion] 2023-11-20 10:54:24
>>lyu072+a4
There was just a post on this website where GPT-4 failed to perform basic reasoning tasks better than the lowest-paid Mechanical Turk "microworkers".
replies(1): >>andyba+134
4. andyba+134[view] [source] [discussion] 2023-11-21 00:48:35
>>tovej+Aa1
And the comments section pointed out multiple flaws in that article.
replies(1): >>tovej+kY4
5. hotnfr+Id4[view] [source] 2023-11-21 01:58:27
>>tovej+(OP)
Yeah, stuff like this went from “oh shit!” to “meh” in a hurry once I actually started using these things rather than just reading reports about them.
6. tovej+kY4[view] [source] [discussion] 2023-11-21 08:07:24
>>andyba+134
It didn't, really. There were no fundamental flaws that I could see.

Perhaps the only salient critique concerned the textual representation of the problem, but I think it was presented in a way that gave the model all the help it could get.

You forget that the result of the paper actually improved the model's performance, and it still failed to get anywhere near decent results.

7. sensan+Rp5[view] [source] 2023-11-21 12:01:23
>>tovej+(OP)
I don't believe that we're gonna have Skynet on our hands (at least not within my lifetime).

What I do believe is that as the hype grows for this AI stuff, more and more people are going to be displaced and put out of work for the sake of making some rich assholes even richer. I wasn't a huge fan of "Open"AI as a company, but I sure as fuck would take them over fucking Microsoft, a literal comic-book-tier evil megacorporation, being at the helm of this mass displacement.

Yet many of these legitimate concerns are just swatted away by AI sycophants with no actual answers to the issues. You get branded a Luddite (and mind you, the Luddites weren't even wrong) and a sensationalist. Shit, you've already got psychopathic C-suites talking about replacing entire teams with current-day AIs; what the fuck are people supposed to do in the future when they get better? What, we're suddenly going to go full-force into a mystical UBI utopia? Overnight?
