zlacker

[return to "My AI skeptic friends are all nuts"]
1. ofjcih+21 2025-06-02 21:18:27
>>tablet+(OP)
I feel like we get one of these articles addressing valid AI criticisms with poor arguments every week, and at this point I’m ready to write a boilerplate response because I already know what they’re going to say.

Interns don’t cost 20 bucks a month, but training users in the specifics of your org is important.

Knowing what is important or pointless comes with understanding the skill set.

2. mounta+S3 2025-06-02 21:33:43
>>ofjcih+21
I feel the opposite, and pretty much every metric we have shows basically linear improvement of these models over time.

The criticisms I hear are almost always gotchas, and when confronted with the benchmarks, the critics either don’t actually know how the benchmarks are built or don’t want to contribute to them. From what I can tell, they just want to complain or seem like contrarians.

Are LLMs perfect? Absolutely not. Do we have metrics to tell us how good they are? Yes.

I’ve found very few critics who actually understand ML on a deep level. For instance, Gary Marcus didn’t know what a train/test split was. Unfortunately, rage bait like this makes money.

3. Night_+j9 2025-06-02 22:05:31
>>mounta+S3
Models are absolutely not improving linearly. They improve logarithmically with size, and we've already just about hit the limit of how much compute we can throw at them without becoming totally unreasonable from a space/money/power/etc. standpoint.

We can use little tricks here and there to try to make them better, but fundamentally they're about as good as they're ever going to get. And none of their shortcomings are growing pains; they're fundamental to the way an LLM operates.
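For reference, the standard scaling-law result (Kaplan et al. 2020) is a power law in parameter count, not a straight line: test loss goes roughly as

L(N) \approx (N_c / N)^{\alpha_N}, \quad \alpha_N \approx 0.076

so loss falls only polynomially in N with a tiny exponent, and each further fixed drop in loss costs a large multiplicative increase in model size. That's the sense in which the gains look logarithmic rather than linear.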

4. _dain_+Yi 2025-06-02 23:04:28
>>Night_+j9
Remember in 2022 when we "hit a wall"? Everyone said that back then. Turns out we didn't.

And in 2023, and 2024, and January 2025, and ...

All those "walls" collapsed like paper. They were phantoms; people literally thought the gaps between releases were permanent flatlines.

Money obviously isn't an issue here; VCs are pouring in billions upon billions. They're building whole new data centres and whole fucking power plants for these things; electricity and compute aren't limits. Neither is data, since increasingly the models get better through self-play.

>fundamentally they're about as good as they're ever going to get

one trillion percent cope and denial

5. yahooz+qs1 2025-06-03 11:10:23
>>_dain_+Yi
Yet we are still at the “treat it like a junior” level.