zlacker

[return to "My AI skeptic friends are all nuts"]
1. ofjcih+21[view] [source] 2025-06-02 21:18:27
>>tablet+(OP)
I feel like we get one of these articles that address valid AI criticisms with poor arguments every week, and at this point I’m ready to write a boilerplate response because I already know what they’re going to say.

Interns don’t cost 20 bucks a month, but training people in the specifics of your org is important.

Knowing what is important or pointless comes with understanding the skill set.

◧◩
2. mounta+S3[view] [source] 2025-06-02 21:33:43
>>ofjcih+21
I feel the opposite, and pretty much every metric we have shows basically linear improvement of these models over time.

The criticisms I hear are almost always gotchas, and when confronted with the benchmarks, the critics either don’t actually know how they’re built or don’t want to contribute to them. From what I can tell, they just want to complain or seem like contrarians.

Are LLMs perfect? Absolutely not. Do we have metrics to tell us how good they are? Yes.

I’ve found very few critics who actually understand ML on a deep level. For instance, Gary Marcus didn’t know what a train/test split was. Unfortunately, rage bait like this makes money.
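
(For anyone unfamiliar with the term: a train/test split just means holding out data the model never sees during training, so the score measures generalization rather than memorization. A minimal sketch with scikit-learn and toy data, nothing to do with any particular benchmark:)

    # Minimal sketch of a train/test split (toy data, scikit-learn).
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=1000, random_state=0)
    # Hold out 20% of the data; the model never sees it during training.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    # Score only on the held-out data, so memorizing the training set can't help.
    print("test accuracy:", model.score(X_test, y_test))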

◧◩◪
3. Night_+j9[view] [source] 2025-06-02 22:05:31
>>mounta+S3
Models are absolutely not improving linearly. They improve logarithmically with size, and we've just about hit the limits of how much compute we can throw at them before it becomes totally unreasonable from a space/money/power/etc. standpoint.

We can use little tricks here and there to try to make them better, but fundamentally they're about as good as they're ever going to get. And none of their shortcomings are growing pains - they're fundamental to the way an LLM operates.
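
To put rough numbers on the "logarithmically with size" point: the usual scaling-law result is that loss falls as a power law in parameter count, so each step up in size buys a smaller absolute improvement than the last. A toy sketch (the constants are roughly the Kaplan et al. 2020 fit, quoted from memory, so treat them as illustrative only):

    # Toy illustration of diminishing returns from parameter count.
    # Power-law form L(N) ~ (N_c / N) ** alpha; constants are roughly the
    # Kaplan et al. (2020) fit, quoted from memory, illustrative only.
    N_c, alpha = 8.8e13, 0.076

    def loss(n_params):
        return (N_c / n_params) ** alpha

    prev = None
    for n in [1e9, 1e10, 1e11, 1e12]:
        cur = loss(n)
        delta = "" if prev is None else f"  (gain {prev - cur:.3f})"
        print(f"{n:.0e} params -> loss {cur:.3f}{delta}")
        prev = cur
    # Each 10x in parameters shaves off less loss than the previous 10x did.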

◧◩◪◨
4. mounta+4l[view] [source] 2025-06-02 23:17:58
>>Night_+j9
Most of the benchmarks are in fact improving linearly, and we often don't even know the model size. You can see this by just looking at the scores over time.

And yes, it often is small things that make models better. It always has been; bit by bit they get more powerful, and this has been happening since the dawn of machine learning.
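
As a sketch of what "just looking at the scores over time" means in practice (the years and scores below are invented placeholders, not real leaderboard data), you can fit a straight line to a benchmark's published results and read off the slope:

    # Toy trend fit: are scores on some benchmark rising roughly linearly?
    # The numbers are invented placeholders, not real leaderboard data.
    import numpy as np

    years  = np.array([2020, 2021, 2022, 2023, 2024], dtype=float)
    scores = np.array([42.0, 51.0, 58.0, 67.0, 75.0])

    slope, intercept = np.polyfit(years, scores, deg=1)  # least-squares line
    print(f"~{slope:.1f} benchmark points gained per year")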

[go to top]