zlacker

[return to "My AI skeptic friends are all nuts"]
1. grey-a+ba 2025-06-02 22:10:44
>>tablet+(OP)
I’d love to see the authors of effusive praise of generative AI like this provide proof of the unlimited powers of their tools in code. If GAI (or agents, or whatever comes next …) is so effective, it should be quite simple to prove that by creating an AI-only company and in short order producing huge amounts of serviceable code that does useful things. So far I’ve seen no sign of this, and the best use case seems to be generating text or artwork which fools humans into thinking it has coherent meaning, as our minds love to fill gaps and spot patterns even where there are none. It’s also pretty good at reproducing things it has seen with variations - that can be useful.

So far, in my experience watching small to medium-sized companies try to use it for real work, it has been occasionally useful for exploring APIs, odd bits of knowledge, etc., but overall it has wasted more time than it has saved. I see very few signs of progress.

The time has come for LLM users to put up or shut up - if it’s so great, stop telling us and show us the code it generated on its own.

2. ofjcih+cc 2025-06-02 22:23:41
>>grey-a+ba
Honestly, it’s really unfortunate that LLMs seem to have picked up the same hype men that attached themselves to blockchains, etc.

LLMs are very useful. I use them as a better way to search the web, to generate code that I know I can debug but don’t want to write, and to conversationally interact with data.

The problem is that the hype machine has set expectations so high, and dismissed criticism so thoroughly, that LLMs can’t possibly measure up. This creates the divide we see here.

3. vohk+Hd 2025-06-02 22:32:43
>>ofjcih+cc
I think I agree with the general thrust, but I have to say I've yet to be impressed with LLMs for web search. I think part of that comes from most people using Google as the benchmark, which has been hot garbage for years now. It's not hard to be better than having to dig three sponsored results deep before you can even start parsing the list of SEO spam, let alone find the thing you were actually searching for.

But compared to using Kagi, I've found LLMs end up wasting more of my time by returning a superficial survey with frequent oversights and mistakes. In the final tally, I've still found it faster to just do it myself.

I will say I do love LLMs for getting a better idea of what to search for, and for picking details out of larger blocks.

4. jcranm+mg 2025-06-02 22:48:04
>>vohk+Hd
> I think part of that comes from most people using Google as the benchmark, which has been hot garbage for years now.

Honestly, I think part of the decline of Google Search comes from Google trying to increase the amount of AI in search.
