zlacker

[parent] [thread] 19 comments
1. wpietr+(OP)[view] [source] 2025-06-03 23:51:54
>if respectable people with no stake in selling AI like @tptacek or @kentonv in the other AI thread are saying similar things, you should probably take a closer look.

Maybe? Social proof doesn't mean much to me during a hype cycle. You could say the same thing about tulip bulbs or any other famous bubble. Lots of smart people with no stake get sucked in. People are extremely good at fooling themselves. There are a lot of extremely smart people following all of the world's major religions, for example, and they can't all be right. And whatever else is going on here, there are a lot of very talented people whose fortunes and futures depend on convincing everybody that something extraordinary is happening here.

I'm glad you have found something that works for you. But I talk with a lot of people who are totally convinced they've found something that makes a huge difference, from essential oils to functional programming. Maybe it does for them. But personally, what works for me is waiting out the hype cycle until we get to the plateau of productivity. Those months that you spent figuring out what worked are months I'd rather spend on using what I've already found to work.

replies(3): >>tptace+x1 >>kenton+Z2 >>antifa+kg1
2. tptace+x1[view] [source] 2025-06-04 00:09:55
>>wpietr+(OP)
The problem with this argument is that if I'm right, the hype cycle will continue for a long time before it settles (because this is a particularly big problem to have made a dent in), and for that entire span of time skepticism will have been the wrong position.
replies(2): >>mplanc+fc1 >>wpietr+D13
3. kenton+Z2[view] [source] 2025-06-04 00:29:46
>>wpietr+(OP)
Dude. Claude Code has zero learning curve. You just open the terminal app in your code directory and you tell it what you want, in English. In the time you have spent writing these comments about how you don't care to try it now because it's probably just hype, you could have actually tried it and found out if it's just hype.
replies(2): >>lolind+A81 >>wpietr+Z13
4. lolind+A81[view] [source] [discussion] 2025-06-04 13:07:41
>>kenton+Z2
I've tried Claude Code repeatedly and haven't figured out how to make it work for me on my work code base. It regularly gets lost, spins out of control, and spends a bunch of tokens without solving anything. I totally sympathize with people who find Claude Code to have a learning curve, and I'm writing this while waiting for Cursor to finish a task I gave it, so it's not like I'm unfamiliar with the tooling in general.

One big problem with Claude Code vs Cursor is that you have to pay for the cost of getting over the learning curve. With Cursor I could eat the subscription fee and then goof off for a long time trying to figure out how to prompt it well. With Claude Code a bad prompt can easily cost me $5 a pop, which (irrationally, but measurably) hurts more than the one-time monthly fee for Cursor.

replies(1): >>kenton+Hj1
5. mplanc+fc1[view] [source] [discussion] 2025-06-04 13:36:14
>>tptace+x1
So? The better these tools get, the easier they will be to get value out of. It seems not unwise to let them stabilize before investing the effort and getting the value out, especially if you’re working in one of the areas/languages where they’re still not as useful.

Learning how to use a tool once is easy, relearning how to use a tool every six months because of the rapid pace of change is a pain.

replies(1): >>tptace+Mv1
6. antifa+kg1[view] [source] 2025-06-04 14:00:40
>>wpietr+(OP)
> You could say the same thing about tulip bulbs or any other famous bubble. Lots of smart people with no stake get sucked in.

While I agree with the skepticism, what specifically is the stake here? Most code assists have usable plans in the $10-$20 range. The investors are apparently taking a much bigger risk than the consumer would be in a case like this.

Aside from the horror stories about people spending $100 in one day of API tokens for at best meh results, of course.

replies(2): >>wpietr+O13 >>b3mora+1kb
7. kenton+Hj1[view] [source] [discussion] 2025-06-04 14:24:59
>>lolind+A81
Claude Code actually has a flat-rate subscription option now, if you prefer that. Personally I've found the API cost to be pretty negligible, but maybe I'm out of touch. (I mean, it's one AI-generated commit, Michael. What could it cost, $5?)

Anyway, if you've tried it and it doesn't work for you, fair enough. I'm not going to tell you you're wrong. I'm just bothered by all the people who are out here posting about AI being bad while refusing to actually try it. (To be fair, I was one of them, six months ago...)

8. tptace+Mv1[view] [source] [discussion] 2025-06-04 15:28:38
>>mplanc+fc1
This isn't responsive to what I wrote. Letting the tools stabilize is one thing, makes perfect sense. "Waiting until the hype cycle dies" is another.
replies(1): >>mplanc+6N1
9. mplanc+6N1[view] [source] [discussion] 2025-06-04 17:00:38
>>tptace+Mv1
I suspect the hype cycle and the stabilization curves are relatively in-sync. While the tools are constantly changing, there's always a fresh source of hype, and a fresh variant of "oh you're just not using the right/newest/best model/agent/etc." from those on the hype train.
replies(1): >>tptace+CS1
10. tptace+CS1[view] [source] [discussion] 2025-06-04 17:26:30
>>mplanc+6N1
This is the thing. I do not agree with that, at all. We can just disagree, and that's fine, but let's be clear about what we're disagreeing about, because the whole goddam point of this piece is that nobody in this "debate" is saying the same thing. I think the hype is going to scale out practically indefinitely, because this stuff actually works spookily well. The hype will remain irrational longer than you can remain solvent.
replies(1): >>mplanc+t52
11. mplanc+t52[view] [source] [discussion] 2025-06-04 18:38:20
>>tptace+CS1
Well, generally, that’s just not how hype works.

A thing being great doesn’t mean it’s going to generate outsized levels of hype forever. Nobody gets hyped about “The Internet” anymore, because novel use cases aren’t being discovered at a rapid clip, and it has well and thoroughly integrated into the general milieu of society. Same with GPS, vaccines, Docker containers, Rust, etc., but I mentioned the Internet first since it’s probably on a similar level of societal shift as AI is in the maximalist version of AI hype.

Once a thing becomes widespread and standardized, it becomes just another part of the world we live in, regardless of how incredible it is. It’s only exciting to be a hype man when you’ve got the weight of broad non-adoption to rail against.

Which brings me to the point I was originally trying to make, with a more well-defined set of terms: who cares if someone waits until the tooling is more widely adopted, easy to use, and somewhat standardized prior to jumping on the bandwagon? Not everyone needs to undergo the pain of being an early adopter, and if the tools become as good as everyone says they will, they will succeed on their merits, and not due to strident hype pieces.

I think some of the frustration the AI camp is dealing with right now is because y’all are the new Rust Evangelism Strike Force, just instead of “you’re a bad software engineer if you use a memory-unsafe language,” it’s “you’re a bad software engineer if you don’t use AI.”

replies(2): >>tptace+H72 >>scott_+In2
12. tptace+H72[view] [source] [discussion] 2025-06-04 18:51:17
>>mplanc+t52
Who you calling y'all? I'm a developer who was skeptical about AI until about 6 months ago, and then used it, and am now here to say "this shit works". That's all. I write Go, not Rust.

People have all these feelings about AI hype, and they just have nothing at all to do with what I'm saying. How well the tools work has not much at all to do with the hype level. Usually when someone says that, they mean "the tools don't really work". Not this time.

13. scott_+In2[view] [source] [discussion] 2025-06-04 20:30:05
>>mplanc+t52
The tools are at the point now that ignoring them is akin to ignoring Stack Overflow posts. Basically any time you'd google for the answer to something, you might as well ask an AI assistant. It has a good chance of giving you a good answer. And given how programming works, it's usually easy to verify the information. Just like, say, you would do with a Stack Overflow post.
14. wpietr+D13[view] [source] [discussion] 2025-06-05 02:00:29
>>tptace+x1
I think it depends a lot on what you think "wrong position" means. Skepticism only really goes wrong when it refuses to see the truth in what it's questioning long past the point where that's reasonable. I don't think we're there yet. For example, questions like "What is the long-term effect on a code base?" require seeing the long term. And there are legitimate questions about the ROI of learning and re-learning rapidly changing tools. What's worth it to you may not be worth it in other situations.

I also think hype cycles and actual progress can have a variety of relationships. After Bubble 1.0 burst, there were years of exciting progress without a lot of hype. Maybe we'll get something similar here, as reasonable observers are already seeing the hype cycle falter. E.g.: https://www.economist.com/business/2025/05/21/welcome-to-the...

And of course, it all hinges on you being right. Which I get you are convinced of, but if you want to be thorough, you have to look at the other side of it.

replies(1): >>tptace+m43
15. wpietr+O13[view] [source] [discussion] 2025-06-05 02:03:16
>>antifa+kg1
The stake they and I were referring to is a financial interest in the success of AI. Related is the reputational impact, of course. A lot of people who may not make money do like being seen as smart and cutting edge.

But even if we look at your notion of stake, you're missing huge chunks of it. Code bases are extremely expensive assets, and programmers are extremely expensive resources. $10 a month is nothing compared to the costs of a major cleanup or rewrite.

16. wpietr+Z13[view] [source] [discussion] 2025-06-05 02:04:48
>>kenton+Z2
I could not have, because my standards involve more than a five minute impression from a tool designed to wow people in the first five minutes. Dude.
replies(1): >>kenton+F54
17. tptace+m43[view] [source] [discussion] 2025-06-05 02:26:30
>>wpietr+D13
Well, two things. First, I spent a long time being wrong about this; I definitely looked at the other side. Second, the thing I'm convinced of is kind of objective? Like: these things build working code that clears quality thresholds.

But none of that really matters; I'm not so much engaging on the question of whether you are sold on LLM coding (come over next weekend though for the grilling thing we're doing and make your case then!). The only thing I'm engaging on here is the distinction between the hype cycle, which is bad and will get worse over the coming years, and the utility of the tools.

replies(1): >>wpietr+qW3
18. wpietr+qW3[view] [source] [discussion] 2025-06-05 12:46:20
>>tptace+m43
Thanks! If I can make it I will. (The pinball museum project is sucking up a lot of my time as we get toward launch. You should come by!)

I think that is one interesting question I'll want to answer before adopting this on my projects, but it definitely isn't the only one.

And maybe the hype cycle will get worse and maybe it won't. Like The Economist, I'm starting to see a turn. The amount of money going into LLMs generally is unsustainable, and I think OpenAI's recent raise is a good example: round 11, a $40 billion goal, which they're taking in tranches. Already the largest funding round in history, and it's not the last one they'll need before they're in the black. I could easily see a trough of disillusionment coming in the next 18 months. I agree programming tools could well have a lot of innovation over the next few years, but if that happens against a backdrop of "AI" disillusionment, it'll be a lot easier to see what they're actually delivering.

19. kenton+F54[view] [source] [discussion] 2025-06-05 13:51:38
>>wpietr+Z13
I think you're rationalizing your resistance to change. I've been there!

I have no reason to care whether you use AI or not. I'm giving you this advice just for your sake: Consider whether you are taking a big career risk by avoiding learning about the latest tools of your profession.

20. b3mora+1kb[view] [source] [discussion] 2025-06-08 21:09:22
>>antifa+kg1
The stakes of changing the way so many people work can't be seen in the short term. Could be good or bad. Probably it will be both, in different ways. Margarine instead of butter seemed like a good idea until we noticed that hydrogenation was worse (in some ways) than the cholesterol problem we were trying to fight.

AI company execs also pretty clearly have a politico-economic idea that they are advancing. The tools may stand on their own but what is the broader effect of supporting them?
