zlacker

39 comments
1. pera+(OP)[view] [source] 2025-06-03 11:23:32
It's fascinating how over the past year we have had almost daily posts like this one, yet from the outside everything looks exactly the same. Isn't that very weird?

Why haven't we seen an explosion of new start-ups, products or features? Why do we still see hundreds of bug tickets on every issue tracking page? Have you noticed anything different on any changelog?

I invite tptacek, or any other chatbot enthusiast around, to publish project metrics and show some actual numbers.

replies(11): >>kubb+Qf >>simonw+si >>abe94+Hj >>tibors+gq >>tschel+qq >>deadma+xq >>aerhar+gQ >>chinch+x11 >>knallf+lg1 >>danShu+UM1 >>xk_id+gQ1
2. kubb+Qf[view] [source] 2025-06-03 13:16:43
>>pera+(OP)
Most likely there’s a slight productivity increase.

The enthusiasts experience cognitive dissonance: they are pretty sure this is huge and we're living in the future, so they go through various denial strategies when the execs ask them where the money is.

In this case it’s blame. These darned skeptics are ruining it for everyone.

3. simonw+si[view] [source] 2025-06-03 13:32:15
>>pera+(OP)
"Why haven't we seen an explosion of new start-ups, products or features?"

You're posting this question on a forum hosted by YC. Here's a story from March 2024: "YC's latest W24 batch includes 240 companies. A significant portion of the companies have some AI component, with 63% tagged as “Artificial Intelligence” — a notable increase from 51% in the preceding S23 batch and 29% before that." https://jamesin.substack.com/p/analysis-of-ycs-latest-w24-ba...

I've not seen the same analysis for more recent batches.

replies(2): >>pera+er >>maplan+Ku
4. abe94+Hj[view] [source] 2025-06-03 13:38:17
>>pera+(OP)
I don't know - I agree we haven't seen changes to our built environment, but as for an "explosion of new start-ups, products", we sort of are seeing that?

I see new AI-assisted products every day, and a lot of them have real usage. Beyond the code assistant/codegen companies, which are very real examples, here's an anecdote.

I was thinking of writing a new story and found http://sudowrite.com/ via an ad. It's an AI assistant for helping you write, it's already used by a ton of journalists and serious writers, and I am trying it out.

Then I wanted to plan a trip: I tried Google but saw nothing useful, then asked ChatGPT and now have a clear plan.

replies(1): >>creesc+dy
5. tibors+gq[view] [source] 2025-06-03 14:14:04
>>pera+(OP)
Maybe because these companies are smaller and fly under the radar. They require less funding, team sizes are small, and they're probably bankrolled by the founders.

At least that's what I do and what I see among friends.

6. tschel+qq[view] [source] 2025-06-03 14:15:26
>>pera+(OP)
One of our AI-enabled internal projects is moving ~135x faster than before. Of course you can't perfectly compare: new framework, better insights, updated libraries, etc.
replies(1): >>mrkeen+kx
7. deadma+xq[view] [source] 2025-06-03 14:15:56
>>pera+(OP)
Your argument relies on the idea of an "actual product". What is happening—and I'm seeing it firsthand both in my company's codebase and in my personal projects—is that AI is contributing more and more to product development. If this trend continues, we may reach a point where 90% of a product is written by AI.

At that stage, the real value will lie in the remaining 10%—the part that requires human judgment, creativity, or architectural thinking. The rest will be seen as routine: simple instructions, redundant CRUD operations, boilerplate, and glue code.

If we focus only on the end result, humans will inevitably write less code overall. And writing less code means fewer programming jobs.

replies(1): >>creesc+bv
8. pera+er[view] [source] [discussion] 2025-06-03 14:22:11
>>simonw+si
Sorry I don't follow, would you mind clarifying your point?
replies(2): >>creesc+Mt >>simonw+uB
9. creesc+Mt[view] [source] [discussion] 2025-06-03 14:38:06
>>pera+er
Not the person you are asking to clarify, but I can read it in two ways:

1. A huge part of the demographic group visiting HN is biased in favor of AI, given the sort of startups YC decides to fund.

2. The large number of start-ups funded by YC that are related to AI should answer your question.

I am slightly leaning towards the first, combined with a little bit of the second. A lot of people working in startups will be used to building up a structure from scratch, where incorporating the latest "thing" is not that big of a deal. It also means they rarely see the long-term impact of the code they write.

They have a huge blind spot for the reality of existing code bases and company structures where introducing these tools isn't as easy and code needs to be maintained for much longer.

10. maplan+Ku[view] [source] [discussion] 2025-06-03 14:43:39
>>simonw+si
I don't think that refutes the parent's point. So many AI companies, but where are the companies _using_ the AI?
replies(1): >>ndiddy+8w
11. creesc+bv[view] [source] [discussion] 2025-06-03 14:46:54
>>deadma+xq
You said a bunch without saying much. It also doesn't track: if the majority of AI work is supposed to be done by agents capable of handling the entire process, including making PRs, then why isn't there an explosion of such PRs on a large number of open source projects? Even more so, why am I not seeing these PRs on AI-related open source projects? If I need to target it even more directly, why am I not seeing hints of this being applied on code agent repositories?

Call me naive, but you'd think that these specifically would want to demonstrate how well their product works, making an effort to distinguish PRs that are largely the work of their own agents. Yet, I am not seeing that.

I have no doubt that people find use in some aspects of these tools, though I personally subscribe more to the interactive rubber-duck usage of them. But 90%, from where I am standing, seems a very, very long way off.

replies(3): >>philot+vd1 >>dabock+pj1 >>jama21+375
12. ndiddy+8w[view] [source] [discussion] 2025-06-03 14:53:01
>>maplan+Ku
It's also interesting how when you look at the websites for the new wave of AI B2B SaaS startups, most of the customers they list are other AI B2B SaaS startups. It makes me wonder how much of the "AI industry" is just startups sending VC money back and forth to each other.
replies(1): >>dabock+Gf1
13. mrkeen+kx[view] [source] [discussion] 2025-06-03 14:59:50
>>tschel+qq
Bless your metric.

If you end up finishing it in 6 months, are you going to revise that estimate, or celebrate the fact that you don't need to wait until 2092 (6 months × 135 ≈ 67 years) to use the project?

replies(2): >>DirkH+YM >>0x500x+wc2
14. creesc+dy[view] [source] [discussion] 2025-06-03 15:06:51
>>abe94+Hj
> I was thinking of writing a new story and found http://sudowrite.com/ via an ad. It's an AI assistant for helping you write, it's already used by a ton of journalists and serious writers, and I am trying it out.

I am not seeing anything indicating it is actually used by a ton of journalists and serious writers. I highly doubt it is; the FAQ is also paper-thin as far as substance goes. I highly doubt they are training/hosting their own models, yet I see only vague third-party references in their privacy policy. Their pricing is less than transparent, given that they don't really explain how their "credits" translate to actual usage. They blatantly advertise this as being for students, which is problematic in itself.

This ignores all the other issues around depending so heavily on LLMs for your writing. This is an interesting quirk for starters: https://www.theguardian.com/technology/2024/apr/16/techscape... But there are many more issues with relying so heavily on LLM tools for writing.

So this example, to me, actually exemplifies the overselling of capabilities, and the handwaving away of potential issues, that is so prevalent in the AI space.

replies(1): >>supera+tS
15. simonw+uB[view] [source] [discussion] 2025-06-03 15:27:28
>>pera+er
You said "Why haven't we seen an explosion of new start-ups?" so I replied by pointing out that a sizable percentage of recent YC batches are new AI startups.

I categorize that as "an explosion", personally. Do you disagree?

replies(2): >>pera+rM >>mattma+FU
16. pera+rM[view] [source] [discussion] 2025-06-03 16:26:45
>>simonw+uB
Yeah, I don't agree that having more start-ups ticking a box saying that at least one of their components uses some form of AI indicates that we are experiencing a surge in new start-ups.

The number of start-ups getting into YC hasn't really changed YoY: the W23 batch had 282 companies and W24 had 260.

replies(1): >>simonw+nX
17. DirkH+YM[view] [source] [discussion] 2025-06-03 16:29:19
>>mrkeen+kx
Lol, I appreciate your reality check
18. aerhar+gQ[view] [source] 2025-06-03 16:49:58
>>pera+(OP)
This is an important question. The skepticism tracks with my personal experience: I feel 10-20% more productive, but certainly not 5x when measured over a long period of time (say, the last 6 months or more).

I’m nonetheless willing to be patient and see how it plays out. If I’m skeptical about grandiose claims, I must be equally open to the possibility of large-scale effects that are happening but not yet apparent to me.

replies(1): >>novaRo+sd1
19. supera+tS[view] [source] [discussion] 2025-06-03 17:01:01
>>creesc+dy
Hey co-founder of Sudowrite here. We indeed have thousands of writers paying for and using the platform. However, we aim to serve professional novelists, not journalists or students. We have some of both using it, but it's heavily designed and priced for novelists making a living off their work.

We released our own fiction-specific model earlier this year - you can read more about it at https://www.sudowrite.com/muse

A much-improved version 1.5 came out today -- it's preferred 2-to-1 vs Claude in blind tests with our users.

You're right on the FAQ -- alas, we've been very product-focused and haven't done the best job keeping the marketing site up to date. What questions do you wish we'd answer there?

replies(1): >>creesc+R51
20. mattma+FU[view] [source] [discussion] 2025-06-03 17:12:59
>>simonw+uB
I'd disagree, because you've misunderstood his point.

His point is that if AI were so great, loads of non-AI startups would be appearing, because the cost of starting a company should have dropped dramatically and new opportunities for disrupting existing businesses should be opening up.

His point is that they aren't.

You pointing at AI startups in YC actually highlights the opposite of what you think it does. People are still looking for the problems for AI to solve, not solving problems with AI.

Your example is actually a bellwether that there is no great leap forward yet; otherwise, companies delivering real-world value would be taking spots in YC from the AI tooling companies, because they'd be disrupting existing businesses and making lots of money instead of trying to sell AI tools.

It's like pointing at the large batches of YC companies doing crypto 5-10 years ago and saying that proved crypto was a game changer and everyone would soon be using crypto in their development.

The YC companies are focused on the AI tool hype, not making money by solving real world problems.

replies(1): >>simonw+BX
21. simonw+nX[view] [source] [discussion] 2025-06-03 17:29:44
>>pera+rM
The YC acceptance number is a measure of how many startups YC has decided they can handle in a single batch. I think the percentage of accepted startups doing X carries more information than the total number of startups accepted as a whole.
replies(1): >>danShu+bL1
22. simonw+BX[view] [source] [discussion] 2025-06-03 17:30:47
>>mattma+FU
Yeah, I agree. I don't think the business value of this new AI stuff has been unlocked yet. I think it will take a couple more years for the best practices for applying this stuff in an economically valuable way to be a) figured out and b) filter out to the rest of the economy.
23. chinch+x11[view] [source] 2025-06-03 17:54:12
>>pera+(OP)
And builder.ai just filed for bankruptcy after a billion-dollar valuation. Timely.
24. creesc+R51[view] [source] [discussion] 2025-06-03 18:18:38
>>supera+tS
> We indeed have thousands of writers paying for and using the platform. However, we aim to serve professional novelists, not journalists or students. We have some of both using it

Your marketing material quotes a lot of journalists, giving the impression they too use it a lot. I have my reservations about LLMs being used for professional writing, but for the moment I'll assume that Muse handles a lot of those concerns perfectly. I'll try to focus on the more immediate, concrete concerns.

Your pricing page specifically has a "Hobby & Student" section which mentions "Perfect for people who write for fun or for school". This is problematic to me; I'll get to why later, when I answer your question about things missing from the FAQ.

> What questions do you wish we'd answer there?

Well, it would be nice if you didn't hand-wave away some actual potential issues. The FAQ also reads more like loose marketing copy than a policy document.

- What languages does Sudowrite work in?

Very vague answer here. Just be honest and say it highly depends on the amount of source material, and that for many languages the results will likely not be that good.

- Is this magic?

Cute, but it doesn't belong in a FAQ.

- Can Sudowrite plagiarize?

You are doing various things here that are disingenuous.

You basically talk around the issue by saying "well, next-word prediction isn't exactly plagiarism". To me it strongly suggests the models used have been trained on material that can be plagiarized, which in itself is already an issue.

Then there is the blame-shifting to the user, saying that it is up to the user to plagiarize or not. Which is not honest: the user has no insight into the training material.

"As long as your own writing is original, you'll get more original writing out of Sudowrite." This is a probabilistic statement, not a guarantee. It also, again is blame shifting.

- Is this cheating?

Way too generic. Which also brings me to you guys actively marketing to students, which I feel is close to moral bankruptcy. Again, you sort of talk around the issue, basically saying "it isn't cheating as long as you don't use it to cheat". Which is technically true, but come on, guys...

In many contexts (academia, specific writing competitions, journalism), using something like Sudowrite to generate or significantly augment text would be considered cheating or against guidelines, regardless of intent. In fact, in many school and academic settings, using tools like these is detrimental to the goal of having students write their own text from scratch, without aid.

- What public language models does Sudowrite use and how were they trained?

Very vague. It also made me go back to the privacy policy you guys have in place, given that you clearly use multiple providers. I then noticed it was last updated in 2020? I highly doubt you guys have been around for that long, which made me think it was copy-pasted from elsewhere. This shows, as it says "Policy was created with WebsitePolicies.", which makes it generic boilerplate. It honestly makes me wonder how much of it is abided by.

It being so generic also means the privacy policy does not clearly mention these providers, while effectively all user data likely goes to them.

*This is just about the current questions in the FAQ.* The FAQ is oddly lacking in regards to Muse; some of that is on the Muse page itself, but there I am running into similar issues.

- Ethically trained on fiction: "Muse is exclusively trained on a curated dataset with 100% informed consent from the authors."

Bold and big claim. I applaud it if true, but there is no way to verify it other than trusting your word.

There is a lot more I could expand on, but to be frank, that is not my job. You are far from the only AI-related service operating in this problematic way; it might even run deeper in general startup culture. But honestly, even if your service is awesome and ethically entirely sound, I don't feel taken seriously by the public information you provide. It is almost as if you are afraid to be real with customers; to me you are overselling and overhyping. Again, you are far from the only company doing so, you just happened to be brought up by the other user.

replies(1): >>supera+tn1
25. novaRo+sd1[view] [source] [discussion] 2025-06-03 19:04:20
>>aerhar+gQ
There were many similar transformations in recent decades. I remember the first Windows with a true graphical user interface being a big WOW: a productivity boost, you could have all those windows and programs running at the same time! Compare that with DOS, where you normally had just one active user-facing process.
26. philot+vd1[view] [source] [discussion] 2025-06-03 19:04:43
>>creesc+bv
From what I've heard anecdotally, there have been a bunch more PRs and bug reports generated by AI. But I've also heard they're generally trash and just wasting the project maintainers' time.
27. dabock+Gf1[view] [source] [discussion] 2025-06-03 19:18:19
>>ndiddy+8w
Probably a large part of it, honestly. I was just talking with someone the other day about how the whole cloud economy of the past 15 years may have been grossly exaggerated by very loud VC-backed companies flooding the internet with blog posts like this one. The big behemoths - hospitals, governments, law offices, manufacturers - tend to run a lot of local tech for various reasons. And they're also the quietest when it comes to the internet.
replies(1): >>comput+oo4
28. knallf+lg1[view] [source] 2025-06-03 19:22:01
>>pera+(OP)
> Why haven't we seen an explosion of new start-ups, products or features? Why do we still see hundreds of bug tickets on every issue tracking page? Have you noticed anything different on any changelog?

In my personal experience (LLMs and code suggestions only), it's because I use LLMs to code unimportant stuff. Actually thinking about what I want to do with the business code is exhausting, and I'd rather play a little with a fun project. Also, the unit tests that LLMs can now write (and which were too expensive to write myself) were never important to begin with.

29. dabock+pj1[view] [source] [discussion] 2025-06-03 19:40:20
>>creesc+bv
> Then, why isn't there an explosion in such PRs on a large amount of open source projects?

People don't like working for free, either by themselves or with an AI agent.

replies(2): >>Draike+hu1 >>creesc+pu1
30. supera+tn1[view] [source] [discussion] 2025-06-03 20:02:06
>>creesc+R51
Wow, so many assumptions here that don't make sense to me, but I realize we all have different perspectives on this stuff. Thank you for sharing yours! I really do appreciate it.

I won't go line-by-line here defending the cutesy copy and all that since it's not my job to argue with people on the internet either… but on a few key points that interested me:

- language support: I don't believe we're being disingenuous. Sudowrite works well in many languages. We have authors teaching classes on using Sudowrite in multiple languages. In fact, there's one in German tomorrow and one in French next week: https://lu.ma/sudowrite Our community runs classes nearly every day.

- student usage - We do sometimes offer a student discount when people write in to ask for it, and we've had multiple college and high school classes use Sudowrite in writing classes. We'll often give free accounts to the class when professors reach out. I don't believe AI use in education is unethical. I think AI as copilot is the future of most creative work, and it will seem silly for teachers not to incorporate these tools in the future. Many already are! All that said, we do not market to students as you claim. Not because we think it's immoral -- we do not -- but because we think they have better options. ChatGPT is free, students are cheap. We make a professional tool for professional authors, and it is neither free nor cheap. It would not make sense for our business to market to students.

- press quotes -- Yes, we quote journalists because they're the ones who've written articles about us. You can google "New Yorker sudowrite" etc. and see the articles. Some of those journalists also write fiction -- the one who wrote the New Yorker feature had a book he co-wrote with AI reviewed in The New York Times.

> I then noticed it was last updated in 2020? I highly doubt you guys have been around for that long

So many of these objections feel bizarre to me because they're trivial to fact-check. Here's a New York Times article that mentions us, written in 2020. We were one of the first companies to use LLMs in this wave and sought and gained access to GPT-3 prior to public API availability. https://www.nytimes.com/2020/11/24/science/artificial-intell...

replies(1): >>creesc+2u1
31. creesc+2u1[view] [source] [discussion] 2025-06-03 20:40:09
>>supera+tn1
> Wow, so many assumptions here that don't make sense to me, but I realize we all have different perspectives on this stuff.

I realize they don't make sense to you; otherwise the website would contain different information. If I had to frame it more clearly, I'd say that for a company whose core product revolves around clear writing, your website's information is surprisingly vague and evasive in some areas. I simply think it would make for a stronger, more confident message if that information was just there. Which, might I remind you, I have said is true for many companies selling LLM-based services and products.

> language support: I don't believe we're being disingenuous. Sudowrite works well in many languages.

I am sure it does, in those languages with the highest presence in the training data. French and German doing well doesn't surprise me, given the numbers I have seen there. I think this FAQ section could be much clearer on that.

> we do not market to students as you claim.

I guess your pricing page specifically having a "Hobby & Student" tier, which mentions "Perfect for people who write for fun or for school", doesn't count as marketing to students?

> I don't believe AI use in education is unethical.

Neither do I, if it is part of the curriculum and the goal. But for many language-related courses, including writing, using assistive tooling (certainly tooling that heavily impacts style) defeats the didactic purpose.

> So many of these objections feel bizarre to me because they're trivial to fact-check. Here's a New York Times article that mentions us, written in 2020. We were one of the first companies to use LLMs in this wave and sought and gained access to GPT-3 prior to public API availability.

Okay, I already went out of my way to go over the entire website because you asked. I am not doing a hit piece on you specifically; as I said, you just happened to be linked by the other person. The date was an assumption on my side, but a reasonable one given the age of most LLM companies. More importantly, that is not the main point I am making there anyway.

Since 2020 the landscape around LLMs has changed drastically, including the way privacy is handled and looked at. You would think this would result in changes to the policy over that period. In fact, I would think the introduction of your own model would at the very least warrant some changes there. Not to mention that using copy-pasted boilerplate for 5 years does not give me a confident signal about how seriously you are taking privacy.

While you are not obligated to respond to me, as I am just one random stranger on the internet, I would be remiss if I didn't make it clear that it is the overall tone and the combined points that make me critical, not just the ones that piqued your interest.

32. Draike+hu1[view] [source] [discussion] 2025-06-03 20:41:45
>>dabock+pj1
The entire open source community disagrees.
33. creesc+pu1[view] [source] [discussion] 2025-06-03 20:42:49
>>dabock+pj1
1) Open source projects see plenty of commits where people happily work for "free".

2) Did you stop reading after that sentence? Because there is a whole lot more that follows, specifically:

> If I need to target it even more directly, why am I not seeing hints of this being applied on code agent repositories? Call me naive, but you'd think that these specifically would want to demonstrate how well their product works, making an effort to distinguish PRs that are largely the work of their own agents. Yet, I am not seeing that.

34. danShu+bL1[view] [source] [discussion] 2025-06-03 22:38:50
>>simonw+nX
> The YC acceptance number is a measure of how many startups YC has decided they can handle in a single batch.

Agreed.

> I think percentage of accepted startups doing X carries more information than total number of startups accepted as a whole.

Disagreed, I think it carries almost no information at all. The only thing your analysis signifies is that companies believe that using AI will make YCombinator more likely to fund them. They are competing for a limited number of slots and looking for ways to appeal to YCombinator. And the only thing that signifies is that YCombinator, specifically, likes funding AI startups right now.

This is not surprising. If you're an investment firm, funding AI companies is a good bet, in the same way that funding Web3 firms used to be a genuinely good bet. Investors ride market trends that are likely to make a company balloon in value or get bought out by a larger company. Investors are not optimizing for "what redefines the industry"; investors optimize to make a return on investment. And those are two entirely different things.

But it's also not surprising given YCombinator's past - the firm has always kind of gravitated towards hype cycles. It would be surprising if YCombinator wasn't following a major tech trend.

If you want evidence that we're seeing an explosion of companies, you need to look at something much more substantial than "YCombinator likes them".

And that's especially the case given that OP wasn't asking "is the tech industry gravitating towards AI?" They were asking, "are we seeing an explosion of new economic activity?"

And frankly, we're not. There are a lot of reasons for that which could have nothing to do with AI (tariffs and general market trends are probably a bigger issue). But we really aren't seeing the kind of transformation that is being talked about.

35. danShu+UM1[view] [source] 2025-06-03 22:55:07
>>pera+(OP)
So, as an example of what convincing evidence could look like to me: I started out pretty firmly believing that Rust was a fad.

Then Mozilla and Google did things with it that I did not think were possible for them to do. Not "they wrote a bunch of code with it", but stuff like "they eliminated an entire class of bugs from a section of their codebase."

Then I watched a bunch of essentially hobbyist developers write kernel drivers for a brand new architecture, and watched them turn brand new MacBooks into one of the best-in-class ways to run Linux. I do not believe they could have done that with their resources, at that speed, using C or C++.

And at that point, you kind of begrudgingly say, "okay, I don't know if I like this, but fine, heck you, whatever. I guess it might genuinely redefine some parts of software development, you win."

So this is not impossible. You can convince devs like me that your tools are real and they work.

And frankly, there are a billion problems in modern computing that are high impact - stuff like Gnome accessibility, competitive browser engines, FOSS UX, collaboration tools. Entire demographics have serious problems that could be solved by software if there were enough expertise, time, and resources to solve them. Often, the issue at play is that there is no intersection between people who are well acquainted with those communities and understand their needs, and people who have experience writing software.

In theory, LLMs help solve this. In theory. If you're a good programmer and suddenly have a tool that makes you 4x as productive a developer, you could have a very serious impact on a lot of communities right now. I have not seen it happen. Not in the enterprise world, but also not in the FOSS world, not in communities with fewer technical resources, not in the public sector. And again, I can be convinced of this: I have dismissed tools that I later changed my opinion on because I saw the impact and couldn't ignore it: Rust, NodeJS, Flatpak, etc.

The problem is people have been telling me that Coding Assistants (and now Coding Agents) are one of those tools for multiple years now, and I'm still waiting to see the impact. I'm not waiting to see how many companies pick them up, I'm not waiting to see the job market. I'm waiting to see if this means that real stuff starts getting written at a higher quality significantly faster, and I don't see it.

I see a lot of individual devs showing me hobby projects, and a lot of AI startups, and... frankly, not much else.

36. xk_id+gQ1[view] [source] 2025-06-03 23:28:07
>>pera+(OP)
Simply put, if we’re living during such a major technological revolution, why does using software suck in such disastrous ways that were unthinkable even ten years ago?
37. 0x500x+wc2[view] [source] [discussion] 2025-06-04 04:28:36
>>mrkeen+kx
Yeah, these 100x numbers being thrown out are pretty wild. It dawned on me during the Shopify CEO post a while back: 100x is unfathomable!

You did a year's worth of work in 3 days? That is what 100x means.

38. comput+oo4[view] [source] [discussion] 2025-06-04 21:18:35
>>dabock+Gf1
That sounds like "working as intended." The main selling points of the cloud are liquidity and accessibility. VC-backed startups are basically the target demographic.
39. jama21+375[view] [source] [discussion] 2025-06-05 04:56:06
>>creesc+bv
More than likely loads of the PRs you see _are_ mostly AI work; you just don't know that because the developers cleaned them up and posted them as their own. Most PRs where I work are like this, from what I gather from speaking to the developers.
replies(1): >>creesc+Q26
40. creesc+Q26[view] [source] [discussion] 2025-06-05 14:22:41
>>jama21+375
We must be moving in different circles, as I am not seeing the same. Even if I went along with that reasoning, it ignores the lack of highly visible work on projects that would want to advertise the effectiveness of their own tooling.

As I already said, I see a distinct lack of such labeled activity on open source AI code tools.

You'd think that those projects creating agentic tooling would want to show how effective they are. In fact, I would expect the people behind such projects to be all over threads like this pointing to tangible PRs, commits and other tasks these agents can apparently do so well.

Yet, all I am getting as pushback is vague handwaving "trust me, I am seeing it" claims. Even the blog post itself is nothing but that.
