zlacker

[return to "Cursor's latest “browser experiment” implied success without evidence"]
1. paulus+0w[view] [source] 2026-01-16 17:04:21
>>embedd+(OP)
The blog [0] is worded rather conservatively, but on Twitter [2] the claim is pretty obvious and the hype effect is achieved [1].

CEO stated "We built a browser with GPT-5.2 in Cursor"

instead of

"by dividing agents into planners and workers we managed to get them busy for weeks creating thousands of commits to the main branch, resolving merge conflicts along the way. The repo is 1M+ lines of code but the code does not work (yet)"

[0] https://cursor.com/blog/scaling-agents

[1] https://x.com/kimmonismus/status/2011776630440558799

[2] https://x.com/mntruell/status/2011562190286045552

[3] https://www.reddit.com/r/singularity/comments/1qd541a/ceo_of...

◧◩
2. deng+sx[view] [source] 2026-01-16 17:10:33
>>paulus+0w
Even then, "resolving merge conflicts along the way" doesn't mean anything, as there are two trivial merge strategies that are always guaranteed to work ('ours' and 'theirs').
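A quick illustration of why these strategies "always work" (hypothetical throwaway repo; note that git only ships a full `-s ours` strategy, while the 'theirs' counterpart exists as the `-X theirs` option to the default strategy):

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q demo; cd demo
git config user.email demo@example.com
git config user.name demo

# Base commit, then create a conflict between two branches.
echo base > file.txt
git add file.txt; git commit -qm base
base_branch=$(git rev-parse --abbrev-ref HEAD)
git checkout -qb feature
echo feature-change > file.txt; git commit -qam feature
git checkout -q "$base_branch"
echo main-change > file.txt; git commit -qam main

# '-s ours' "resolves" every conflict by discarding the other side
# entirely -- no conflict markers, no thought, guaranteed to succeed.
git merge -q -s ours feature -m "merge feature (ours)"
cat file.txt    # prints "main-change": feature's edit silently dropped
```

So "thousands of merges resolved" is compatible with zero conflicts ever being actually reconciled.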
◧◩◪
3. paulus+fA[view] [source] 2026-01-16 17:24:08
>>deng+sx
Haha. True, CI success was not part of the PR acceptance criteria at any point.

If you view the PRs, they bundle multiple fixes together, at least according to the commit messages. The next hurdle will be to guardrail agents so that they only implement one task and don't cheat by modifying the CI pipeline.
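One common guardrail for the "don't touch the CI pipeline" part (a sketch, not something from the article; paths and team name are hypothetical) is a CODEOWNERS rule that forces human review on any PR touching pipeline config:

```
# .github/CODEOWNERS (hypothetical)
# Any change under these paths requires approval from @ci-maintainers
# before merge, assuming branch protection enforces code-owner review.
/.github/workflows/  @ci-maintainers
/ci/                 @ci-maintainers
```

That stops the "make CI pass by editing CI" escape hatch, though not the "make tests pass by editing tests" one discussed below.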

◧◩◪◨
4. former+NB[view] [source] 2026-01-16 17:31:24
>>paulus+fA
If I had a nickel for every time I've seen a human dev disable/xfail/remove a failing test "because it's wrong" and then proceed to break production, I would have several nickels, which is not much, but does suggest that deleting failing tests, like many behaviors, is not LLM-specific.
◧◩◪◨⬒
5. vizzie+TX[view] [source] 2026-01-16 18:58:05
>>former+NB
> but does suggest that deleting failing tests, like many behaviors, is not LLM-specific.

True, but it is shocking how often Claude suggests just disabling or removing tests.

◧◩◪◨⬒⬓
6. icedch+RP1[view] [source] 2026-01-17 00:01:11
>>vizzie+TX
A coworker opened a PR full of AI slop. One of the first things I do is check whether the tests pass. Of course, they didn't. I asked them to fix the tests, since there's no point in reviewing broken code.

"Fix the tests." This was interpreted literally, and assert status == 200 got changed to assert status == 500 in several locations. Some tests required more complex edits to make them "pass."

Inquiries about the tests went unanswered. Eventually the 2000 lines of slop were closed without merging.

◧◩◪◨⬒⬓⬔
7. saghm+9x2[view] [source] 2026-01-17 09:10:22
>>icedch+RP1
After a certain point the response to low effort vibe code has to be vibe reviews. Failing tests? Bad vibes, close without merging. Much more efficient than vibe coding too, since no AI is needed.
[go to top]