zlacker

[return to "Cursor's latest “browser experiment” implied success without evidence"]
1. paulus+0w[view] [source] 2026-01-16 17:04:21
>>embedd+(OP)
The blog[0] is worded rather conservatively but on Twitter [2] the claim is pretty obvious and the hype effect is achieved [2]

CEO stated "We built a browser with GPT-5.2 in Cursor"

instead of

"by dividing agents into planners and workers we managed to get them busy for weeks creating thousands of commits to the main branch, resolving merge conflicts along the way. The repo is 1M+ lines of code but the code does not work (yet)"

[0] https://cursor.com/blog/scaling-agents

[1] https://x.com/kimmonismus/status/2011776630440558799

[2] https://x.com/mntruell/status/2011562190286045552

[3]https://www.reddit.com/r/singularity/comments/1qd541a/ceo_of...

◧◩
2. deng+sx[view] [source] 2026-01-16 17:10:33
>>paulus+0w
Even then, "resolving merge conflicts along the way" doesn't mean anything, as there are two trivial merge strategies that are always guaranteed to work ('ours' and 'theirs').
◧◩◪
3. paulus+fA[view] [source] 2026-01-16 17:24:08
>>deng+sx
Haha. True, CI success was not part of PR accept criteria at any point.

If you view the PRs, they bundle multiple fixes together, at least according to the commit messages. The next hurdle will be to guardrail agents so that they only implement one task and don't cheat by modifying the CI piepeline

◧◩◪◨
4. former+NB[view] [source] 2026-01-16 17:31:24
>>paulus+fA
If I had a nickel for every time I've seen a human dev disable/xfail/remove a failing test "because it's wrong" and then proceeding to break production I would have several nickels, which is not much, but does suggest that deleting failing tests, like many behaviors, is not LLM-specific.
◧◩◪◨⬒
5. vizzie+TX[view] [source] 2026-01-16 18:58:05
>>former+NB
> but does suggest that deleting failing tests, like many behaviors, is not LLM-specific.

True, but it is shocking how often claude suggests just disabling or removing tests.

◧◩◪◨⬒⬓
6. ewoodr+f12[view] [source] 2026-01-17 01:53:11
>>vizzie+TX
The sneaky move that I hate most is when Claude (and does seem to mostly be a Claude-ism I haven’t encountered on GPT Codex or GLM) is when dealing with an external data source (API, locally polling hardware, etc) as a “helpful” fallback on failures it returns fake data in the shape of the expected output so that the rest of the code “works”.

Latest example is when I recently vibe coded a little Python MQTT client for a UPS connected to a spare Raspberry Pi to use with Home Assistant, and with a just few turns back and forth I got this extremely cool bespoke tool and felt really fun.

So I spent a while customizing how the data displayed on my Home Assistant dashboard and noticed every single data point was unchanging. It took a while to realize because the available data points wouldn’t be expected to change a whole lot on a fully charged UPS but the voltage and current staying at the exact same value to a decimal place for three hours raised my suspicions.

After reading the code I discovered it had just used one of the sample command line outputs from the UPS tool I gave it to write the CLI parsing logic. When an exception occurred in the parser function it instead returned the sample data so the MQTT portion of the script could still “work”.

Tbf Claude did eventually get it over the finish line once I clarified that yes, using real data from the actual UPS was in fact an important requirement for me in a real time UPS monitoring dashboard…

◧◩◪◨⬒⬓⬔
7. teifer+Ey2[view] [source] 2026-01-17 09:25:10
>>ewoodr+f12
Always check the code.

It's similar to early versions of autonomous driving. You's not want to sit in the back seat with nobody at the wheel. That would get you killed guaranteed.

◧◩◪◨⬒⬓⬔⧯
8. DonHop+1N2[view] [source] 2026-01-17 12:25:39
>>teifer+Ey2
And how is that not good for humanity in an evolutionary sense (as long as it doesn't kill or maim anyone else)?

Tesla owner keeps using Autopilot from backseat—even after being arrested:

https://mashable.com/article/tesla-autopilot-arrest-driving-...

[go to top]