zlacker

[return to “Cursor's latest ‘browser experiment’ implied success without evidence”]
1. pavlov+j91[view] [source] 2026-01-16 19:45:33
>>embedd+(OP)
The comment pointing out that this week-long experiment produced nothing more than a non-functional wrapper around Servo (an existing Rust browser engine) should be at the top:

>>46649046

◧◩
2. pera+gl1[view] [source] 2026-01-16 20:41:47
>>pavlov+j91
Has anyone tried to rewrite some popular open source project with AI? I imagine modern LLMs could be very effective at license-washing/plagiarizing dependencies; it could make an interesting new benchmark, too
◧◩◪
3. gorkae+Xv1[view] [source] 2026-01-16 21:40:52
>>pera+gl1
I think it's fair enough to count porting as a subset of rewriting, in which case there are several successful experiments out there:

- JustHTML [1], which in practice [2] is a port of html5ever [3] to Python.

- justjshtml, which is a port of JustHTML to JavaScript :D [4].

- MiniJinja [5] was recently ported to Go [6].

All three projects have one thing in common: a comprehensive test suite that was used to guardrail and guide the AI.
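
To make that concrete, here is a minimal sketch of such a guardrail: a parametrized conformance test that pins the port's output to the reference implementation's. The `justhtml.parse` call, the `serialize()` method, and the fixture layout are illustrative assumptions, not taken from any of the repos above.

    # Hypothetical guardrail for an AI-driven port: each fixture pairs an
    # HTML input with the output the reference implementation produces.
    import json
    from pathlib import Path

    import pytest

    import justhtml  # the ported implementation under test (assumed API)

    FIXTURES = json.loads(Path("tests/fixtures/reference_cases.json").read_text())

    @pytest.mark.parametrize("case", FIXTURES, ids=lambda c: c["name"])
    def test_port_matches_reference(case):
        # The AI can restructure the port freely; a change is only accepted
        # while every output stays identical to the reference output.
        tree = justhtml.parse(case["input"])
        assert tree.serialize() == case["expected"]

With a suite like this in place, the model iterates inside a fixed boundary: any hallucinated behavior fails loudly instead of slipping into the port.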

References:

1. https://github.com/EmilStenstrom/justhtml

2. https://friendlybit.com/python/writing-justhtml-with-coding-...

3. https://github.com/servo/html5ever

4. https://simonwillison.net/2025/Dec/15/porting-justhtml/

5. https://github.com/mitsuhiko/minijinja

6. https://lucumr.pocoo.org/2026/1/14/minijinja-go-port/

◧◩◪◨
4. daxfoh+3D1[view] [source] 2026-01-16 22:25:54
>>gorkae+Xv1
Interesting. IIUC, the transformer architecture and its attention mechanism were originally designed for machine translation. Maybe after peeling back a few layers, that's still all they're really doing.
◧◩◪◨⬒
5. nathan+CK1[view] [source] 2026-01-16 23:21:28
>>daxfoh+3D1
This has long been how I explain LLMs to non-technical people: they are text transformation engines. Many common, tedious activities basically amount to transforming text from one well-known form into another (even some kinds of reasoning are this), so LLMs are very useful for them. But all they do is transform text between well-known forms.
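
In code, that framing is literally one function. A minimal sketch, assuming the official OpenAI Python client (the model name and prompts are illustrative):

    # Sketch of the "text transformation engine" framing; assumes the
    # OpenAI Python client and an OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()

    def transform(text: str, source_form: str, target_form: str) -> str:
        """Map text from one well-known form into another."""
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            messages=[
                {"role": "system",
                 "content": f"Convert the user's {source_form} into "
                            f"{target_form}. Output only the result."},
                {"role": "user", "content": text},
            ],
        )
        return resp.choices[0].message.content

    # CSV -> JSON, legalese -> plain English, Python -> Go: the same call,
    # because each is a translation between familiar text forms.
    print(transform("name,age\nAda,36", "CSV", "JSON"))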
◧◩◪◨⬒⬓
6. daxfoh+Xp3[view] [source] 2026-01-17 17:29:50
>>nathan+CK1
And while it appears that lots of problems can be contorted into translation, "if all you have is a hammer, everything looks like a nail". Maybe we do hit a brick wall unless we can come up with a model that aligns more closely with actual human reasoning.
[go to top]