zlacker

[return to "Tldraw pauses external contributions due to AI slop"]
1. sbonda+p5[view] [source] 2026-01-16 00:15:31
>>pranav+(OP)
Seems like reading the code is now the real work. AI writes PRs instantly, but reviewing them still takes time. Everything flipped. Expect more projects to follow - maintainers can just use AI themselves without needing external contributions.
2. bigstr+ci[view] [source] 2026-01-16 02:05:59
>>sbonda+p5
Understanding (not necessarily reading) always was the real work. AI makes people less productive because it's speeding up the thing that wasn't hard (generating code), while generating additional burden on the thing that was hard (understanding the code).
3. Kronis+s82[view] [source] 2026-01-16 17:40:29
>>bigstr+ci
> AI makes people less productive because it's speeding up the thing that wasn't hard (generating code), while generating additional burden on the thing that was hard (understanding the code).

Only if the person doesn't want the AI's help in understanding how it works, in which case it doesn't matter whether they use AI or not (except that without it they couldn't push some slop out the door at all).

If you want that understanding, I find that AI is actually excellent at it, when given proper codebase search tools and an appropriately smart model (Claude Code, Codex, Gemini) - easily browsing features spread across dozens of files, some details of which I would absolutely miss in enterprisey Java projects.

I think the next tooling revolution will probably be automatically feeding the model all of the information about how the current file fits within the codebase - not just syntax errors and linter messages, but also dependencies, usages, all that.
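A minimal sketch of that idea, assuming a Python codebase: use the standard-library `ast` module to collect a file's imports (its dependencies) and scan sibling files for mentions of its top-level names (its usages), then render a summary block a tool could prepend to the model's context. All function names here are hypothetical, not from any existing tool.

```python
# Hypothetical sketch: gather how one file fits into the codebase
# (dependencies + usages) so a tool can feed that to a model alongside
# the file itself. Illustrative only, not an existing tool's API.
import ast
from pathlib import Path


def file_dependencies(path: Path) -> set[str]:
    """Modules this file imports (its outgoing dependencies)."""
    tree = ast.parse(path.read_text())
    deps = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            deps.update(alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            deps.add(node.module)
    return deps


def file_usages(path: Path, repo: Path) -> dict[str, list[Path]]:
    """Other files in the repo that mention this file's top-level names."""
    tree = ast.parse(path.read_text())
    names = {node.name for node in tree.body
             if isinstance(node, (ast.FunctionDef, ast.ClassDef))}
    usages = {name: [] for name in names}
    for other in repo.rglob("*.py"):
        if other == path:
            continue
        text = other.read_text()
        for name in names:
            if name in text:  # crude textual match; a real tool would resolve symbols
                usages[name].append(other)
    return usages


def context_header(path: Path, repo: Path) -> str:
    """A summary block a tool could prepend to the model's prompt."""
    deps = ", ".join(sorted(file_dependencies(path))) or "(none)"
    lines = [f"# File: {path.name}", f"# Imports: {deps}"]
    for name, files in sorted(file_usages(path, repo).items()):
        where = ", ".join(f.name for f in files) or "(unused)"
        lines.append(f"# {name} used in: {where}")
    return "\n".join(lines)
```

The textual match is deliberately crude; real tooling would hang this off an LSP server or the IDE's symbol index instead of grepping, but the shape of the prepended context would look much the same.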

In my eyes, the "ideal" code would be simple and intuitive enough that you don't need to spend hours understanding how a feature works, OR use any sort of AI tool, or codebase visualization as a graph (dependency and usage tracking), or anything like that - it just seems that you can't represent a lot of problems that way easily, given time constraints and how badly Spring Boot et al. fucks up any codebase it touches with accidental complexity.

But until then, AI actually helps, a lot. Maybe I just don't have enough working memory (or time) to go through 30 files and sit down and graph it out in a notebook like I used to, but in lieu of that an AI-generated summary (alongside docs/tests/whatever I can get, though it seems like humans hate writing docs and ADRs, at least in the culture here) is good enough.

At the same time, AI will also happily do incomplete refactorings, ignore the standards of the rest of the codebase, and invent abstractions where none are needed, if you don't have the tooling to prevent it automatically, e.g. prebuild checks (or the ability to catch it yourself in code review). I think the issue is largely limited context sizes (without going broke) - if I could give the AI the FULL 400k SLoC codebase and the models didn't actually start breaking down at those context lengths, it'd be pretty great.
