zlacker

[parent] [thread] 10 comments
1. dehugg+(OP)[view] [source] 2026-02-02 21:14:09
surprising considering you just listed two primary use cases (exploring codebases/data models + creating documentation)
replies(4): >>gmueck+N5 >>s5fs+z6 >>palmot+5x >>heavys+E61
2. gmueck+N5[view] [source] 2026-02-02 21:39:24
>>dehugg+(OP)
I don't find this surprising. Code and data models encode the results of accumulated business decisions, but nothing about the decision-making process or rationale. Most of the time, this information is stored only in people's heads, so any automated tool is necessarily blind.
replies(1): >>phatfi+bj
3. s5fs+z6[view] [source] 2026-02-02 21:41:53
>>dehugg+(OP)
Exploring a codebase tells you WHAT it's doing, but not WHY. In older codebases you'll often find weird sections of code that solved a problem that may or may not still exist. Like maybe there was an import process that always left three carriage returns at the end of each record, so now you've got some funky "let's remove up to three carriage returns" function that probably isn't needed. But are you 100% sure it's not needed?
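
Something like this, say (a made-up sketch, the name is invented):

    def strip_trailing_crs(record: str, max_crs: int = 3) -> str:
        # Defensive cleanup for a long-gone(?) import that left up to
        # three trailing carriage returns on each record. Is anything
        # still producing records like this? Nobody remembers.
        for _ in range(max_crs):
            if not record.endswith("\r"):
                break
            record = record[:-1]
        return record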

Same story with data models: let's say you have the same data (customer contact details) in slightly different formats in 5 different data models. Which one is correct? Why are the others different?
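
Concretely, something like this (all system names and formats invented for illustration):

    # the "same" phone number across a few of those models
    crm_contact      = {"phone": "+1 (555) 010-4477"}
    billing_customer = {"phone_number": "5550104477"}
    legacy_cust      = {"tel": "555-010-4477\r"}  # hello, carriage return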

Ultimately someone has to solve this mystery and that often means pulling people together from different parts of the business, so they can eventually reach consensus on how to move forward.

replies(1): >>btown+Ki2
4. phatfi+bj[view] [source] [discussion] 2026-02-02 22:28:37
>>gmueck+N5
This succinctly captures one of the key issues with (current) AI actually solving real problems outside of small "sandboxes" where it has all the information.

When an AI can email/message all the key people who hold the institutional knowledge, ask them the right discovery questions (probably over a few rounds, working out which bits are human "hallucinations" that don't make sense), collect that information, and use it to create a solution, then human jobs are in real trouble.

Until then, AI is just a productivity boost for us.

replies(1): >>datsci+v01
5. palmot+5x[view] [source] 2026-02-02 23:21:12
>>dehugg+(OP)
> creating documentation

How is an AI supposed to create documentation, except the most useless box-ticking kind? It only sees the existing implementation, so the best it can do is describe what you can already see (maybe with some stupid guesses added in).

IMHO, if you're going to use AI to "write documentation," that's disposable text and not for distribution. Let the next guy generate his own, and he'll be under no illusions about where the text he's reading came from.

If you're going to write documentation to distribute, you had better type out words from your own damn mind based on your own damn understanding with your own damn hands. Sure, use an LLM to help understand something, but if you personally don't understand, you're in no position to document anything.

replies(1): >>dehugg+5S3
6. datsci+v01[view] [source] [discussion] 2026-02-03 02:14:53
>>phatfi+bj
The AI will also have to be trained to be diplomatic and maybe even cunning, because, as I can personally attest, answering questions from an AI is an extremely grating and disillusioning experience.

There are plenty of workers who refuse to answer questions from a human until it’s escalated far enough up the chain to affect their paycheck / reputation. I’m sure that the intelligence being artificial will only multiply the disdain / noncompliance.

But then maybe there will be strategies for masking where requests come from, like a system that anonymizes all requests for information. Even so, I feel like people would still find a way to ping / walk up to their colleague in meatspace and say “hey, that request came from me, thanks!”

7. heavys+E61[view] [source] 2026-02-03 03:01:06
>>dehugg+(OP)
Please don't feed people LLM-generated docs
replies(1): >>dehugg+1R3
8. btown+Ki2[view] [source] [discussion] 2026-02-03 13:03:52
>>s5fs+z6
Adding that this just gets worse when databases are peppered with direct access by vibe-coded applications that don’t look at production data or gather these insights before deciding “yeah this sounds like the format of text that should go in the column with this name, and that’s the column I should use.”

And now there’s an example in the codebase of what not to do, and other AI sessions will see it, and follow that pattern blindly, and… well, we all know where this goes.

9. dehugg+1R3[view] [source] [discussion] 2026-02-03 20:02:37
>>heavys+E61
i love the default assumption that "ai generated" automatically excludes "human verified".

see, i actually read and monitor the outputs. i check them against my own internal knowledge. i trial the results with real troubleshooting and real bug fixes/feature requests.

when it's wrong, i fix it. when it's right, great, we now have documentation where none existed before.

dogfood the documentation and you'll know if it's worth using or not.

replies(1): >>heavys+TW4
10. dehugg+5S3[view] [source] [discussion] 2026-02-03 20:06:16
>>palmot+5x
What's with this assumption that there's no human involvement? I don't just say "hey scan this 2m loc repo and give me some docs"... that would be insane.

The AI is there to do the easy part: scan a giant spaghetti bowl and label each noodle. The human's job is to attach descriptions to those noodles.

Sometimes I forget that people on this site simply assume the worst in any given situation.

11. heavys+TW4[view] [source] [discussion] 2026-02-04 02:25:27
>>dehugg+1R3
Literally several times a week, I have to close PRs containing docs that clearly no one read, because they are blatantly wrong. This started happening after LLMs. If what you're claiming is happening, I'm not seeing it anywhere.

AI is incapable of capturing the human context that 99.999% of the time exists in people's brains, not in code. This is why it is crucial that humans write for humans, not an LLM that puts out docs that merely look acceptable.
