Same story with data models: say you have the same data (customer contact details) in slightly different formats in 5 different data models. Which one is correct? Why are the others different?
Ultimately someone has to solve this mystery and that often means pulling people together from different parts of the business, so they can eventually reach consensus on how to move forward.
When an AI can email/message all the key people who hold the institutional knowledge, ask them the right discovery questions (probably over a few rounds, working out which bits are human "hallucinations" that don't make sense), collect that information, and use it to create a solution, then human jobs are in real trouble.
Until then, that AI is just a productivity boost for us.
How is an AI supposed to create documentation, except the most useless box-ticking kind? It only sees the existing implementation, so the best it can do is describe what you can already see (maybe with some stupid guesses added in).
IMHO, if you're going to use AI to "write documentation," that's disposable text and not for distribution. Let the next guy generate his own, and he'll be under no illusions about where the text he's reading came from.
If you're going to write documentation to distribute, you had better type out words from your own damn mind based on your own damn understanding with your own damn hands. Sure, use an LLM to help understand something, but if you personally don't understand, you're in no position to document anything.
There are plenty of workers who refuse to answer questions from a human until it’s escalated far enough up the chain to affect their paycheck / reputation. I’m sure the fact that the intelligence is artificial will only multiply the disdain / noncompliance.
But then maybe there will be strategies for masking where requests come from, like a system that anonymizes all requests for information. Even so, I feel like people would still find a way to ping / walk up to their colleague in meatspace and say “hey, that request came from me, thanks!”
And now there’s an example in the codebase of what not to do, and other AI sessions will see it, and follow that pattern blindly, and… well, we all know where this goes.
see, i actually read and monitor the outputs. i check them against my own internal knowledge. i trial the results with real troubleshooting and real bug fixes/feature requests.
when it's wrong, i fix it. when it's right, great, we now have documentation where none existed before.
dogfood the documentation and you'll know if it's worth using or not.
The AI is there to do the easy part: scan a giant spaghetti bowl and label each noodle. The human's job is to attach descriptions to those noodles.
Sometimes I forget that people on this site simply assume the worst in any given situation.
AI is incapable of capturing the human context that 99.999% of the time exists in people's brains, not in code or context. This is why it is crucial that humans write for humans, rather than relying on an LLM that puts out docs that merely look acceptable.