ever had a client second-guess you by replying with a screenshot from GPT?
ever asked anything in a public group only to have a complete moron reply with a screenshot from GPT or - at least a bit of effort there - a copy/paste of the wall of text?
no, people have no shame. they have a need for a little bit of (borrowed) self-importance and validation.
Which is why I applaud every code of conduct that has public ridicule as punishment for wasting everybody's time.
The client in your example isn't a (presumably) professional developer, submitting code to a public repository, inviting the scrutiny of fellow professionals and potential future clients or employers.
Unless I have been reading very different science fiction, I think it's definitely not that.
I think it's more the confidence and seeming plausibility of LLM answers.
But (as someone else described) GPTs and other current-day LLMs are probabilistic - and yet 99% of what they produce seems feasible enough.
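To make "probabilistic" concrete, here's a toy sketch in plain Python - the scores are made up for illustration, not any real model's numbers. The point: the model turns scores into a probability distribution and samples from it, so a fluent-but-wrong continuation can beat a correct one.

    import math
    import random

    def sample_next_token(logits, temperature=1.0):
        # Softmax: convert raw scores into probabilities, then sample.
        # Sampling rewards plausibility, not truth.
        scaled = [l / temperature for l in logits]
        m = max(scaled)  # subtract the max for numerical stability
        exps = [math.exp(l - m) for l in scaled]
        total = sum(exps)
        weights = [e / total for e in exps]
        return random.choices(range(len(logits)), weights=weights, k=1)[0]

    # Hypothetical scores for continuations of "The capital of Australia is ..."
    tokens = ["Sydney", "Canberra", "Melbourne"]
    logits = [4.2, 4.0, 2.5]
    print(tokens[sample_next_token(logits)])  # the wrong "Sydney" wins ~half the time

That last line is the whole problem in miniature: the wrong answer gets sampled often because it's plausible, not because it's right.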
They are sure they know better because they get a yes man doing their job for them.
So their boss may be naive, but not hilariously so - because that is, in fact, how the world works[1]! And as a boss, they probably have some understanding of it.
The thing they miss is that AI fundamentally[2] cannot provide this kind of "correct" output, and more importantly, that the "trillion dollar companies" not only don't guarantee that, they actually explicitly inform everyone everywhere, including in the UI, that the output may be incorrect.
So it's mostly failure to pay attention and realize they're dealing with an exception to the rule.
--
[0] - Actually hurt you, I'm ignoring all the fitness/healthy eating fads and "ultraprocessed food" bullshit.
[1] - On a related note, it's also something security people often don't get: real world security relies on being connected - via contracts and laws and institutions - to "men with guns". It's not perfect, but scales better.
[2] - Because LLMs are not databases, but - to a first-order approximation - little people on a chip!
I raise an issue or PR after carefully reviewing someone else's open source code.
They ask Claude to answer me; neither they nor Claude understood the issue.
Well, at least it's their repo, they can do whatever.
delicate feelers are like octopus arms
Cybersecurity is also an exception here.
"men with guns" only work for cases where the criminal must be in the jurisdiction of the crime for the crime to have occurred.
If you rob a bank in London, you must be in London, and the British police can catch you. If you rob a bank somebody else, the British police doesn't care. If you hack a bank in London though, you may very well be in North Korea.
E.g.
"A random drunk guy on the subway suggested that this wouldn't be a problem if we were running the latest SOL server version" "Huh, I guess that's worth testing"
We are currently facing a political climate trying to tear many of these safeguards down. Some people really think "caveat emptor" is some kind of natural, efficient, ideal way of life.
There's so much CYA because there is an A that needs C'ing
But yes, look at the US c.2025-6. As long as the leader sounds assertive, some people will eat the blatant lies that can be disproven even by the same AI tools they laud.
Maybe a million-dollar company needs to be compliant. A billion-dollar company can start to ward off any loopholes with lawsuits instead of compliance.
A trillion-dollar company will simply change the law, and fight governments over it, rather than worrying about compliance to begin with.
Still, I meant that in the other direction: not a request, but a gift/favor. "Guess culture" would be going out of your way to make the gift valuable for the receiver - matching what they need, and not generating extra burden. "Ask culture" would be doing whatever's easiest that matches the explicit requirements, and throwing it over the fence.
This being a big part of the problem - their false answers are more plausible and convincing than the truth. The output almost always seems feasible - true or not is an entirely different matter.
Historically, when most things fail they produce obvious nonsense; when they don't, they produce something at least related to the truth (perhaps biased or mis-calibrated). LLM output can be both highly plausible and completely unrelated to reality.