I'm not sure I agree.
Reason?
The old adage "handle errors gracefully".
The "gracefully" part, by definition means taking into account the UX.
Ergo "gracefully" does not mean spitting out either (a) a meaningless generic message or (b) A bunch of incomprehensible tech-speak.
Your error should provide (a) a user-friendly plain-English description and (b) an error ID that you can then cross-reference (e.g. you know "error 42" means the database connection is foobar because the password is wrong).
During your support interaction you can then guide the user through uploading logs or whatever. Preferably through an "upload to support" button you've already carefully coded into your app.
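A minimal sketch of what that split could look like (the names, the error number, and the dialog/log wiring here are all made up for illustration, not from any particular framework): a stable error ID support can cross-reference, a plain-English message for the user, and the tech-speak reserved for the log that the "upload to support" button would ship.

    // Hypothetical shape: friendly text for the dialog, stable code + detail for support.
    interface AppError {
      code: number;            // stable ID support can look up, e.g. 42
      userMessage: string;     // plain-English text shown to the user
      internalDetail: string;  // tech-speak that goes to the log, not the dialog
    }

    // Example entry, assuming "error 42" means the DB credentials are wrong.
    const DB_AUTH_FAILED: AppError = {
      code: 42,
      userMessage: "We couldn't reach your data right now. Please try again, or contact support and mention error 42.",
      internalDetail: "database connection refused: authentication failed for configured user",
    };

    // The user reads the friendly message; the detail lands in the log the
    // "upload to support" button would send along.
    function reportError(err: AppError): void {
      console.error(`[error ${err.code}] ${err.internalDetail}`); // -> log file / support upload
      alert(`${err.userMessage} (error ${err.code})`);            // -> what the user actually sees
    }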
Even if your app is targeting a techie audience, it's the same ethos.
If there is a possibility a techie could solve the problem themselves (e.g. by RTFM or checking the config file), then the onus is on you to provide a suitably meaningful error message to help them on their troubleshooting journey.
20 years ago, I worked the self-checkout registers in retail. I'd have people scan an item (with the obvious audible "BEEP"), and then stand there confused about what to do next. The machine is telling them "Please place the item in the bag" and they'd tell me they don't know what to do. I'd say "What's the machine telling you?" "'Please place the item in the bag'" "Okay, then place the item in the bag" "Oh, okay"
It's like they don't understand words if a computer is saying them. But if they're coming from a human, they understand just fine, even if it's the exact same words.
"Incorrect password. You may have made a mistake entering it. Please try entering it again." "I don't know what that means, I'm going to call up tech support and just say I'm getting an error when I try to log in."
I see this pretty often. These aren't even what you'd call typical users, in theory: they are people doing a technical job, hired against technical requirements. An application spits out a well-written error message in the very domain they're supposed to be professionals in, and their brain turns off. And yeah, it ends up in a call to me where I state the same thing and they figure the problem out.
I really don't get it.
In cases like that (and even sometimes in the more "graceful" cases), we can't always expect the user to know what an error message means.
I've seen this with GNSS-assisted driving, with automated driving, and with aircraft autopilot. Something disengages: we give the system unwarranted trust, we lose context, training fades; and when we're thrown back in control, the avalanche of context and responsibility is overwhelming, compounded by the lack of context about the previous intermediate steps.
One of the most worrying dangers of automation is this trust (even from supposedly knowledgeable technicians) and the transition out of 'the machine is perfect': when it hands you back the helm on a failure, an inability to trust the machine again.
The way to avoid entering this state seems to be to stay deeply engaged with the inputs and decisions of the system (read: 'automation should be like Iron Man, not like Ultron') and to have a deep understanding of the moving parts, the critical design decisions of the system, and traces/visualizations/checklists of the intermediate steps.
I don't know where the corpus of research about this lives (probably in safety-engineering tomes), but it crystallized for me when comparing the crew reactions and behaviour in the Rio-Paris Air France crash and in the Qantas A380 accident in Singapore.
For the first one, amongst many, many other errors (be it crew management, accounting for the weather...) and problematic sensor behaviour, the transcript tells a harrowing story of a crew that no longer trusted their aircraft after recovering from a sensor failure (the failure that ejected them from autopilot and gave them back mostly full control), ignoring their training and many of the alarms the aircraft was rightly blaring at them.
In the second case, the crew tried to piece together what capabilities they still had after a massive engine failure (an explosion that wrecked most of the other systems with shrapnel), and stayed enough in the loop to decide when the overwhelmed system was giving wrong instructions (transferring fuel from the unaffected tanks into the destroyed, leaking ones).
Human factor studies are often fascinating.
Even when the error message was clearly understandable given my expertise, it took a surprisingly long time to switch from one mental activity, "pay bills", to another, "investigate a technical problem". And you have to throw away all your short-term memory to switch to the other task. So all the rumors about "stupid" users are a direct consequence of how the human mind works.
99% of the population have no idea what "Header size exceeded" means, so it absolutely is about understanding the message, if the devs expect people to read the error.
Also arguably the users are kind of right. An error indicates that a program has violated its invariants, which may lead to undefined behavior. Any output from a program after entering the realm of undefined behavior SHOULD be mistrusted, including error messages.
But I WOULD expect the user, when sending a message to support, to say they're getting a "Header size exceeded" error, rather than just saying "an error".