Twenty years ago, I worked the self-checkout registers in retail. I'd have people scan an item (with the obvious audible "BEEP"), and then stand there confused about what to do next. The machine would be telling them "Please place the item in the bag" and they'd tell me they didn't know what to do. I'd say, "What's the machine telling you?" "'Please place the item in the bag.'" "Okay, then place the item in the bag." "Oh, okay."
It's like they don't understand words if a computer is saying them, but if the exact same words come from a human, they understand just fine.
"Incorrect password. You may have made a mistake entering it. Please try entering it again." "I don't know what that means, I'm going to call up tech support and just say I'm getting an error when I try to log in."
I see this pretty often. These aren't even what you'd call typical users, either. They're people doing a technical job, hired with technical requirements; an application will spit out a well-written error message squarely in the domain they're supposed to be professionals in, and their brains just turn off. And yeah, it ends up in a call to me where I read them the same message and they figure the problem out.
I really don't get it.
I've seen this with GNSS-assisted driving, with automated driving, and with aircraft autopilot. Something in us disengages: we place unwarranted trust in the system, we lose context, training fades; and when we're thrown back into control, the avalanche of context and responsibility is overwhelming, compounded by having missed the intermediate steps that led there.
One of the most worrying dangers of automation is this trust (even from supposedly knowledgeable technicians): the abrupt transition out of 'the machine is perfect' when it hands you back the helm on a failure, and the inability to trust the machine at all afterwards.
The way to avoid entering this state seems to be to stay deeply engaged with the inputs and decisions of the system (read: 'automation should be like Iron Man, not like Ultron'), and to have a deep understanding of its moving parts and critical design decisions, along with traces/visualizations/checklists of the intermediate steps.
I don't know where the corpus of research on this lives (probably in safety-engineering tomes), but it crystallized for me when comparing the crew reactions and behaviour in the Rio-Paris Air France crash and the Qantas A380 accident in Singapore.
In the first case, amongst many, many other factors (crew resource management, accounting for the weather...) and problematic sensor behaviour, the transcript tells the harrowing story of a crew that no longer trusted their aircraft after recovering from a sensor failure (the failure having ejected them from autopilot and handed them back nearly full control), ignoring their training and many of the genuine alarms the aircraft was rightly blaring at them.
In the second case, the crew pieced together what capabilities they still had after a massive engine failure (an uncontained explosion) wrecked most of the other systems with shrapnel, and they stayed enough in the loop to recognize when the overwhelmed system was giving bad instructions (such as transferring fuel from the unaffected tanks into destroyed, leaking ones).
Human-factors studies are often fascinating.
Also, arguably, the users are kind of right. An error indicates that a program has violated its invariants, which may lead to undefined behavior. Any output from a program after it has entered the realm of undefined behavior SHOULD be mistrusted, including its error messages.
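For what it's worth, here's a contrived C sketch of that point (my own illustration, not anything from the thread): once an invariant is broken, the error-reporting code is running on the same possibly-corrupted state as everything else, so the message it prints is just as suspect.

    #include <stdio.h>
    #include <string.h>

    /* Invariant: err_code is always 0, 1, or 2. */
    struct session {
        char username[8];
        int  err_code;   /* 0 = ok, 1 = bad password, 2 = account locked */
    };

    int main(void) {
        struct session s;
        s.err_code = 1;   /* the "real" error: bad password */

        /* Bug: no bounds check. Copying a long name writes past username[],
           which is undefined behavior. On many common layouts it silently
           clobbers err_code with ASCII bytes; on others it may do nothing,
           or crash. That's the point: nothing after this is guaranteed. */
        const char *input = "a_login_name_that_is_far_too_long";
        strcpy(s.username, input);

        /* The error-reporting path itself now runs on corrupted state,
           so the message it prints can't be trusted either. */
        switch (s.err_code) {
            case 1:  puts("Incorrect password. Please try again."); break;
            case 2:  puts("Account locked. Contact support.");      break;
            default: puts("Unknown error (state corrupted).");      break;
        }
        return 0;
    }

Of course, most "Incorrect password" dialogs come from a perfectly healthy expected-error path rather than a broken invariant, so the mistrust is usually misplaced in practice, even if it's defensible in principle.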