zlacker

[return to "Why users cannot create Issues directly"]
1. ok1234+Mh[view] [source] 2026-01-02 04:29:02
>>xpe+(OP)
100% agree.

If it's someone else's project, they have full authority to decide what is and isn't an issue. With a large enough project, you're going to get plenty of bad actors, people who don't read error messages, and just downright crazy people. Throw in people using AI for dubious purposes like CVE inflation, and it's even worse.

◧◩
2. Sohcah+XU1[view] [source] 2026-01-02 18:20:59
>>ok1234+Mh
> people who don't read error messages

One of my pet peeves that I will never understand.

I do not expect users to understand what an error means, but I absolutely expect them to tell me what the error says. I try to understand things from the perspective of a non-technical user, but I cannot fathom why even a non-technical user would think that they don't need to include the contents of an error message when seeking help regarding the error. Instead, it's "When I do X, I get an error".

Maybe I have too much faith in people. I've seen even software engineers go absolutely blind when dealing with errors. About 10 years ago, as a tester, I filed a bug ticket with explicit steps that resulted in a "broken pipe error". The engineer closed the ticket as "Can Not Reproduce" with a comment saying "I can't complete your steps because I'm getting a 'broken pipe error'".

◧◩◪
3. tracer+072[view] [source] 2026-01-02 19:32:55
>>Sohcah+XU1
> I do not expect users to understand what an error means

I'm not sure I agree.

Reason?

The old adage: "handle errors gracefully".

The "gracefully" part, by definition, means taking the UX into account.

Ergo "gracefully" does not mean spitting out either (a) a meaningless generic message or (b) a bunch of incomprehensible tech-speak.

Your error should provide (a) a user-friendly plain-English description and (b) an error ID that you can then cross-reference (e.g. you know "error 42" means the database connection is foobar because the password is wrong).

During your support interaction you can then guide the user through uploading logs or whatever. Preferably through an "upload to support" button you've already carefully coded into your app.

Even if your app is targeting a techie audience, it's the same ethos.

If there is a possibility a techie could solve the problem themselves (e.g. by RTFM or checking the config file), then the onus is on you to provide a suitably meaningful error message to help them on their troubleshooting journey.
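
To make the error-ID idea concrete, here's a minimal sketch (Python; the names, the ID table, and the password check are all hypothetical, not from any particular app) of an error type that carries both a plain-English message for the user and an ID that support can cross-reference:

    # Hypothetical sketch of the "plain-English message + error ID" pattern:
    # the user reads the friendly sentence, support maps the ID to a cause.
    class AppError(Exception):
        def __init__(self, error_id: int, user_message: str, detail: str):
            super().__init__(detail)
            self.error_id = error_id          # what the user quotes to support
            self.user_message = user_message  # plain English, no tech-speak
            self.detail = detail              # goes into logs / "upload to support"

    # Internal cross-reference table; the user never needs to understand this.
    KNOWN_ERRORS = {42: "Database connection failed: wrong password in config"}

    def connect_to_database(password: str) -> None:
        if password != "correct-horse":       # placeholder check for the sketch
            raise AppError(
                error_id=42,
                user_message="We couldn't reach your data. Please contact "
                             "support and mention error 42.",
                detail="DB auth failed: password rejected by server",
            )

    try:
        connect_to_database("hunter2")
    except AppError as e:
        print(e.user_message)                            # shown to the user
        print(f"[log] error {e.error_id}: {e.detail}")   # kept for support

The point is just the separation: the message the user sees needs no interpretation, while the ID and detail give support (or the techie user) an unambiguous handle.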

◧◩◪◨
4. Sohcah+kz2[view] [source] 2026-01-02 22:23:52
>>tracer+072
There are people who, if anything goes remotely wrong when using a computer, completely lose all notions of language comprehension. You can make messages as non-technical as possible and provide troubleshooting steps, and they just throw their hands up and say "I'm not a computer person! I don't know what it's telling me!"

20 years ago, I worked the self-checkout registers in retail. I'd have people scan an item (with the obvious audible "BEEP") and then stand there confused about what to do next. The machine is telling them "Please place the item in the bag" and they'd tell me they don't know what to do. I'd say "What's the machine telling you?" "'Please place the item in the bag'" "Okay, then place the item in the bag" "Oh, okay"

It's like they don't understand words if a computer is saying them. But if they're coming from a human, they understand just fine, even if it's the exact same words.

"Incorrect password. You may have made a mistake entering it. Please try entering it again." "I don't know what that means, I'm going to call up tech support and just say I'm getting an error when I try to log in."

◧◩◪◨⬒
5. pixl97+Q73[view] [source] 2026-01-03 02:56:19
>>Sohcah+kz2
>completely lose all notions of language comprehension

I see this pretty often. These aren't even what you'd call typical users, in theory: they're people doing a technical job, hired against technical requirements. An application will spit out a well-written error message in the domain they're supposed to be professionals in, and their brain turns off. And yeah, it ends up in a call to me where I say the same thing and they figure the problem out.

I really don't get it.

◧◩◪◨⬒⬓
6. touist+w54[view] [source] 2026-01-03 12:58:25
>>pixl97+Q73
I think it has something to do with expectations around automation. We seem to be wired, or trained, to trust machines fully, and we enter a state of helplessness when we think we are being driven by a machine.

I've seen this with GNSS-assisted driving, with automated driving, and with aircraft autopilot. We give the system unwarranted trust, we lose context, training fades; and when something disengages and throws us back in control, the avalanche of context and responsibility is overwhelming, compounded by the lack of context about the previous intermediate steps.

One of the most worrying dangers of automation is this trust (even from supposedly knowledgeable technicians): the transition out of "the machine is perfect", and then, when it hands you back the helm on a failure, an inability to trust the machine again.

The way to avoid entering this state seems to be staying deeply engaged in the inputs and decisions of the system (read: "automation should be like Iron Man, not like Ultron"), and having a deep understanding of its moving parts and critical design decisions, plus traces/visualizations/checklists of the intermediate steps.

I don't know where the corpus of research about this lives (probably in safety engineering tomes), but it crystallized for me when comparing the crew reactions and behaviour in the Rio-Paris Air France crash and the Qantas A380 accident near Singapore.

For the first one, amongst many, many other errors (crew management, accounting for the weather...) and problematic sensor behaviour, the transcript tells a harrowing story of a crew that no longer trusted their aircraft after a sensor failure (a failure that kicked them out of autopilot and handed them back mostly full control), ignoring their training and many of the alarms the aircraft was rightly blaring at them.

In the second case, a crew that tried to work out what capabilities they still had after a massive engine failure (an explosion that wrecked most of the other systems with shrapnel), and that stayed enough in the loop to recognise when the overwhelmed system was giving wrong instructions (such as transferring fuel from the unaffected tanks into destroyed, leaking ones).

Human factor studies are often fascinating.
