If it's someone else's project, they have full authority to decide what is and isn't an issue. With large enough projects, you're going to have enough bad actors, people who don't read error messages, and just downright crazy people. Throw in people using AI for dubious purposes like CVE inflation, and it's even worse.
It's simply a great idea. The mindset should be 'understand what's happening', not 'this is the software's fault'.
The discussion area also serves as a convenient explanation/exploration of the surrounding issues that is easy to find. It reduces the maintainer's workload and should be the default.
> Yeah but a good issue tracker should be able to help you filter that stuff out.
Agreed. This highlights GitHub's issue management system being inadequate.
(Note: I'm the creator/lead of Ghostty)
Downside is that "Facebookization" created a trend where people expect everything to be obvious and achievable in a minimal number of clicks, without configuring anything.
Now "LLMization" will push the trend forward. If I can make a video with Sora by typing what I want in the box, why would I need to click around or type some arcane configuration for a tool?
I don't think it is bad in general - it is only bad for specialist software that you cannot use without deeper understanding, yet the expectation is still there.
Unfortunately there is no such magic bullet for trawling through bug reports from users, but pushing more work out to the reporter can be reasonably effective at avoiding that kind of time wasting. Require that the reporters communicate responsively, that they test things promptly, that they provide reproducers and exact recipes for reproduction. Ask that they run git bisect / creduce / debug options / etc. Proactively close out bugs or mark them appropriately if reporters don't do the work.
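For GitHub specifically, that last step can be scripted. A rough sketch against the REST API, where the "needs-reproducer" label, the 14-day cutoff, and the repo name are all my own placeholders, not anything the platform prescribes:

    # Sketch: close open issues labeled "needs-reproducer" that have gone
    # quiet for more than 14 days. Repo, label name, and timeout are all
    # placeholders; needs a token with access to the repository.
    from datetime import datetime, timedelta, timezone

    import requests

    API = "https://api.github.com"
    REPO = "owner/project"  # hypothetical repository
    HEADERS = {
        "Authorization": "token <YOUR_TOKEN>",
        "Accept": "application/vnd.github+json",
    }
    CUTOFF = datetime.now(timezone.utc) - timedelta(days=14)

    issues = requests.get(
        f"{API}/repos/{REPO}/issues",
        headers=HEADERS,
        params={"labels": "needs-reproducer", "state": "open"},
    ).json()

    for issue in issues:
        if "pull_request" in issue:  # the issues endpoint also returns PRs
            continue
        updated = datetime.fromisoformat(issue["updated_at"].replace("Z", "+00:00"))
        if updated < CUTOFF:
            # Leave a note, then close the issue.
            requests.post(
                f"{API}/repos/{REPO}/issues/{issue['number']}/comments",
                headers=HEADERS,
                json={"body": "Closing for now: no reproducer was provided. "
                              "Happy to reopen once one is available."},
            )
            requests.patch(
                f"{API}/repos/{REPO}/issues/{issue['number']}",
                headers=HEADERS,
                json={"state": "closed"},
            )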
IME, people cannot even articulate what they want when they know what they want, let alone when they don't even understand what they want in the first place.
As far as I'm aware, most large open GitHub projects use tags for that kind of classification. Would you consider that too clunky?
It all stems from the fact that all issues are in this one large pool rather than there being a completely separate list with already vetted stuff that nobody else can write into.
"It is a UI designed to be hard to use."
Well, that’s a paraphrase, but I remember reading that rough idea on their blog years ago, and it strikes me as perfectly fine for many kinds of projects.
1) UI = a clearly documented way to configure all features and make the software work exactly how you want.
2) UI = load a web page and try to do the thing you wanted to do (in this case communicate with some specific people).
FB is clearly terrible at 1 but pretty alright at 2.
Then people expect accounting software to be just: log in, click one or two buttons.
Especially with the new features added last year (parent tickets, better boolean search etc) although I'm not sure if you need to opt in to get those.
In fact, it's become our primary issue tracker at work.
Absolutely. It's a patch that can achieve a similar result, but it's a patch indeed. A major feature of every ticketing system, if not "the" major feature, is the ticket flow. Which should be opinionated. Customizable with the owner's opinion, but opinionated nonetheless. Using labels to cover missing areas in that flow is a clunky patch, in my book.
Speaking for another large open GitHub project:
Absofuckinglutely yes.
I cannot overstate how bad this workflow is. There seems to be a trend now of other platforms becoming more popular (GitLab, Forgejo/Codeberg, etc.), and I hope to god that it either forces GitHub to improve this pile of expletive or makes these "alternate" platforms not so alternate anymore, so we can move off.
> With sufficient thrust, pigs fly just fine. However, this is not necessarily a good idea. It is hard to be sure where they are going to land, and it could be dangerous sitting under them as they fly overhead.
Translation: sure, you can make this work by piling automation on top. But that doesn't make it a good system to begin with, and it won't really produce a robust result either. I'd really rather have a better foundation to start with.
I guess it probably leads to higher-quality issue descriptions at least, but otherwise this seems pretty dumb and user-hostile.
All of this is possible in GitHub Issues and is in fact done by many projects; by this metric I don't see how GitHub Issues is any different from, say, JIRA. In both cases, as you mentioned, someone needs to triage those issues, which would of course be the developers as well. Nothing gained, nothing lost.
On repos I maintain, I use an “untriaged” label for issues and I convert questions to discussions at issue triage time.
One of my pet peeves that I will never understand.
I do not expect users to understand what an error means, but I absolutely expect them to tell me what the error says. I try to understand things from the perspective of a non-technical user, but I cannot fathom why even a non-technical user would think that they don't need to include the contents of an error message when seeking help regarding the error. Instead, it's "When I do X, I get an error".
Maybe I have too much faith in people. I've seen even software engineers become absolutely blind when dealing with errors. Ten years ago, as a tester, I filed a bug ticket with explicit steps that resulted in a "broken pipe error". The engineer closed the ticket as "Can Not Reproduce" with a comment saying "I can't complete your steps because I'm getting a 'broken pipe error'".
No, my mom is not eidetic, and no, she's not going to upload a photo of her living room.
Totally agree with you, though, when the full error message is at least capable of being copied to the clipboard.
I'm not sure I agree.
Reason ?
The old adage "handle errors gracefully".
The "gracefully" part, by definition means taking into account the UX.
Ergo "gracefully" does not mean spitting out either (a) a meaningless generic message or (b) A bunch of incomprehensible tech-speak.
Your error should provide (a) a user-friendly plain-English description and (b) an error ID that you can then cross-reference (e.g. you know "error 42" means the database connection is foobar because the password is wrong)
During your support interaction you can then guide the user through uploading logs or whatever. Preferably through an "upload to support" button you've already carefully coded into your app.
Even if your app is targeting a techie audience, it's the same ethos.
If there is a possibility a techie could solve the problem themselves (e.g. by RTFM or checking the config file), then the onus is on you to provide a suitably meaningful error message to help them on their troubleshooting journey.
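As a rough sketch of that ethos (the error code, messages, and logger name below are all invented for illustration):

    # Sketch: pair a plain-English message the user sees with an error ID
    # that support can cross-reference, and keep the technical detail in
    # the logs rather than in the user's face.
    import logging

    log = logging.getLogger("myapp")  # hypothetical app logger

    class UserFacingError(Exception):
        def __init__(self, code, user_message, detail):
            super().__init__(detail)
            self.code = code                 # e.g. 42 -> "db auth failed"
            self.user_message = user_message # plain English, no tech-speak
            self.detail = detail             # full tech-speak, for the logs

    def connect_to_database():
        # Placeholder failure to illustrate the pattern.
        raise UserFacingError(
            code=42,
            user_message="We couldn't reach your data right now. "
                         "Please contact support and mention error 42.",
            detail="database auth failed: password authentication rejected",
        )

    try:
        connect_to_database()
    except UserFacingError as err:
        log.error("error %s: %s", err.code, err.detail)   # for the devs
        print(f"[Error {err.code}] {err.user_message}")   # for the user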
He even checked "thing A" and "thing B" which "looked fine", but it still "didn't work". A and B had absolutely nothing to do with each other (they solve completely different problems).
I had to ask multiple times what exactly he was trying to do and what exactly he was experiencing.
I've even had "web devs" shout there must be some kind of "network problem" between their workstation and some web server, because they were getting an http 403 error.
So, yeah. Regular users? I honestly have 0 expectations from them. They just observe that the software doesn't do what they expect and they'll complain.
In Azure "private networking", many components still have a public IP and public dns record associated with the hostname of the given service, which clients may try to connect to if they aren't set up right.
That IP will respond with a 403 error if they try to connect to it. So Azure is indirectly training people that 403 potentially IS a "network issue"... (like their laptop is not connected to VPN, or Private DNS isn't set up right, or traffic isn't being routed correctly or some such).
Yeah, I get that's just plain silly, but it's IaaS/SaaS magic cloud abstraction and that's just the way Microsoft does things.
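One way to see which side of that abstraction a client is actually hitting is to resolve the service hostname and check whether it comes back with a private address; a small sketch, with a made-up hostname:

    # Sketch: check whether a service hostname resolves to a private IP
    # (the private endpoint) or to the public front end. The hostname
    # below is a placeholder.
    import ipaddress
    import socket

    host = "mystorage.blob.core.windows.net"  # hypothetical service hostname

    addresses = {info[4][0] for info in socket.getaddrinfo(host, 443, proto=socket.IPPROTO_TCP)}
    for addr in addresses:
        kind = "private" if ipaddress.ip_address(addr).is_private else "public"
        print(f"{host} -> {addr} ({kind})")
    # Resolving to a public IP while expecting the private endpoint usually
    # means DNS, VPN, or routing is misconfigured, and that public IP is
    # the one answering with the 403.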
The rebuke to your comment is right in your comment: "other ticket systems do this by…"
The ticket system does it. As in, it has it built-in and/or well integrated. If GitHub had the same level of integration that other ticket systems achieve with their automation, this'd be a non-issue. But it doesn't, and it's a huge problem.
P.S.: I hate to break it to you, but "I hate to break it to you, but" is quite poor form.
20 years ago, I worked the self-checkout registers in retail. I'd have people scan an item (With the obvious audible "BEEP"), and then stand there confused about what to do next. The machine is telling them "Please place the item in the bag" and they'd tell me they don't know what to do. I'd say "What's the machine telling you?" "'Please place the item in the bag'" "Okay, then place the item in the bag" "Oh, okay"
It's like they don't understand words if a computer is saying them. But if they're coming from a human, they understand just fine, even if it's the exact same words.
"Incorrect password. You may have made a mistake entering it. Please try entering it again." "I don't know what that means, I'm going to call up tech support and just say I'm getting an error when I try to log in."
P.S. I didn't ask
I see this pretty often. These aren't even what you'd call typical users. They are people doing a technical job, hired against technical requirements; an application will spit out a well-written error message in the very domain they're supposed to be professionals in, and their brain turns off. And ya, it ends up in a call to me where I state the same thing and they figure the problem out.
I really don't get it.
In that case (and even sometimes in the more "graceful" cases), we don't always expect the user to know what an error message means.
It's not math, logic, or anything like that. It's the plain ability to read, exactly, without adding or removing anything.
You are not describing a network issue. You're sending requests that by design the origin servers refuse to authorize. This is basic HTTP.
https://datatracker.ietf.org/doc/html/rfc7231#page-59
The origin servers could also return 404 in this use case, but 403 is more informative and easier to troubleshoot, because it means "yeah, your request to this resource could be good, but it's failing some precondition".
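To make the distinction concrete: a genuine network problem never produces an HTTP status at all, because no response comes back. A small sketch (the URL is a placeholder):

    # Sketch: a 403 means the request reached a server and was refused;
    # a real network problem surfaces as a connection/DNS error instead.
    import requests

    url = "https://example.com/protected"  # placeholder URL

    try:
        resp = requests.get(url, timeout=5)
    except requests.exceptions.ConnectionError as exc:
        print(f"network problem, no HTTP response at all: {exc}")
    else:
        if resp.status_code == 403:
            print("the server answered and refused to authorize the request")
        else:
            print(f"server answered with {resp.status_code}")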
Worse still, just “it doesn’t work” without even any steps.
I sometimes gave those users an analogy like going to the doctor or a mechanic and not providing enough information, but I don’t think it worked.
I've seen this with GNSS-assisted driving, with automated driving, and with aircraft autopilot. Something disengages after having earned unwarranted trust; we have lost context, training has faded; and when thrown back in control, the avalanche of context and responsibility is overwhelming, compounded by the lack of context about the previous intermediate steps.
One of the most worrying dangers of automation is this trust (even among supposedly knowledgeable technicians): the transition out of 'the machine is perfect', and, when it hands you back the helm on a failure, an inability to trust the machine again.
The way to avoid entering this state seems to be staying deeply engaged with the inputs and decisions of the system (read: 'automation should be like Iron Man, not like Ultron') and having a deep understanding of the moving parts, the critical design decisions of the system, and traces/visualizations/checklists of the intermediate steps.
I don't know where the corpus of research about this lives (probably in safety-engineering tomes), but it crystallized for me when comparing the crew reactions and behaviour in the Rio-Paris Air France crash and the Qantas A380 accident in Singapore.
For the first one, amongst many, many other errors (crew management, accounting for the weather...) and problematic sensor behaviour, the transcript tells a harrowing story of a crew that no longer trusted their aircraft after recovering from a sensor failure (a failure that ejected them from autopilot and gave them back mostly full control), ignoring their training and many of the alarms the aircraft was rightly blaring at them.
In the second case, a crew that tried to piece together what capabilities they still had after a massive engine failure (an explosion that wrecked most of the other systems with shrapnel), and that stayed enough in the loop to decide when the overwhelmed system was giving wrong instructions (transferring fuel from the unaffected tanks into destroyed, leaking ones).
Human factor studies are often fascinating.
Even when the error message was clearly understandable given my expertise, it took surprisingly long to switch from one mental activity - "pay bills" - to another - "investigate technical problem". And you have to throw away all your short-term memory to switch to another task. So all the rumors about "stupid" users are a direct consequence of how the human mind works.
Patient: My foot hurts.
Wife: Which part of it?
Patient: It all hurts.
Wife: Does your heel hurt?
Patient: No.
Wife: Does your arch hurt?
Patient: No.
Wife: Do your toes hurt?
Patient: This one does.
Wife: Does anything but that one toe hurt?
Patient: No.
Wife: puts on a brave smile
Could the manufacturer solve this in a better way? Probably, but that won't solve the issue the customer has now.
99% of the population have no idea what "Header size exceeded" means, so it absolutely is about understanding the message, if the devs expect people to read the error.
Also arguably the users are kind of right. An error indicates that a program has violated its invariants, which may lead to undefined behavior. Any output from a program after entering the realm of undefined behavior SHOULD be mistrusted, including error messages.
Jokes aside, "upload a photo of her living room" was meant to highlight the ridiculousness of the UX. I believe the designer of that flow had an OKR to decrease the number of reported bugs.
When debugging stuff with the devs at our work, I tend to overexplain as much as I can, because often there’s some deep link between systems that I don’t understand, but they do.
I’m a pretty firm believer in “no stupid questions (or comments)”, because often going in a strange direction that the devs assure me isn’t the problem, actually turns out to be the problem (maybe thing A actually has some connection to thing B in a very abstract way!).
I think just offering a different perspective or theory can help us all solve the problem faster, so sometimes it's worth pulling that thread, even if it seems worthless in the moment.
Maybe I'm just lucky that my engineering colleagues are very patient with me (and maybe less lucky that some of our systems are so deeply intertwined), but I do hope they have more than zero expectations of me, as we mean well and just want to support where we can, knowing full well that y'all are leagues ahead in the smarts department.
But I WOULD expect the user, when sending a message to support, to say they're getting a "Header size exceeded" error, rather than just say "an error".
That's just a stupid limitation and not even a technical one. You could happily send GBs over email. You can also easily filter by allowed attachment size per sender on the recipient side, because by the time the attachment size is known, both pieces of information have already been provided.
Commenting on things is one of the features (to be distinguished from UX/UI) I talked about.
I have given instructions to repeat, but more slowly, and people will still click through errors without a chance to read. I have asked people to go step by step and pause after every step so we can look at what's going on, and they will treat "do thing and close resulting error" as a single step, pausing only after having closed the error.
The only explanation I have that I can understand is that closing errors and popups is a reflex for many people, such that they don't even register doing it. I don't know if this is true or if people would agree with it.
I've seen this with programmers at all levels of seniority. I've seen it with technically capable non-programmers. I've seen it with non-technical people just trying to use some piece of software.
The only thing that's ever been effective for me is to coach people to copy all the text and take screenshots of literally everything that is happening on their screen (I've gotten too many narrow screenshots that obscure useful context, so I ask for whole-screen screenshots only). Some people do well with this. Some never seem to put any effort into the communication.