Other people apparently don't have this feeling at all. Maybe I shouldn't have been surprised by this, but I've definitely been caught off guard by it.
It's not necessarily maliciousness or laziness; it could simply be enthusiasm paired with a lack of experience.
I’ll bet there are also people trying to farm accounts with plausible histories for things like anonymous supply-chain attacks.
ever had a client second-guess you by replying with a screenshot from GPT?
ever asked anything in a public group, only to have a complete moron reply with a screenshot from GPT or - at least a bit of effort there - a copy/paste of the wall of text?
no, people have no shame. they have a need for a little bit of (borrowed) self-importance and validation.
Which is why I applaud every code of conduct that has public ridicule as punishment for wasting everybody's time
The client in your example isn't a (presumably) professional developer, submitting code to a public repository, inviting the scrutiny of fellow professionals and potential future clients or employers.
Memory leaks and issues with the memory allocator are a months-long process to pin on the JVM...
In the early days (bug parade times), such bugs were a lot more common; nowadays, I'd say it'd be extreme naivete to consider the JVM the culprit from the get-go.
And this is one half of why I think
"Bad AI drivers will be [..] ridiculed in public."
isn't a good clause. The other is that ridiculing others, no matter what, is just not decent behavior. Putting it as a rule in your policy document only makes it worse.
I am not saying one has to lose their shame, but rather, at best, come to understand it.
Tit for tat
i.e. imagine a change that is literally a small diff, that is easy to describe as a mere user and not a developer, and yet requires quite a lot of deep understanding merely to submit as a PR (build the project! run the tests! fill out the PR template!).
Really, a lot of this ends up being a failure mode that many projects fall into at some point, where "config" lives in the code and what could be a simple change and test instead requires a lot of friction (see the sketch below).
Obviously not all submissions are going to be like this, but I've tried a few little ones like that, where I would normally just leave whatever annoyance I have alone, but instead thought, "hey, maybe it's a 10-minute faff with AI and a PR".
The structure of the project's incentives kind of creates this. Increasing the cost of contribution is a valid strategy, of course, but from a holistic project point of view it is not always a good one, especially assuming you are dealing not with adversarial contributors but merely slightly incompetent ones.
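To make the friction concrete, here's a minimal sketch (the names and values are hypothetical, not from any particular project): the exact same one-value tweak is either a full clone-build-test-PR exercise or a one-line edit to a file, depending on where the project put it.

    # Hypothetical example: a user-tunable value baked into the source.
    # Changing it means cloning the repo, editing code, building,
    # running the tests, and opening a PR.
    RETRY_LIMIT = 3  # "config" living in the code

    # Versus loading it from a config file, where the same tweak is a
    # one-line edit that needs no development setup at all.
    import json
    from pathlib import Path

    def load_retry_limit(path: str = "config.json") -> int:
        """Read retry_limit from a JSON config file, with a default."""
        config_file = Path(path)
        if config_file.exists():
            return json.loads(config_file.read_text()).get("retry_limit", 3)
        return 3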
Shaming people for violating valid social norms is absolutely decent behaviour. It is the primary mechanism we have to establish social norms. When people do bad things that are harmful to the rest of society, shaming them is society's first-level corrective response to get them to stop doing bad things. If people continue to violate norms, then society's higher levels of corrective behaviour can involve things like establishing laws and fining or imprisoning people, but you don't want to start with that level of response. Although putting these LLM spammers in jail does sound awfully enticing to me in a petty way, it's probably not the most constructive way to handle the problem.
The fact that shamelessness is taking over in some cultures is another problem altogether, and I don't know how you deal with that. Certain cultures have completely abdicated the ability to influence people's behaviour socially without resorting to heavy-handed intervention, and on the internet, this becomes everyone in the world's problem. I guess the answer is probably cultivation of spaces with strict moderation to bar shameless people from participating. The problem could be mitigated to some degree if a Github-like entity outright banned these people from their platform so they could not continue to harass open-source maintainers, but there is no platform like that. It unfortunately takes a lot of unrewarding work to maintain a curated social environment on the internet.
Just like with email spam I would expect that a big part of the issue is that it only takes a minority of shameless people to create a ton of contribution spam. Unlike email spam these people actually want their contributions to be tied to their personal reputation. Which in theory means that it should be easier to identify and isolate them.
Unless I have been reading very different science fiction I think it’s definitely not that.
I think it’s more the confidence and seeming plausibility of LLM answers
For those curious:
I think this is interesting too. I've noticed the difference in dating/hook-up contexts. The people you're talking about also end up getting laid more, but that group also has a very large intersection with sex pests and other shitty people. The thing they have in common, though, is that they just don't care what other people think about them. That leads some of them to be successful if they are otherwise good people... or to become borderline or actual criminals if not. I find it fascinating, actually: how does this difference come about, and can it actually be changed, or is it something we get early in life or from the genetic lottery?
Too little or too much shame can lead to issues.
The problem is that no one tells you what too little or too much actually is, and there are many different situations where you need to figure it out on your own.
So I think sometimes people just get it wrong but ultimately everyone tries their best. Truly malicious shameless people are extremely rare in my experience.
For the topic at hand I think a lot of these “shameless” contributions come from kids
But (as someone else described), GPTs and other current-day LLMs are probabilistic, yet 99% of what they produce seems feasible enough.
Basically teenagers. But it feels like the rebellious teenager phase lasts longer nowadays. Zero evidence besides vibes and anecdotes, but still.
Or maybe it's me that's getting old?
Of course, the vast majority of open-source work is the same cog-in-a-machine work, and with low-effort AI-assisted contributions, the non-hero-coding work becomes more prevalent than ever.
They are sure they know better because they get a yes man doing their job for them.
I can't imagine the level of laziness or entitlement required for a student (or any developer) to blame their tools so quickly without conducting a thorough investigation.
So their boss may be naive, but not hilariously so - because that is, in fact, how the world works[1]! And as a boss, they probably have some understanding of it.
The thing they miss is that AI fundamentally[2] cannot provide this kind of "correct" output, and more importantly, that the "trillion dollar companies" not only don't guarantee that, they actually explicitly inform everyone everywhere, including in the UI, that the output may be incorrect.
So it's mostly failure to pay attention and realize they're dealing with an exception to the rule.
--
[0] - Actually hurt you, I'm ignoring all the fitness/healthy eating fads and "ultraprocessed food" bullshit.
[1] - On a related note, it's also something security people often don't get: real world security relies on being connected - via contracts and laws and institutions - to "men with guns". It's not perfect, but scales better.
[2] - Because LLMs are not databases, but - to a first-order approximation - little people on a chip!
Just like pain is a good thing: it signals you to remove your hand from the stove.
What negative experience do you think should instead be created for people breaking these rules?
I raise an issue or PR after carefully reviewing someone else's open source code.
They ask Claude to answer me; neither they nor Claude understood the issue.
Well, at least it's their repo, they can do whatever.
The grift culture has changed that completely; now students face a lot of pressure to spam out PRs just to show they have contributed something.
it's easy to not have shame when you have no skin in the game... this is similar to how narcissists think so highly of themselves, it's never their fault
delicate feelers are like octopus arms
I am seeing the doomed future of AI math: just received another set theory paper by a set theory amateur with an AI workflow and an interest in the continuum hypothesis.
At first glance, the paper looks polished and advanced. It is beautifully typeset and contains many correct definitions and theorems, many of which I recognize from my own published work and in work by people I know to be expert. Between those correct bits, however, are sprinkled whole passages of claims and results with new technical jargon. One can't really tell at first, but upon looking into it, it seems to be meaningless nonsense. The author has evidently hoodwinked himself.
We are all going to be suffering under this kind of garbage, which is not easily recognizable for the slop it is without effort. It is our regrettable fate.
You, on the other hand, have honed your craft for many years. The more you learn, the more you discover there is to learn; that is, you realize how little you know. They don't have this. _At all_. They see this as a "free ticket to the front row", and when we politely push back (we should be way harsher in this; it's the only language they understand), all they hear is "he doesn't like _me_", which is an escape.
You know how much work you ask of me when you open a PR on my project; they don't. They will just see it as "why don't you let me join, since I have AI I should have the same skill as you"... unironically.
In other words, these "other people" that we talk about haven't worked a day in the field in their life, so they simply don't understand much of it, however they feel they understand everything of it.
Any smart interviewer knows that you have to look at actual code of the contributions to confirm it was actually accepted and that it was a non-trivial change (e.g. not updating punctuation in the README or something).
In my experience this is where the PR-spammers fall apart in interviews. When they proudly tell you they’re a contributor to a dozen popular projects and you ask for direct links to their contributions, they start coming up with excuses for why they can’t find them or their story changes.
There are of course lazy interviewers who will see the resume line about having contributed to popular projects and take it as strong signal without second guessing. That’s what these people are counting on.
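For what it's worth, this check is cheap to script. Here's a minimal sketch against GitHub's public search API (the username and repo below are placeholders) that lists a candidate's merged PRs so you can read the actual diffs:

    # Sketch: list merged PRs by a given author in a given repo via the
    # GitHub search API, so an interviewer can eyeball the actual changes.
    import requests

    def merged_prs(author: str, repo: str) -> list[dict]:
        resp = requests.get(
            "https://api.github.com/search/issues",
            params={"q": f"author:{author} type:pr is:merged repo:{repo}"},
            headers={"Accept": "application/vnd.github+json"},
            timeout=10,
        )
        resp.raise_for_status()
        return [
            {"title": item["title"], "url": item["html_url"]}
            for item in resp.json()["items"]
        ]

    # Placeholder names; read the diffs, don't just count them.
    for pr in merged_prs("some-candidate", "ghostty-org/ghostty"):
        print(pr["title"], pr["url"])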
Cybersecurity is also an exception here.
"men with guns" only work for cases where the criminal must be in the jurisdiction of the crime for the crime to have occurred.
If you rob a bank in London, you must be in London, and the British police can catch you. If you rob a bank somewhere else, the British police don't care. If you hack a bank in London, though, you may very well be in North Korea.
E.g.
"A random drunk guy on the subway suggested that this wouldn't be a problem if we were running the latest SOL server version" "Huh, I guess that's worth testing"
To demand public humiliation doesn’t just put you on the same level as our medieval ancestors, who responded to violations of social norms with the pillory - it’s actually even worse: the contemporary internet pillory never forgets.
A permanent public internet pillory isn’t just useless against the worst offenders, who are shameless anyway. It’s also permanently damaging to those who are still learning societal norms.
The Ghostty AI policy lacks any nuance or generosity in this regard. No consideration for the age or experience of the offender. No consideration for how serious the offense actually was. It's more like a Grim Trigger strategy than Tit for Tat.
My guess is it's mostly people from countries with a culture that reward shameless behavior.
Two immediate ones I can think of:
- The yellow hue/sepia tone of any image coming out of ChatGPT
- People responding to text by starting with "Good Question!" or inserting hard-to-memorize-or-type unicode symbols like → into text where they obviously wouldn't have used them and have no history of doing so.
That has NEVER led to a positive result in the whole of human history, especially since the second group is much larger than the first.
Shame is also not the same thing as "public humiliation". They are publicly humiliating themselves. Pointing out that what they publicly chose to do themselves is bad is in no way the same as coercing them into being humiliated, which is what "public humiliation as a medieval punishment" entails. For example, the medieval practice of dragging a woman through the streets nude in order to humiliate her is indeed abhorrent, but you can hardly complain if you march through the streets nude of your own volition, against other people's desires, and are then publicly shamed for it.
It is the understanding of these dynamics that led us to our current system of law: punitive justice, but forgiveness through pardons.
An example I have of this is from high school where there were guys that were utterly shameless in asking girls for sex. The thing is it worked for them. Regardless of how many people turned them down they got enough of a hit rate it was an effective strategy. Simply put there was no other social mechanism that provided enough disincentive to stop them.
And to take the position of devil's advocate: why should they feel shame? Shame is typically a moral construct of the culture you're raised in, and what to be ashamed of can vary widely.
For example, if you're raised in the culture of Abrahamic religions, it's very likely you're told to be ashamed of being gay. Whereas a non-religious upbringing is more likely to ask why the hell you would be ashamed of being gay.
TL;DR: shame is not an effective mechanism on the internet, because you're dealing with far too many cultures that have wildly different views on shame, and any particular viewpoint on shame is apt to have millions to billions of people who don't share it.
We are currently facing a political climate trying to tear many of these safeguards down. Some people really think "caveat emptor" is some kind of natural, efficient, ideal way of life.
So many people now respond to "You shouldn't do that..." with one or more of:
- But, I'm allowed to.
- But, it's legal.
- But, the rules don't say I can't.
- But, nobody is stopping me.
The shared cultural understanding of right and wrong is shrinking. More and more, there's just can and can't.
The humility of understanding what you don't know, and the limitations that come with it, is out the window for many people now. I see, time and time again, the idea that "expertise is dead". Yet it's crystal clear that it's not. But those people cannot understand why.
It all boils down to a simple reality: you can't understand why something is fundamentally bad if you don't understand it at all.
You can expand this sentiment to everyday life. The things some people are willing to say and do in public are a never-ending supply of surprises.
My guess is that those people have different incentives. They need to build a portfolio of open-source contributions, so shame is not of their concern. So, yeah, where you stand depends on where you sit.
There's so much CYA because there is an A that needs C'ing
Fwiw I haven’t noticed either phenomenon much irl but that might just be my bubble.
Think of a lot of the inflammatory content on social media, how people have made whole careers and fortunes over outrage, and they have no shame over it.
It really does begin to look like having a good sense of shame isn't rewarded in the same way.
But that care isn't even evident here. People are submitting PRs that don't even compile, and bug reports for issues that may not even exist. The minimum I'd expect is to check the work of whatever you vibe-coded. We can't even get that. It's some odd form of clout chasing, as if repos are a factor of success rather than what you contribute to them.
But yes, look at the US c.2025-6. As long as the leader sounds assertive, some people will eat the blatant lies that can be disproven even by the same AI tools they laud.
Maybe a million dollar company needs to be compliant. A billion dollar company can start to ward off any loopholes with lawsuits instead of compliance.
A trillion dollar company will simply change the law and fight governments over the law to begin with, rather than worrying about compliance.
I've been deep-diving into AI code generation for more niche platforms, to see if it can either fill the coding gap in my skillset, or help me learn more code. And without writing my whole blog post(s) here, it's been fairly mediocre but improving over time.
But for the life of me I would never submit PRs of this code. Not if I can't explain every line and why it's there. And in preparation of publishing anything to my own repos I have a readme which explicitly states how the code was generated and requesting not to bother any upstream or community members with issues from it. It's just (uncommon) courtesy, no?
Many artists through the ages have learned to work in various mediums, like sculpture of materials, oil painting, watercolors, fresco or whatever. There are myriad ways to express your visual art using physical materials.
Likewise, a girlfriend of mine was a college-educated artist, and she had some great output in all sorts of media, and had a great grasp of paints, and paper and canvas and what-have-you.
But she was also an Amiga aficionado, and then worked on the PCs I had, and ultimately the item she wanted most in life was a Wacom Tablet. This tablet was a force-multiplier for her art, and allowed her some real creative freedom to work in digital mediums and create art with ease that was unheard-of for messy oil paintings or whatever on canvas in your garage (we actually lived in a converted garage anyway.)
So, digital art was her saving grace, but also a significant leveler of playing fields. What would distinguish her original creativity from A.I.-generated stuff later on? Really, not much. You could still make an oil or watercolor painting that is obviously handmade. Forgeries of great artists have been perpetrated, but most of us can't explain, e.g., the Shroud of Turin anyway.
So generative A.I. is competing in these digital mediums, and perhaps 3D-printing is competing in the realm of physical objects, but it's unfortunate for artists that their choices have narrowed so far, that they are practically required to work in digital media exclusively, and master those apps, and therefore, they compete with gen A.I. in the virtual realm. That's just how it's gonna be, until folks go back to sculpting marble and painting soup cans.
And here's your response to what felt like a pretty good faith response that deserved at most an equally earnest answer, and at worst no response.
Instead they got worse than no response lol.
> All while being completely ignorant to the medium or the process.
also ignorant that the art they generated was made possible by those people who "wasted their time"...
Still, I meant that in the other direction: not a request, but a gift/favor. "Guess culture" would be going out of your way to make the gift valuable for the receiver - matching what they need and not generating extra burden. "Ask culture" would be doing whatever's easiest that matches the explicit requirements and throwing it over the fence.
It's basically like GenAI, but running on protein substrate instead of silicon one.
And even in the digital realm, artists already spent the last decade+ competing with equivalent "factory art", too. Advertising stands on art, and most of that isn't commissioned, it's rented or bought for cheap from stock art providers, and a lot of supply there comes from people and organizations who specialize in producing art for them. The OG slop art, before AI.
EDIT: there's some irony here, in that people like to talk about how GenAI might start (or might already be starting) to put artists out of work. But I haven't seen anyone mention that AI has already put slop creators out of work.
I see plenty of nuance beyond the bold print. They clearly say they love to help junior developers. Your assumption that they will apply this without thought is, well, your assumption. I'd rather see what they actually do instead of getting wrapped up in your fantasies.
This being a big part of the problem: their false answers are more plausible and convincing than the truth. The output almost always seems feasible; whether it's true or not is an entirely different matter.
Historically, when most things fail, they produce nonsense; when they don't, they produce something related to the truth (but perhaps biased or mis-calibrated). LLM output can be both highly plausible and unrelated to reality.