zlacker

Tldraw pauses external contributions due to AI slop

submitted by pranav+(OP) on 2026-01-15 23:37:42 | 192 points 107 comments
[view article] [source]

NOTE: showing posts with links only
7. kanzur+Ef[view] [source] 2026-01-16 01:43:56
>>pranav+(OP)
That's interesting; another project stopped letting users directly open issues: >>46460319
17. pella+cn[view] [source] [discussion] 2026-01-16 02:56:33
>>kanzur+Ef
Check Ghostty "CONTRIBUTING.md#ai-assistance-notice"

  "The Ghostty project allows AI-assisted code contributions, which must be properly disclosed in the pull request."
https://github.com/ghostty-org/ghostty/blob/main/CONTRIBUTIN...

Mitchell Hashimoto (2025-12-30): "Slop drives me crazy and it feels like 95+% of bug reports, but man, AI code analysis is getting really good. There are users out there reporting bugs that don't know ANYTHING about our stack, but are great AI drivers and producing some high quality issue reports.

This person (linked below) was experiencing Ghostty crashes and took it upon themselves to use AI to write a python script that can decode our crash files, match them up with our dsym files, and analyze the codebase for attempting to find the root cause, and extracted that into an Agent Skill.

They then came into Discord, warned us they don't know Zig at all, don't know macOS dev at all, don't know terminals at all, and that they used AI, but that they thought critically about the issues and believed they were real and asked if we'd accept them. I took a look at one, was impressed, and said send them all.

This fixed 4 real crashing cases that I was able to manually verify and write a fix for from someone who -- on paper -- had no fucking clue what they were talking about. And yet, they drove an AI with expert skill.

I want to call out that in addition to driving AI with expert skill, they navigated the terrain with expert skill as well. They didn't just toss slop up on our repo. They came to Discord as a human, reached out as a human, and talked to other humans about what they've done. They were careful and thoughtful about the process.

People like this give me hope for what is possible. But it really, really depends on high quality people like this. Most today -- to continue the analogy -- are unfortunately driving like a teenager who has only driven toy go-karts. Examples: https://github.com/ghostty-org/ghostty/discussions?discussio... " ( https://x.com/mitchellh/status/2006114026191769924 )
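
To make the described workflow concrete, here is a minimal sketch of what such a crash-decoding helper could look like. It assumes macOS-style .ips JSON crash reports and symbolication by shelling out to Apple's atos tool against a matching dSYM; Ghostty's actual crash format, and the contributor's real script and Agent Skill, are not shown in the thread.

  #!/usr/bin/env python3
  """Hypothetical sketch of a crash-report symbolication helper.

  Assumptions (not from the thread): crash reports are macOS .ips JSON files,
  and symbolication shells out to Apple's `atos` against the DWARF binary
  inside a matching .dSYM bundle.
  """
  import json
  import subprocess
  import sys
  from pathlib import Path


  def load_crash(path: Path) -> dict:
      # .ips files start with a one-line JSON header; the payload follows.
      lines = path.read_text().splitlines()
      return json.loads("\n".join(lines[1:]))


  def symbolicate(dwarf_binary: Path, load_addr: str, addrs: list[str]) -> list[str]:
      # atos maps runtime addresses back to symbol names using the DWARF data.
      out = subprocess.run(
          ["atos", "-o", str(dwarf_binary), "-l", load_addr, *addrs],
          capture_output=True, text=True, check=True,
      )
      return out.stdout.splitlines()


  def main() -> None:
      crash_path, dwarf_binary = Path(sys.argv[1]), Path(sys.argv[2])
      report = load_crash(crash_path)

      # The crashed thread is marked "triggered" in the assumed .ips schema.
      faulting = next(t for t in report["threads"] if t.get("triggered"))

      # For brevity, assume image index 0 is the app binary and only
      # symbolicate frames that belong to it.
      main_image = report["usedImages"][0]
      frames = [f for f in faulting["frames"] if f.get("imageIndex") == 0]
      addrs = [hex(main_image["base"] + f["imageOffset"]) for f in frames]
      if not addrs:
          sys.exit("no frames in the main image")

      for line in symbolicate(dwarf_binary, hex(main_image["base"]), addrs):
          print(line)


  if __name__ == "__main__":
      main()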

31. ironbo+aN[view] [source] 2026-01-16 07:48:15
>>pranav+(OP)
Curl project has had this issue https://daniel.haxx.se/blog/2025/07/14/death-by-a-thousand-s...
37. stever+uR[view] [source] [discussion] 2026-01-16 08:35:07
>>exactl+Hf
I have a GitHub action that labels and tags issues automatically. It also standardizes the issue title. I love this script and would recommend it to anyone. https://github.com/tldraw/tldraw/blob/ce745d1ecc1236633d2bf6...
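
The linked file is truncated above, so what follows is only a rough, hypothetical sketch of the idea rather than tldraw's actual action: a step that labels an issue from keyword rules and normalizes its title via the GitHub REST API. The label rules, title convention, and ISSUE_NUMBER variable are made up for illustration.

  #!/usr/bin/env python3
  """Hypothetical issue-triage step (not tldraw's actual action).

  Assumes it runs in a GitHub Actions job with GITHUB_TOKEN, GITHUB_REPOSITORY,
  and an ISSUE_NUMBER environment variable available.
  """
  import os
  import re
  import requests

  API = "https://api.github.com"
  REPO = os.environ["GITHUB_REPOSITORY"]   # e.g. "tldraw/tldraw"
  ISSUE = os.environ["ISSUE_NUMBER"]
  HEADERS = {
      "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
      "Accept": "application/vnd.github+json",
  }

  # Hypothetical keyword -> label rules.
  LABEL_RULES = {
      "crash": "bug",
      "export": "feature: export",
      "arrow": "feature: arrows",
  }


  def main() -> None:
      issue = requests.get(f"{API}/repos/{REPO}/issues/{ISSUE}", headers=HEADERS).json()
      text = f"{issue['title']} {issue.get('body') or ''}".lower()

      # Apply any labels whose keyword appears in the title or body.
      labels = sorted({label for kw, label in LABEL_RULES.items() if kw in text})
      if labels:
          requests.post(
              f"{API}/repos/{REPO}/issues/{ISSUE}/labels",
              headers=HEADERS, json={"labels": labels},
          )

      # Standardize the title: collapse whitespace and prefix the first area.
      title = re.sub(r"\s+", " ", issue["title"]).strip()
      if labels and not title.startswith("["):
          title = f"[{labels[0]}] {title}"
      if title != issue["title"]:
          requests.patch(
              f"{API}/repos/{REPO}/issues/{ISSUE}",
              headers=HEADERS, json={"title": title},
          )


  if __name__ == "__main__":
      main()

In a workflow, a step like this would typically run on the issues "opened" trigger with the default GITHUB_TOKEN.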
89. overfe+B92[view] [source] [discussion] 2026-01-16 17:46:06
>>sudden+YG
Another victory notch for the "AI Influentist" article[1].

Step 1: thought leader reveals Shocking(tm) AI achievement

Step 2: post gets traction

Step 3: additional context is revealed, dragging the original claim from the realm of the miraculous to "merely" useful.

I don't think Mitchell intentionally misrepresented or exaggerated, but the phenomenon is recurring. What's the logical explanation for the frequency?

1. >>46623195

94. jaunty+JG2[view] [source] [discussion] 2026-01-16 19:58:58
>>theshr+9W
AT Protocol (Bluesky) will, I hope, have better trust signals, since your Personal Data Server stores your microblog posts and a bunch of other data, and that data is public. It's much harder to convincingly fake being a cross-media human.

If someone showed up on an at-proto-powered book review site like https://bookhive.buzz and started posting nonsense reviews, or started running bots, it would be much more transparent what was afoot.

More explicit trust signalling would be very fun to add.
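
As a concrete illustration of how public that PDS data already is, here is a minimal sketch that lists an account's post records over the AT Protocol XRPC API. It assumes the account is hosted on bsky.social; a real client would first resolve the handle to a DID and discover its PDS endpoint.

  #!/usr/bin/env python3
  """Minimal sketch: read a Bluesky account's public post history straight
  from its Personal Data Server via the AT Protocol XRPC API.

  Assumption: the account's PDS is bsky.social. No auth is needed because
  repo records are public.
  """
  import sys
  import requests

  PDS = "https://bsky.social"   # assumed host; not every account lives here


  def list_posts(handle: str, limit: int = 25) -> list[dict]:
      resp = requests.get(
          f"{PDS}/xrpc/com.atproto.repo.listRecords",
          params={
              "repo": handle,                      # handle or DID
              "collection": "app.bsky.feed.post",  # public microblog posts
              "limit": limit,
          },
          timeout=10,
      )
      resp.raise_for_status()
      return resp.json().get("records", [])


  if __name__ == "__main__":
      handle = sys.argv[1] if len(sys.argv) > 1 else "bsky.app"
      for rec in list_posts(handle):
          value = rec.get("value", {})
          print(value.get("createdAt", "?"), "-", value.get("text", "")[:80])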
