zlacker

AI Usage Policy

submitted by mefeng+(OP) on 2026-01-23 09:50:26 | 502 points 252 comments
[view article] [source] [go to bottom]

2. jakoza+68[view] [source] 2026-01-23 10:56:50
>>mefeng+(OP)
See x thread for rationale: https://x.com/mitchellh/status/2014433315261124760?s=46&t=FU...

“ Ultimately, I want to see full session transcripts, but we don't have enough tool support for that broadly.”

I have a side project, git-prompt-story, to attach Claude Code sessions to a repo via GitHub git notes. Though it's not that simple to do automatically (e.g. I need to redact credentials).
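For anyone unfamiliar with the mechanism: git notes let you attach extra data to an existing commit without rewriting history. A minimal sketch of the idea, using git's built-in notes directly (the file name and notes ref here are illustrative, not git-prompt-story's actual interface):

```shell
# Attach a (redacted) transcript file to HEAD under a dedicated notes ref,
# so it doesn't clutter the default refs/notes/commits namespace.
git notes --ref=prompt-story add -F session-transcript.md HEAD

# Read it back later for any commit.
git notes --ref=prompt-story show HEAD

# Notes are not pushed by default; they must be shared explicitly, e.g.:
#   git push origin refs/notes/prompt-story
```

The dedicated ref also makes it easy for a clone to opt out: anyone who doesn't fetch `refs/notes/prompt-story` never sees the transcripts.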

9. kanzur+Fa[view] [source] 2026-01-23 11:19:57
>>mefeng+(OP)
Another project simply paused external contributions entirely: >>46642012

Another idea is to simply promote donating AI credits instead of AI outputs: people already working on the project would be better at prompting and steering the AI than drive-by contributors are.

11. simonw+3b[view] [source] [discussion] 2026-01-23 11:23:03
>>radars+s8
For me it's increasingly the work. I spend more time in Claude Code going back and forth with the agent than I do in my text editor hacking on the code by hand. Those transcripts ARE the work I've been doing. I want to save them in the same way that I archive my notes and issues and other ephemera around my projects.

My latest attempt at this is https://github.com/simonw/claude-code-transcripts which produces output like this: https://gisthost.github.io/?c75bf4d827ea4ee3c325625d24c6cd86...
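Agent transcripts like these are typically stored as JSONL (one JSON object per line). As a rough illustration of the kind of conversion such a tool performs — using an assumed, simplified schema with `role` and `text` fields, not the real Claude Code session format, which is much richer — a sketch:

```python
import json

def transcript_to_markdown(path):
    """Render a JSONL chat transcript as simple Markdown.

    Assumes each non-empty line is a JSON object with 'role' and
    'text' fields (a deliberately simplified stand-in schema).
    """
    sections = []
    with open(path) as f:
        for raw in f:
            raw = raw.strip()
            if not raw:
                continue  # skip blank lines between records
            entry = json.loads(raw)
            role = entry.get("role", "unknown")
            text = entry.get("text", "")
            sections.append(f"### {role}\n\n{text}\n")
    return "\n".join(sections)
```

The archived Markdown then diffs, greps, and reads like any other project document, which is the point of treating transcripts as part of the work.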

13. CrociD+rb[view] [source] 2026-01-23 11:26:10
>>mefeng+(OP)
I recently had to do a similar policy for my TUI feed reader, after getting some AI slop spammy PRs: https://github.com/CrociDB/bulletty?tab=contributing-ov-file...

The fact that some people will straight-up lie after submitting a PR with lots of _that type_ of comment in the middle of the code is baffling!

20. imiric+xc[view] [source] [discussion] 2026-01-23 11:35:18
>>arjunb+X8
I agree with you on the policy being balanced.

However:

> AI generated code does not substitute human thinking, testing, and clean up/rewrite.

Isn't that the end goal of these tools and companies producing them?

According to the marketing[1], the tools are already "smarter than people in many ways". If that is the case, what are these "ways", and why should we trust a human to do a better job at them? If these "ways" keep expanding, which most proponents of this technology believe will happen, then the end state is that the tools are smarter than people at everything, and we shouldn't trust humans to do anything.

Now, clearly, we're not there yet, but where the line is drawn today is extremely fuzzy, and mostly based on opinion. The wildly different narratives around this tech certainly don't help.

[1]: https://blog.samaltman.com/the-gentle-singularity

65. latexr+yk[view] [source] [discussion] 2026-01-23 12:40:06
>>Ethery+ec
> To quote Bo Burnham, "you think your dick is a gift, I promise it's not".

For those curious:

https://www.youtube.com/watch?v=llGvsgN17CQ

94. steven+1q[view] [source] [discussion] 2026-01-23 13:16:04
>>optima+Fb
simonw wrote a tool that does this for Claude Code

https://simonw.substack.com/p/a-new-way-to-extract-detailed-...

102. hmokig+et[view] [source] 2026-01-23 13:35:06
>>mefeng+(OP)
Banned I understand, but ridiculed? I would say that these bad drive-by spammers are analogous to phishing emails. Do you engage with those? Are they worth any energy or effort from you? I think ghostty should just ghost them :)

EDIT: I'm getting downvoted with no feedback, which is fine I guess, so I am just going to share some more colour on my opinion in case I am being misunderstood

What I meant by analogous to phishing is that the intent behind the work is likely personal reward rather than a desire to contribute. I was thinking they want their name on the contributors list, they want the credit, they want something, and they don't want to put effort into it.

Do they deserve to be ridiculed for doing that? Maybe. However, I like to think humans deserve kindness sometimes. It's normal to want something, and I agree that it is not okay to be selfish and lazy about it (ignoring contribution rules and whatnot), so at minimum I think respect applies.

Some people are ignorant, naive, and are still maturing and growing. Bullying them may not help (though it could), and mockery is a form of aggression.

I think some genuine false positives will fall into that category and pay the price for those who are truly ill-intentioned.

Lastly, to ridicule is to care. To hate or attack requires caring about the thing. It requires effort, energy, and time from the maintainers, and I think that just adds more waste.

Maybe those wordings are there just to 'scare' people away and maintainers won't actually bother engaging, but I find it just compounds the amount of garbage at this point, and nobody benefits from it.

Anyways, would appreciate some feedback from those of you that seem to think otherwise.

Thanks!

PS: What I meant by ghostty should "ghost" them was this: https://en.wikipedia.org/wiki/Shadow_banning

131. slfref+Az[view] [source] [discussion] 2026-01-23 14:13:00
>>quanwi+Ox
https://x.com/JDHamkins/status/2014085911110131987

I am seeing the doomed future of AI math: just received another set theory paper by a set theory amateur with an AI workflow and an interest in the continuum hypothesis.

At first glance, the paper looks polished and advanced. It is beautifully typeset and contains many correct definitions and theorems, many of which I recognize from my own published work and in work by people I know to be expert. Between those correct bits, however, are sprinkled whole passages of claims and results with new technical jargon. One can't really tell at first, but upon looking into it, it seems to be meaningless nonsense. The author has evidently hoodwinked himself.

We are all going to be suffering under this kind of garbage, which is not easily recognizable for the slop it is without effort. It is our regrettable fate.

148. zehaev+bP[view] [source] [discussion] 2026-01-23 15:31:35
>>benldr+RC
Off topic, but I feel like this could be made into a Zen Koan from The Codeless Code[0]. You're almost there with it!

[0] https://thecodelesscode.com/

186. alya+mc1[view] [source] 2026-01-23 17:24:44
>>mefeng+(OP)
At the Zulip open-source project, we've had a significant onslaught of AI slop in the past few months. It gets as absurd as PR descriptions with AI-generated "screenshots" of the app to "demonstrate" the changes. We've had to start warning contributors that we won't be able to review their work if they continue misusing AI, and occasionally banning repeat offenders. It feels draining -- we want to spend our time mentoring people who'll actually learn from feedback, not interacting with contributors who are just copy-pasting LLM responses without thought.

Our evolving AI policy is in the same spirit as ghostty's, with more detail to address specific failure modes we've experienced: https://zulip.readthedocs.io/en/latest/contributing/contribu...

[go to top]