> Ultimately, I want to see full session transcripts, but we don't have enough tool support for that broadly.
I have a side project, git-prompt-story, to attach Claude Code sessions as GitHub git notes. Though it is not that simple to do automatically (e.g. I need to redact credentials).
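For the curious, here's a minimal sketch of the idea in Python. This is not how git-prompt-story actually works; the redaction patterns, the transcript path, and the `ai-sessions` notes ref are all illustrative assumptions:

```python
import re
import subprocess

# Crude redaction pass: these patterns are illustrative, not exhaustive.
REDACTIONS = [
    (re.compile(r"(?i)(api[_-]?key|token|secret|password)\s*[:=]\s*\S+"), r"\1=<REDACTED>"),
    (re.compile(r"ghp_[A-Za-z0-9]{36}"), "<REDACTED_GITHUB_TOKEN>"),
]

def redact(text: str) -> str:
    for pattern, replacement in REDACTIONS:
        text = pattern.sub(replacement, text)
    return text

# Assumed transcript location; real sessions live wherever your tooling writes them.
with open("session-transcript.md") as f:
    transcript = redact(f.read())

# Attach the redacted transcript to HEAD under a dedicated notes ref,
# so it doesn't collide with ordinary `git notes`.
subprocess.run(
    ["git", "notes", "--ref=ai-sessions", "add", "-f", "-F", "-", "HEAD"],
    input=transcript,
    text=True,
    check=True,
)
```

Reading it back later is `git notes --ref=ai-sessions show <commit>`, and the notes ref can be pushed like any other ref (`git push origin refs/notes/ai-sessions`). The hard part, as noted above, is making the redaction step trustworthy enough to run unattended.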
Another idea is to simply promote the donation of AI credits instead of output tokens. Donating credits would be better than donating outputs, because people already working on the project would be better at prompting and steering the AI.
My latest attempt at this is https://github.com/simonw/claude-code-transcripts, which produces output like this: https://gisthost.github.io/?c75bf4d827ea4ee3c325625d24c6cd86...
The fact that some people will straight up lie after submitting you a PR with lots of _that type_ of comment in the middle of the code is baffling!
However:
> AI generated code does not substitute human thinking, testing, and clean up/rewrite.
Isn't that the end goal of these tools and the companies producing them?
According to the marketing[1], the tools are already "smarter than people in many ways". If that is the case, what are these "ways", and why should we trust a human to do a better job at them? If these "ways" keep expanding, which most proponents of this technology believe will happen, then the end state is that the tools are smarter than people at everything, and we shouldn't trust humans to do anything.
Now, clearly, we're not there yet, but where the line is drawn today is extremely fuzzy, and mostly based on opinion. The wildly different narratives around this tech certainly don't help.
For those curious:
https://simonw.substack.com/p/a-new-way-to-extract-detailed-...
EDIT: I'm getting downvoted with no feedback, which is fine I guess, so I am just going to share some more colour on my opinion in case I am being misunderstood.
What I meant by the analogy to phishing is that the motive for the work is likely personal reward rather than a desire to contribute. I was thinking they want their name on the contributors list, they want the credit, they want something, and they don't want to put effort into it.
Do they deserve to be ridiculed for doing that? Maybe. However, I like to think humans deserve kindness sometimes. It's normal to want something, and I agree that it is not okay to be selfish and lazy about it (ignoring contribution rules and whatnot), so at minimum I think respect applies.
Some people are ignorant, naive, and still maturing and growing. Bullying them may not help (though it could), and mockery is a form of aggression.
I think some genuine false positives will fall into that category and pay the price for those who are truly ill-intentioned.
Lastly, to ridicule is to care: hating or attacking something requires caring about it, which costs the maintainers effort, energy, and time. I think this just adds more waste.
Maybe that wording is there just to 'scare' people away and maintainers won't actually bother engaging, though I find it just compounds the amount of garbage at this point, and nobody benefits from it.
Anyways, would appreciate some feedback from those of you who seem to think otherwise.
Thanks!
PS: What I meant when I said ghostty should "ghost" them was this: https://en.wikipedia.org/wiki/Shadow_banning
I am seeing the doomed future of AI math: I just received another paper on set theory from an amateur with an AI workflow and an interest in the continuum hypothesis.
At first glance, the paper looks polished and advanced. It is beautifully typeset and contains many correct definitions and theorems, many of which I recognize from my own published work and in work by people I know to be expert. Between those correct bits, however, are sprinkled whole passages of claims and results with new technical jargon. One can't really tell at first, but upon looking into it, it seems to be meaningless nonsense. The author has evidently hoodwinked himself.
We are all going to be suffering under this kind of garbage, which takes real effort to recognize for the slop it is. It is our regrettable fate.
Our evolving AI policy is in the same spirit as ghostty's, with more detail to address specific failure modes we've experienced: https://zulip.readthedocs.io/en/latest/contributing/contribu...