I set up spec-kit first, then updated its templates to tell it to use beads to track features and all that instead of writing markdown files. If nothing else, this is a quality-of-life improvement for me, because recent LLMs seem to have an intense penchant for writing one or more markdown files per large task. Ending up with loads of markdown poop feels like the new `.DS_Store`, but harder to `.gitignore` because they'll name the files whatever floats their boat.
https://github.com/steveyegge/beads/blob/main/.beads/issues....
Here's that file opened in Datasette Lite, which makes it easier to read and adds filters for things like issue type and status:
https://lite.datasette.io/?json=https://github.com/steveyegg...
Set `TASKDATA` to `./.task/`, then tell the agents to use the `task` CLI.
The benefit is that most LLMs already understand Taskwarrior; they've never heard of Beads.
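Roughly what that looks like from an agent harness, as a minimal sketch (assumes Taskwarrior is installed and already configured, i.e. a ~/.taskrc exists; the task text and tag are just examples):

```python
import os
import subprocess

# Point Taskwarrior's data directory into the repo so the task
# database travels with the code instead of living in ~/.task.
env = {**os.environ, "TASKDATA": "./.task/"}

# An agent adds a task, then reviews what's pending.
subprocess.run(["task", "add", "write architecture overview", "+agent"], env=env, check=True)
subprocess.run(["task", "list"], env=env, check=True)
```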
[1] https://github.com/steveyegge/beads/blob/main/docs/PROTECTED...
[1] Demo with Claude - https://pradeeproark.github.io/pensieve/demos/
[2] Article about it - https://pradeeproark.com/posts/agentic-scratch-memory-using-...
[3] https://github.com/cittamaya/cittamaya - Claude Code Skills Marketplace for Pensieve
> I appreciate that this is a very new project, but what’s missing is an architectural overview of the data model.
Response:
You're right to call me out on this. :)
Then I check the latest commit to architecture.md, which looks like a total rewrite in response to a beads.jsonl issue logged for this.
> JSONL for git: One entity per line means git diffs are readable and merges usually succeed automatically.
Hmm, ok. So the README says:
> .beads/beads.jsonl - Issue data in JSONL format (source of truth, synced via git)
But the beads.jsonl at that commit, the one that fixes architecture.md, still contains the issue to fix architecture.md? So I wonder whether that line gets removed now that it's fixed ... so I check master, but now beads.jsonl is gone?
But the README still references beads.jsonl as the source of truth? Yet there is no beads.jsonl in the dogfooded repo, and there are hundreds of commits from the past few days, so I'm not clear how I'm supposed to understand what's going on with the repo. beads.jsonl is the spoon, but there is no spoon.
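To be fair, the one-entity-per-line idea itself is sound. A file shaped roughly like this (field names are my guess, not the actual beads schema) means closing an issue is a one-line diff, and concurrent edits to different issues merge cleanly:

```jsonl
{"id": "bd-101", "type": "task", "status": "open", "title": "Write architectural overview of the data model"}
{"id": "bd-102", "type": "bug", "status": "closed", "title": "Fix broken link in README"}
```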
I'll check back later, or have my beads-superpowered agent check back for me. Agents report that they enjoy this.
https://github.com/steveyegge/beads/issues/376#issuecomment-...
https://github.com/steveyegge/beads/commit/c3e4172be7b97effa...
Neither does OpenAI's Codex CLI - you can confirm that by looking at the source code https://github.com/openai/codex
Cursor and Windsurf both use semantic search via embeddings.
You can get semantic search in Claude Code using this unofficial plugin: https://github.com/zilliztech/claude-context - it's built by Zilliz and uses their managed vector database, Zilliz Cloud.
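If "semantic search via embeddings" is unfamiliar, the general technique looks roughly like this. This is a toy sketch, not how Cursor, Windsurf, or claude-context actually implement it, and the `embed` function here is a stand-in for a real embedding model:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder for a real embedding model (API call or local model).
    Here: a crude bag-of-characters vector, just so the example runs."""
    vec = np.zeros(256)
    for ch in text.lower():
        vec[ord(ch) % 256] += 1.0
    return vec / (np.linalg.norm(vec) or 1.0)

# Index step: embed each code chunk once and keep the vectors alongside the chunks.
chunks = [
    "def parse_config(path): ...",
    "class TodoList: ...",
    "async function fetchIssues() { ... }",
]
index = np.stack([embed(c) for c in chunks])

def search(query: str, k: int = 2):
    """Rank chunks by cosine similarity to the query embedding."""
    q = embed(query)
    scores = index @ q  # vectors are unit-normalized, so dot product = cosine
    top = np.argsort(-scores)[:k]
    return [(chunks[i], float(scores[i])) for i in top]

print(search("where do we load configuration?"))
```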
I finally started digging into OpenCode for real these past couple weeks. It has a planning mode, which nicely builds out a plan in the text chat as usual, but also a right pane in the TUI that builds out a todo list, which has been really nice. I often give it the go-ahead to do the next item or two or three. I've wondered how this is implemented: how OpenCode sets up and picks up on this structuring.
Beads formalizing that a bit more is tempting. I also deeply, deeply enjoy that Beads is checked in. With both Aider and OpenCode there's a nice history, but it's typically not checked in. OpenCode's history in particular isn't even kept in the project directory, and can be quite complex with multiple sessions and multiple agents all flying around. Beads, as a strategy to record the work and understand it better, is also very tempting.
Would love to see deeper OpenCode + Beads integration.
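My guess at how the todo pane works, purely a guess and not based on OpenCode's source, is that the host exposes some structured todo tool to the model and renders whatever the model writes into it. Hypothetical names, just to show the shape:

```python
from dataclasses import dataclass, field

# Hypothetical shape of a todo tool an agent host might expose to the model.
# Names are made up for illustration; this is not OpenCode's actual API.

@dataclass
class TodoItem:
    id: int
    text: str
    status: str = "pending"  # "pending" | "in_progress" | "done"

@dataclass
class TodoList:
    items: list[TodoItem] = field(default_factory=list)

    def write(self, texts: list[str]) -> None:
        """Model calls this during planning to lay out the steps."""
        self.items = [TodoItem(i, t) for i, t in enumerate(texts)]

    def set_status(self, item_id: int, status: str) -> None:
        """Model (or user) marks items off as the work proceeds."""
        self.items[item_id].status = status

    def render(self) -> str:
        """Host renders this in the TUI's side pane."""
        marks = {"pending": " ", "in_progress": "~", "done": "x"}
        return "\n".join(f"[{marks[i.status]}] {i.text}" for i in self.items)
```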