I'm not saying it doesn't "work" or serve a purpose - but I've read so much about this being an "actual intelligence" and similar claims that I had to look into the source.
As someone who spends a definitely too big portion of his free time researching thought process replication and related topics in the realm of "AI", this is not really any more "AI" than anything else so far.
Just my 3 cents.
So far everything has been reactive. You need to provide a prompt, you need to ask Siri or ask Claude to do something. It can be very powerful once prompted, but it still requires prompting.
You always need to ask. Having something always waiting in the background that can proactively take actions and get your attention is a genuine game-changer.
Whether this particular project delivers on that promise I don't know, but I wouldn't write off "getting proactivity right" as the next big thing just because under the hood it's agents and LLMs.
In order for this to be “safe” you’re gonna want to confirm what the agent is deciding needs to be done proactively. Do you feel like acknowledging prompts all the time? “Just authorize it to always do certain things without acknowledgement”, I’m sure you’re thinking. Do you feel comfortable allowing that, knowing what we know about the non-deterministic nature of AI, prompt injection, etc.?
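To make that trade-off concrete, here's a minimal sketch of the kind of approval gate being described: the agent proposes an action, a pre-authorized allowlist lets some things run unattended, and everything else requires explicit confirmation. All names here (ProposedAction, ApprovalGate, the allowlist entries) are hypothetical and not from this project.

    # Minimal sketch of an approval gate for a proactive agent.
    # Hypothetical types and names, for illustration only.
    from dataclasses import dataclass


    @dataclass
    class ProposedAction:
        name: str     # e.g. "send_email", "add_reminder"
        summary: str  # human-readable description shown at confirmation time


    class ApprovalGate:
        def __init__(self, auto_approve: set[str]):
            # Actions the user pre-authorized to run without acknowledgement.
            self.auto_approve = auto_approve

        def authorize(self, action: ProposedAction) -> bool:
            if action.name in self.auto_approve:
                # This is exactly the risk above: anything on this list runs
                # unattended, so a prompt-injected or misjudged proposal that
                # maps to an allowlisted action goes through unchecked.
                return True
            # Everything else falls back to explicit human confirmation.
            answer = input(f"Agent wants to: {action.summary}. Allow? [y/N] ")
            return answer.strip().lower() == "y"


    if __name__ == "__main__":
        gate = ApprovalGate(auto_approve={"add_reminder"})
        proposal = ProposedAction("send_email", "email the Q3 report to the whole team")
        if gate.authorize(proposal):
            print("Action approved (execution would happen here).")
        else:
            print("Action rejected.")

The uncomfortable part is that the only two knobs are "confirm everything" (constant interruptions, which defeats the point of proactivity) and "grow the allowlist" (which is where the non-determinism and prompt-injection exposure lives).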