With Hatchet, the starting point is a single function call that gets enqueued according to configuration you've set to respect different fairness and concurrency constraints. Durable workflows can be built on top of that, but the entire platform should feel intuitive and familiar to anyone working in the codebase.
* Inngest is fully event driven, with replays, fan-outs, `step.waitForEvent` to automatically pause and resume durable functions when specific events are received, declarative cancellation based on events, etc.
* We have real-time metrics, tracing, etc. out of the box in our UI
* Out-of-the-box support for TS, Python, Golang, and Java, with SDKs that are interchangeable enough to allow zero-downtime language and cloud migrations
* I don't know Hatchet's local dev story, but it's a one-liner for us
* Batching, to turn e.g. 100 events into a single execution
* Concurrency, throttling, rate limiting, and debouncing, built in and operating at the function level
* Support for your own multi-tenancy keys, allowing you to create queues and set concurrency limits per tenant
* Works on serverless, on servers, or anywhere else
* And, specifically, it's all procedural and doesn't have to be a DAG.
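As a toy illustration of the batching bullet above (turning e.g. 100 events into a single execution), here's what the underlying mechanic looks like. This is a hedged, self-contained sketch, not either product's actual API; the names `Batcher`, `maxSize`, and `maxWaitMs` are all illustrative.

```typescript
// Collect events until we hit `maxSize` (or `maxWaitMs` elapses),
// then invoke the handler ONCE with the whole batch.
type Handler<T> = (batch: T[]) => void;

class Batcher<T> {
  private buffer: T[] = [];
  private timer: ReturnType<typeof setTimeout> | null = null;

  constructor(
    private maxSize: number,
    private maxWaitMs: number,
    private handler: Handler<T>,
  ) {}

  push(event: T): void {
    this.buffer.push(event);
    if (this.buffer.length >= this.maxSize) {
      // Size threshold reached: execute immediately.
      this.flush();
    } else if (this.timer === null) {
      // First event of a new batch: start the wait-time clock.
      this.timer = setTimeout(() => this.flush(), this.maxWaitMs);
    }
  }

  flush(): void {
    if (this.timer !== null) {
      clearTimeout(this.timer);
      this.timer = null;
    }
    if (this.buffer.length === 0) return;
    const batch = this.buffer;
    this.buffer = [];
    this.handler(batch); // one execution for the whole batch
  }
}
```

In a real queueing system the buffer lives server-side and the flush triggers a function run, but the size-or-timeout trigger is the same idea.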
We've also invested heavily in flow control — the aspects of batching, concurrency, custom multi-tenancy controls, etc. are all things that you have to layer over other systems.
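To make the flow-control point concrete, here is a minimal in-memory sketch of per-tenant concurrency keys: each key (say, a customer ID) gets its own virtual queue with an independent concurrency limit. This is a toy model of the concept, assuming nothing about Inngest's or Hatchet's internals; `KeyedLimiter` and `limitPerKey` are made-up names.

```typescript
// Per-key concurrency limiter: at most `limitPerKey` tasks run
// concurrently for any given key; excess tasks queue up (FIFO).
class KeyedLimiter {
  private running = new Map<string, number>();
  private queues = new Map<string, Array<() => void>>();

  constructor(private limitPerKey: number) {}

  async run<T>(key: string, task: () => Promise<T>): Promise<T> {
    await this.acquire(key);
    try {
      return await task();
    } finally {
      this.release(key);
    }
  }

  private acquire(key: string): Promise<void> {
    const active = this.running.get(key) ?? 0;
    if (active < this.limitPerKey) {
      this.running.set(key, active + 1);
      return Promise.resolve();
    }
    // Limit reached for this key: park the caller in that key's queue.
    return new Promise((resolve) => {
      const q = this.queues.get(key) ?? [];
      q.push(resolve);
      this.queues.set(key, q);
    });
  }

  private release(key: string): void {
    const q = this.queues.get(key);
    if (q && q.length > 0) {
      q.shift()!(); // hand the slot to the next waiter; count unchanged
    } else {
      this.running.set(key, (this.running.get(key) ?? 0) - 1);
    }
  }
}
```

A noisy tenant's backlog then only delays that tenant's own runs, which is the fairness property multi-tenancy keys buy you.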
Because we've been around for a couple of years, I expect newer folks like Hatchet will end up trying to replicate some of what we've done, though building this takes quite some time. Either way, happy to see our API and approach start to spread :)
1. Hatchet is MIT licensed and designed to be self-hosted in production, with cloud as an alternative. While the Inngest dev server is open source, it doesn't support self-hosting: https://www.inngest.com/docs/self-hosting.
2. Inngest is built on an HTTP webhook model, while Hatchet is built on a long-lived, client-initiated gRPC connection. While we support HTTP webhooks for serverless environments, a core part of the Hatchet platform is built to surface the health of a long-lived worker and provide worker-level metrics that can be used for autoscaling. All async runtimes that we've worked on in the past have eventually migrated off of serverless for a number of reasons, like reducing latency or having more control over things like the runtime environment and DB connections. AFAIK the concept of a worker or worker health doesn't exist in Inngest.
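The worker-health idea in point 2 boils down to heartbeat tracking over that persistent connection: workers report in periodically, and the server treats a worker as unhealthy once it misses its deadline. The sketch below models just that bookkeeping in plain TypeScript; `WorkerRegistry` and `timeoutMs` are illustrative names, and the real transport (a gRPC stream) is out of scope here.

```typescript
// Server-side view of worker health: record each worker's last
// heartbeat, and treat workers that miss the deadline as unhealthy.
interface WorkerState {
  lastHeartbeat: number; // ms timestamp of the most recent heartbeat
}

class WorkerRegistry {
  private workers = new Map<string, WorkerState>();

  constructor(private timeoutMs: number) {}

  // Called whenever a heartbeat arrives over the long-lived connection.
  heartbeat(workerId: string, now: number = Date.now()): void {
    this.workers.set(workerId, { lastHeartbeat: now });
  }

  // Workers still within the deadline; this set can feed scheduling
  // decisions or autoscaling metrics.
  healthyWorkers(now: number = Date.now()): string[] {
    return [...this.workers.entries()]
      .filter(([, state]) => now - state.lastHeartbeat <= this.timeoutMs)
      .map(([id]) => id);
  }
}
```

With a pure webhook model there's no standing connection to attach this state to, which is why the two architectures diverge on worker-level observability.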
There are the finer details which we can hash out in the other thread, but both products rely on events, tasks and durable workflows as core concepts, and there's a lot of overlap.
One of our key focuses is reliability. We were apprehensive about officially supporting self-hosting until you could "set it and forget it" without awkward queue and state-store migrations. Otherwise, you're almost certainly going to end up many versions behind with a very tedious upgrade path.
So, if you're a cowboy, totally self hostable. If you're not (which makes sense — you're using durable execution), check back in a short amount of time :)
Hatchet is also event driven [1], has built-in support for tracing and metrics, and has a TS [2], Python [3] and Golang SDK [4], has support for throttling and rate limiting [5], concurrency with custom multi-tenancy keys [6], works on serverless [7], and supports procedural workflows [8].
That said, there are certainly lots of things to work on. Batching and better tracing are on our roadmap. And while we don't have a Java SDK, we do have a GitHub discussion for future SDKs that you can vote on here: https://github.com/hatchet-dev/hatchet/discussions/436.
[1] https://docs.hatchet.run/home/features/triggering-runs/event...
[2] https://docs.hatchet.run/sdks/typescript-sdk
[3] https://docs.hatchet.run/sdks/python-sdk
[4] https://docs.hatchet.run/sdks/go-sdk
[5] https://docs.hatchet.run/home/features/rate-limits
[6] https://docs.hatchet.run/home/features/concurrency/round-rob...
https://x.com/mitchellh/status/1759626842817069290?s=46&t=57...