Thanks! What type of monitoring were you looking for? We have some basic metrics now, but we know we need to improve this. What metrics, alerting, and observability features are important to you?
1. Wait timings for jobs.
2. Run timings for jobs.
3. Timeout occurrences and stdout/stderr logs of those runs.
4. Retry metrics, and if there is a retry limit, then metrics on jobs that were abandoned (rough sketch of what I mean below).
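To make that concrete, here's a rough sketch of the kind of instrumentation I have in mind. The `emitMetric` helper and the `Job` shape are invented for illustration, not tied to any particular SDK or metrics backend:

```ts
// Rough sketch only: `emitMetric` and the Job shape are invented for illustration,
// not tied to any particular SDK or metrics backend.
type MetricTags = Record<string, string>;

function emitMetric(name: string, value: number, tags: MetricTags): void {
  // Swap this for your real sink (StatsD, Prometheus, OpenTelemetry, ...).
  console.log(JSON.stringify({ name, value, tags }));
}

interface Job {
  id: string;
  attempt: number;     // 1-based retry attempt
  maxAttempts: number; // retry limit
  enqueuedAt: number;  // epoch ms, set when the job was queued
  fn: () => Promise<void>;
}

async function runWithTelemetry(job: Job, timeoutMs: number): Promise<void> {
  const startedAt = Date.now();
  // 1. Wait timing: how long the job sat in the queue before a worker picked it up.
  emitMetric("job.wait_ms", startedAt - job.enqueuedAt, { job: job.id });

  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error("timeout")), timeoutMs);
  });

  try {
    await Promise.race([job.fn(), timeout]);
    // 2. Run timing for a successful run.
    emitMetric("job.run_ms", Date.now() - startedAt, { job: job.id, outcome: "ok" });
  } catch (err) {
    const outcome = err instanceof Error && err.message === "timeout" ? "timeout" : "error";
    // 3. Timeout occurrences (captured stdout/stderr would be tagged the same way).
    emitMetric("job.run_ms", Date.now() - startedAt, { job: job.id, outcome });
    // 4. Retry metrics, including abandonment once the retry limit is hit.
    const abandoned = job.attempt >= job.maxAttempts;
    emitMetric("job.attempt", job.attempt, { job: job.id, outcome, abandoned: String(abandoned) });
    throw err;
  } finally {
    if (timer !== undefined) clearTimeout(timer);
  }
}
```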
One thing that is easy to overlook is giving users the ability to define a specific “urgency” for their jobs, which would allow for different alerting thresholds on things like run time or wait time.
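Something like this is what I'm picturing, purely illustrative (the urgency names and thresholds are made up):

```ts
// Illustrative only: the urgency names and thresholds are made up. The point is that
// each urgency level gets its own alerting thresholds for wait and run time.
type Urgency = "critical" | "default" | "batch";

const alertThresholds: Record<Urgency, { maxWaitMs: number; maxRunMs: number }> = {
  critical: { maxWaitMs: 5_000, maxRunMs: 30_000 },            // page quickly
  default:  { maxWaitMs: 60_000, maxRunMs: 5 * 60_000 },
  batch:    { maxWaitMs: 30 * 60_000, maxRunMs: 60 * 60_000 }, // overnight work can wait
};

function shouldAlert(urgency: Urgency, waitMs: number, runMs: number): boolean {
  const t = alertThresholds[urgency];
  return waitMs > t.maxWaitMs || runMs > t.maxRunMs;
}
```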
Observability is super key for background work, even more so because it's not always tied to a specific user action; you need a trail to understand issues.
> One thing that is easy to overlook is giving users the ability to define a specific “urgency” for their jobs, which would allow for different alerting thresholds on things like run time or wait time.
We are adding prioritization for functions soon, so this is helpful for thinking about telemetry for different priority/urgency levels.
re: timeouts - managing timeouts usually means managing dead-letter queues, and our goal is to remove the need to think about DLQs at all by building metrics and smarter retry/replay logic right into the Inngest platform.
Agreed that alerting is important! We alert on job failures, plus we integrate with observability tools like Sentry.
For DLQs, you're right that they have value. We aren't killing DLQs but rather rethinking them with better ergonomics. Instead of having a dumping ground for unacked messages, we're developing a "replay" feature that lets you retry failed jobs over a period of time. The replay runs failures in a separate queue, which can be cancelled at any time, and the replay itself can be retried if there's still a problem.
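To make the shape of the idea concrete, here's a rough conceptual sketch. This is not our actual API, just TypeScript with invented names for the concept:

```ts
// Conceptual sketch only: this is not the Inngest API, just the shape of the idea.
// Failures from a time window are re-run on a separate queue, the replay can be
// cancelled at any time, and a whole replay can be run again if problems persist.
interface FailedRun {
  jobId: string;
  failedAt: number; // epoch ms
  payload: unknown;
}

interface Replay {
  id: string;
  cancelled: boolean;
}

async function startReplay(
  failures: FailedRun[],
  window: { from: number; to: number },
  enqueueOnReplayQueue: (run: FailedRun) => Promise<void>,
): Promise<Replay> {
  const replay: Replay = { id: `replay-${Date.now()}`, cancelled: false };
  const inWindow = failures.filter((f) => f.failedAt >= window.from && f.failedAt <= window.to);

  for (const run of inWindow) {
    if (replay.cancelled) break;     // a replay can be cancelled at any time
    await enqueueOnReplayQueue(run); // failures run on their own queue, away from live traffic
  }
  return replay;                     // if runs still fail, kick off another replay
}
```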