The clincher for me was that the few times I ran into issues, often due to my own mistakes, their support was nearly real-time: they worked with me to either help solve the problem or dig in on their end to see where the issue was. Honestly, more than anything, the support gives me confidence to fully commit to this and use it across all my production apps.
Anyway, great stuff all, you’ve built something awesome here.
Thanks! What type of monitoring were you looking for? We have some basic metrics now, but we know we need to improve this. What metrics, alerting, and observability features are important to you?
1. Wait timings for jobs.
2. Run timings for jobs.
3. Timeout occurrences and stdout/stderr logs of those runs.
4. Retry metrics, and if there is a retry limit, then metrics on jobs that were abandoned. (A rough sketch of instrumenting all of these is below.)
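For concreteness, here's a sketch of how a worker could emit those four metric families. Everything here (the `Job` shape, `recordMetric`, the limits) is invented for illustration, not taken from any real SDK:

```ts
// Hypothetical instrumentation of a job runner; names are illustrative only.
type Job = {
  id: string;
  enqueuedAt: number; // epoch ms when the job was enqueued
  attempt: number;    // 1-based attempt counter
  run: () => Promise<void>;
};

const MAX_ATTEMPTS = 5;
const TIMEOUT_MS = 30_000;

// Stand-in for a real metrics client (StatsD, Prometheus, etc.).
function recordMetric(name: string, value: number, tags: Record<string, string> = {}) {
  console.log(name, value, tags);
}

function withTimeout<T>(p: Promise<T>, ms: number): Promise<T> {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(() => reject(new Error("timeout")), ms);
    p.then(
      (v) => { clearTimeout(timer); resolve(v); },
      (e) => { clearTimeout(timer); reject(e); },
    );
  });
}

async function executeJob(job: Job): Promise<void> {
  // 1. Wait timing: time spent in the queue before execution started.
  recordMetric("job.wait_ms", Date.now() - job.enqueuedAt, { job: job.id });

  const startedAt = Date.now();
  try {
    await withTimeout(job.run(), TIMEOUT_MS);
    // 2. Run timing for successful runs.
    recordMetric("job.run_ms", Date.now() - startedAt, { job: job.id, outcome: "ok" });
  } catch (err) {
    const outcome = (err as Error).message === "timeout" ? "timeout" : "error";
    recordMetric("job.run_ms", Date.now() - startedAt, { job: job.id, outcome });
    // 3. Timeout occurrences (capturing stdout/stderr of the run would hang
    //    off the worker process and is omitted here).
    if (outcome === "timeout") recordMetric("job.timeouts", 1, { job: job.id });

    // 4. Retry metrics, plus abandonment once the retry limit is exhausted.
    if (job.attempt < MAX_ATTEMPTS) {
      recordMetric("job.retries", 1, { job: job.id, attempt: String(job.attempt) });
      // ...re-enqueue the job with attempt + 1
    } else {
      recordMetric("job.abandoned", 1, { job: job.id });
    }
  }
}
```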
One thing that is easy to overlook is giving users the ability to define a specific “urgency” for their jobs, which would allow for different alerting thresholds on things like run time or wait time.
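To make that concrete, a per-urgency threshold table could be as simple as the sketch below; the urgency names and the numbers are made up for illustration:

```ts
type Urgency = "low" | "default" | "critical";

// Max acceptable wait and run times before an alert fires, per urgency.
const ALERT_THRESHOLDS: Record<Urgency, { maxWaitMs: number; maxRunMs: number }> = {
  low:      { maxWaitMs: 10 * 60_000, maxRunMs: 5 * 60_000 },
  default:  { maxWaitMs:      60_000, maxRunMs:     60_000 },
  critical: { maxWaitMs:       5_000, maxRunMs:     15_000 },
};

function shouldAlert(urgency: Urgency, waitMs: number, runMs: number): boolean {
  const t = ALERT_THRESHOLDS[urgency];
  return waitMs > t.maxWaitMs || runMs > t.maxRunMs;
}

// A critical job waiting 30s trips an alert; a low-urgency one does not.
console.log(shouldAlert("critical", 30_000, 0)); // true
console.log(shouldAlert("low", 30_000, 0));      // false
```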
Observability is super important for background work, even more so because it's not always tied to a specific user action, so you need a trail to understand issues.
> One thing that is easy to overlook is giving users the ability to define a specific “urgency” for their jobs, which would allow for different alerting thresholds on things like run time or wait time.
We are adding prioritization for functions soon, so this is helpful for thinking about telemetry for jobs with different priorities/urgencies.
re: timeouts - managing timeouts usually means managing dead-letter queues. Our goal is to remove the need to think about DLQs at all by building metrics and smarter retry/replay logic right into the Inngest platform.
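As a rough sketch of that idea (purely illustrative, not Inngest's actual API or implementation): retries with backoff happen platform-side, and jobs that exhaust their attempts stay stored and replayable instead of landing in a DLQ the user has to operate.

```ts
// Hypothetical platform-side retry/replay; all names here are invented.
type FailedJob = { id: string; payload: unknown; attempts: number; lastError: string };

const MAX_ATTEMPTS = 3;

// Exhausted jobs stay in platform storage, queryable and replayable,
// so users never run a dead-letter queue themselves.
const replayable = new Map<string, FailedJob>();

async function runWithRetries(
  id: string,
  payload: unknown,
  handler: (p: unknown) => Promise<void>,
): Promise<void> {
  for (let attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
    try {
      await handler(payload);
      return;
    } catch (err) {
      if (attempt === MAX_ATTEMPTS) {
        // Retry limit hit: retain the job for later replay instead of dropping it.
        replayable.set(id, { id, payload, attempts: attempt, lastError: String(err) });
        return;
      }
      // Exponential backoff between attempts: 1s, 2s, 4s, ...
      await new Promise((r) => setTimeout(r, 1000 * 2 ** (attempt - 1)));
    }
  }
}

// "Replay" re-runs a stored job, e.g. after a bug fix is deployed.
async function replay(id: string, handler: (p: unknown) => Promise<void>): Promise<void> {
  const job = replayable.get(id);
  if (!job) return;
  replayable.delete(id);
  await runWithRetries(job.id, job.payload, handler);
}
```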