We’re building software to automatically fix technical debt. We do this by combining expressive static analysis tooling (we’ve built a developer-friendly query language for code) with machine learning to automatically migrate code to new patterns and platforms. We’re in private beta, but we’ve already deployed thousands of successful changes for initial customers and have raised a large seed round from top investors (Founders Fund, 8VC, Abstract Ventures). We’re solving challenges at the intersection of machine learning and programming languages using Scala, Rust, TypeScript, and LLMs. We’re hiring both smart generalists and PL/ML experts who are interested in collaborating on problems like automatically inserting new types into codebases and using unit tests for self-supervised learning.
Here are a few reasons you might be interested in applying:
- We’re working on the edge of the possible and doing deep technical work in parsers/language design and machine learning.
- We mostly work in-person in New York, but remote is also possible for outstanding candidates who have proven success in a remote model.
- I’m personally committed to giving quick feedback to everyone who applies.
Find out more at https://www.grit.io/careers or email morgante@grit.io.
I hope that it can create tests for the code that changes to ensure that it retains the same outputs (including side effects) for the same inputs.
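A minimal sketch of that idea in TypeScript (one of the languages the post mentions): record a "golden" snapshot of a legacy function's outputs over sampled inputs, then check that a migrated version reproduces them. The `legacySlug`/`newSlug` functions are purely illustrative, and this only compares return values; pinning down side effects would additionally require recording the effects themselves (writes, calls, logs).

```typescript
// Characterization ("golden master") test sketch for a migration.

// Hypothetical legacy implementation slated for replacement.
function legacySlug(title: string): string {
  return title
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/^-|-$/g, "");
}

// Hypothetical refactored implementation that should behave identically.
function newSlug(title: string): string {
  const cleaned = title.trim().toLowerCase();
  return cleaned.split(/[^a-z0-9]+/).filter(Boolean).join("-");
}

// Step 1: capture golden outputs from the legacy code over sample inputs.
const samples = ["Hello, World!", "  Tech Debt 101 ", "a--b"];
const golden = new Map(samples.map((s) => [s, legacySlug(s)]));

// Step 2: assert the migrated code matches every recorded output.
const mismatches = samples.filter((s) => newSlug(s) !== golden.get(s));
console.log(
  mismatches.length === 0
    ? "behavior preserved"
    : `mismatches on: ${mismatches.join(", ")}`
);
```

In practice the sample inputs would come from recorded production traffic or fuzzing rather than a hand-picked list.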
But I have to ask: how do you define tech debt? What about these scenarios:
1) comments above the code say “do not touch, here be dragons” ;)
2) code change velocity is currently zero, team used to care and wanted to change to something modern, no one cares anymore.
3) above but with good testing
4) moderate code-change velocity with OK tests but old/bad patterns, and the code is an interchange between various parts of the system
5) high velocity code change, no tests, known bug utopia, zillion a/b tests, marketing wants changes hourly. 8)