But is this actually true? They don't claim that as far as I can tell, and it also doesn't compile for me, nor in their own CI it seems.
I guess at some point something compiled, but I can't be bothered to hunt for that commit. They should've left the repo in a better state before publishing that blog post.
If you can't reproduce or even compile the experiment, then it doesn't really work at all and is nothing but a hype piece.
It is also close to impossible to run anything in the Node ecosystem without getting a wall of warnings.
You are an extreme outlier for putting in the work to fix all warnings.
I use AI heavily, so I resorted to turning on warnings-as-errors in the Rust codebases I work in.
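For context, one low-friction way to do this in Rust (a sketch of a common setup, not necessarily what I use everywhere) is the `[lints]` table that Cargo has supported since 1.74, which makes every build fail on any rustc warning:

```toml
# Cargo.toml
# Deny the `warnings` lint group so `cargo build` / `cargo check`
# fail on any compiler warning, locally and in CI.
[lints.rust]
warnings = "deny"

# Optionally escalate Clippy's default lints too.
[lints.clippy]
all = "deny"
```

The alternative is exporting `RUSTFLAGS="-D warnings"` in CI only, which keeps local iteration loose while still gating merges.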
Haven't found that myself. Are you talking about TypeScript warnings, perhaps? I'm mostly using plain JavaScript and try to steer clear of TypeScript projects, and AFAIK neither JavaScript the language nor the runtimes really have warnings, except for deprecations. Are those the ones you're talking about?
The product is still fairly beta, but in Sculptor[^1] we have an MCP that provides the agent and the human with suggestions along the lines of "the agent didn't actually integrate the new module" or "the agent didn't actually run the tests after writing them." It leads to some interesting observations and challenges: the agents still really like ignoring tool calls compared to human messages, because they "know better" (and sometimes they do).