The LLM still benefits from the abstraction provided by Python (fewer tokens and less cognitive load). I could see a pipeline where one model writes in Python or similar, then another model is tasked with compiling it into a more performant language.
At that point, the legibility and prevalence of humans who can read the code becomes almost more important than which language the machine "prefers."
The future belongs to generalists!
- Libraries don't necessarily map one-to-one from Python to Rust/etc.
- Paradigms don't map neatly; Python is object-oriented, while Rust leans more toward functional patterns.
- Even if the code is rewritten in Rust, it's probably not the most idiomatic ("Rustic"?) or the most performant approach.
Also, what happens when bug fixes are needed? Again, first in Python and then in Rust?
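As a toy illustration of why the mapping isn't one-to-one (a hypothetical sketch, not something from the thread): Python happily attaches behavior to objects at runtime, and a line-by-line transliteration into Rust's static structs and traits has no direct equivalent for that, so the "compiler" model would have to restructure the design, not just translate syntax.

```python
# A common dynamic-Python pattern: add a method to a class at runtime.
# A straight line-by-line port to Rust can't express this; an idiomatic
# Rust version would be restructured around traits or enums instead.

class Plugin:
    """Defined with no behavior at all."""
    pass

def load_plugins() -> Plugin:
    # Monkey-patch a method onto the class after the fact.
    Plugin.run = lambda self: "ran dynamically"
    return Plugin()

plugin = load_plugins()
print(plugin.run())  # prints "ran dynamically"
```

This is exactly the kind of thing that makes "just recompile the Python into Rust" harder than it sounds: the dynamism isn't an implementation detail, it's part of the program's structure.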
Couldn't be more correct.
Experienced generalists who know verification and testing techniques are the winners [0] in this.
But one thing you cannot do is openly admit, or be found out to have said, something like "I don't know a single line of Rust/Go/TypeScript/$LANG but I used an AI to write all of it" when the system breaks down and you can't fix it.
It would be quite difficult to take seriously a SWE who prides themselves on having zero understanding of, or experience with, building production systems, and who runs the risk of losing the company time and money.
[0] >>46772520
I really do want to live in the world where P = NP and we can trivially get polynomial-time algorithms for problems believed to be NP-hard.
I reject your reality and substitute my own.