I have a hypothesis that an LLM can act as a pseudocode-to-code translator, where the pseudocode tolerates a mixture of code-like and natural-language specification. The benefit is that it formalizes the human as the specifier (a role that must be filled anyway) and the LLM as the code writer. This might also make lower-resource “non-frontier” models more useful. Additionally, it tolerates syntax mistakes or, in the worst case, falls back to plain natural language.
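To make the idea concrete, here is a minimal sketch of such a translator harness. The `complete` function is a stand-in for whatever model API you have access to, and the prompt wording is purely illustrative; none of these names come from a real library.

```python
def complete(prompt: str) -> str:
    """Stand-in for any LLM completion API (local or hosted).

    Hypothetical: wire up the model of your choice here.
    """
    raise NotImplementedError

SYSTEM = (
    "You are a pseudocode-to-code translator. The input mixes "
    "code-like fragments with natural language and may contain "
    "syntax mistakes. Emit only valid Python implementing the spec."
)

def translate(pseudocode: str) -> str:
    """Send a mixed pseudocode/natural-language spec to the model."""
    return complete(f"{SYSTEM}\n\nSpec:\n{pseudocode}\n\nPython:")

# Example spec: the human formalizes intent, the model handles syntax.
spec = """
for each order in orders:
    if order is unpaid and older than 30 days -> cancel it
return the number cancelled
"""
# generated = translate(spec)  # would return Python source for the spec
```

Note that the spec above would be rejected by any parser, yet its intent is unambiguous to a human, which is exactly the gap this setup asks the model to bridge.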
In other words, I think LLMs don’t need new languages; we do.
Consider:
"Eat grandma if you're hungry"
"Eat grandma, if you're hungry"
"Eat grandma. if you're hungry"
Same words, entirely different outcomes, with nothing but punctuation to tell them apart.
Pseudocode to clarify:
[Action | Directive - Eat] [Object - Grandma] [Condition on actor - if hungry]
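To show where this lands, here is one way that bracketed spec collapses into unambiguous code. The names `eat`, `Grandma`, and `hungry` are illustrative, lifted from the example rather than any real API; a sketch assuming a Python target:

```python
def eat(target: str) -> None:
    print(f"eating {target}")  # placeholder for the directive's action

hungry = True  # [Condition on actor - if hungry]

if hungry:          # the conditional is now structurally unambiguous
    eat("Grandma")  # [Action | Directive - Eat] applied to [Object - Grandma]
```

Once in code form, the punctuation variants can no longer be confused: the condition either guards the action or it doesn't.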