Your comment gets at the crux of my thinking about LLM coding. The way I see it, an LLM coder is decompressing your prompt into code, with the decompression driven by statistical likelihood over the training data: "Build me an iOS app" becomes one concrete implementation of an iOS app. The problem is that the user supplying the prompt has to encode every variable the AI needs to work with into that prompt; otherwise the implementation is just the "bog-standard" iOS app from the training corpus, nudged a bit by whatever other tokens happen to be in the prompt.

Is natural language the right way to encode that information? Do we really want to rely on input tokens in the context window successfully making it into the output in order to guarantee accuracy?

I think Kiro's spec-driven development starts to address the inherent issues with LLM-based coding assistance, but it is an early step.
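
To make the decompression point concrete, here is roughly what I'd expect "Build me an iOS app" to decompress into when nothing else constrains it: the statistical mode of the corpus, which looks a lot like the stock SwiftUI starter template (the names are just illustrative, not any particular tool's output):

    import SwiftUI

    // The "bog-standard" result of an underspecified prompt:
    // essentially the default SwiftUI app template.
    @main
    struct MyApp: App {
        var body: some Scene {
            WindowGroup {
                ContentView()
            }
        }
    }

    struct ContentView: View {
        var body: some View {
            Text("Hello, world!")
                .padding()
        }
    }

Everything that would make it a real app (data model, navigation, persistence, backend) is exactly the stuff that never made it into the prompt, which is the gap a spec is supposed to close.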