I think this has potential. As we all know, natural language is a weak tool for expressing logic. Programming languages, on the other hand, are limited by their feature sets and paradigmatic alignments. But whatever language we use to express a particular piece of software, the yield for the end user is virtually the same: how the logic is laid out and worked through internally has practically no effect on the perceived functionality. A button that displays an alert on the screen can be programmed in numerous languages, but the effect is always the same.

If, however, we had drivers and APIs for everything we could possibly need while designing a program, we could just emit structured data to endpoints in a dataflow fashion, so that the program manifests as a managed activation pattern. In this scenario, different APIs could expose different schemas, each addressed through its own specialized syntax: nano DSLs, one per task. Conceptually, this would not be so different from the ISAs embedded in processors: each instruction has its own syntax and semantics, only far more regular and simple.

But for this kind of pure composability to work at a high level, we would need to fully rework the ecosystem and its platforms. In this context, a single computer would need to work like a distributed system with a homogeneous design and tightly integrated semantics across all its resident components.
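To make the "emit structured data to endpoints" idea concrete, here is a minimal sketch. Everything in it is hypothetical: the endpoint name "ui.alert", the in-process registry, and the decorator are stand-ins for what would really be drivers and APIs provided by the platform, not by the program itself.

```python
from typing import Callable, Dict

# Toy registry mapping endpoint names to handlers ("drivers").
# In the scenario described above, these would live outside the
# program, in the platform, each with its own schema.
endpoints: Dict[str, Callable[[dict], None]] = {}

def driver(name: str):
    """Register a handler as the driver behind an endpoint."""
    def register(handler: Callable[[dict], None]):
        endpoints[name] = handler
        return handler
    return register

def emit(endpoint: str, payload: dict) -> None:
    """Emit structured data to an endpoint. The 'program' is just
    the activation pattern these emissions produce."""
    endpoints[endpoint](payload)

@driver("ui.alert")
def show_alert(payload: dict) -> None:
    # Stand-in for a platform alert driver.
    print(f"ALERT: {payload['text']}")

# The button example from above: a press is nothing but structured
# data flowing to the alert endpoint; which language produced the
# payload is invisible to the user.
def on_button_press() -> None:
    emit("ui.alert", {"text": "Button pressed"})

on_button_press()
```

The point of the sketch is that `on_button_press` contains no alert logic at all; swapping the driver registered under "ui.alert" changes the implementation without touching the emitting side, which is the composability property the paragraph is after.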