I love that people do these ridiculous but inventive things. Never would have imagined this one.
I did my senior thesis/project in CS (we had to do several, it was anticlimactic) about visual programming and the basic paradigms that might be the future.
I ended up writing a missive about LabVIEW holding people back, because 2D planes suck at communicating information to people who otherwise read books and blogs and C# code.
My conclusion 15 years later is that we’ll talk to LLMs and their successors rather than invent a great graphical user interface that works like a desktop or a <table> or even a repl.
Star Trek may have inspired the iPad and terrible polygon capacitive touchscreens… but we all know that “Computer, search for M-class planets without fans of Nickelback’s second album living there as of stardate 2024” is already basically a reality.
EDIT: I like this CPU experiment too! It is a great example of the thing I’m talking about. Realized after the fact that I failed to plant my context in my comment, before doing my graybeard routine.
So. Food for thought: our LLM overlords are just unfathomable spreadsheets.
Natural language programming is not going to happen either, because natural languages are too ambiguous. You can probably write code iteratively in a natural language in some kind of dialog, clarifying things as ambiguities arise, but using that dialog as the source of truth and treating the resulting code as a derivative output does not sound very useful to me. So if I had to bet, I would bet that text-based programming languages are not going anywhere soon.
Maybe one day there will be no code at all, and everything will just contain small artificial brains doing the things we want them to do without anything we would recognize as a program today, but who knows, and it does not seem worth speculating about to me.
In the nearer term I could see domain-specific languages becoming more prevalent. A huge amount of the code we write is technical detail, because we have to express even the highest-level business logic in terms of booleans, integers and strings. If we had a dozen different languages tailored to different aspects of an application, we could write a lot less code.
We have this to a certain extent: a language for code in general, one for querying data, one for laying out the user interface, one for styling it. But they are badly integrated and not customizable. The problem, of course, is that developing good languages and evolving them is hard, and lowering them into some base language is tedious work. But in principle I could imagine that progress is possible on this front and that this becomes practical.
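To make that a bit more concrete, here is a rough sketch of what "better integrated" could mean: tiny embedded languages that share the host language's values instead of living in separate, glued-together files. This is purely illustrative TypeScript; the `sql` and `css` tags are made up for the example, not a real library.

```typescript
// Purely illustrative: two tiny embedded "languages" that share the host
// language's values, instead of living in separate, badly integrated files.

// Hypothetical query tag: interpolates values into a query string.
const sql = (parts: TemplateStringsArray, ...vals: unknown[]): string =>
  parts.reduce((acc, p, i) => acc + p + (i < vals.length ? JSON.stringify(vals[i]) : ""), "");

// Hypothetical styling tag: same idea for a style snippet.
const css = (parts: TemplateStringsArray, ...vals: unknown[]): string =>
  parts.reduce((acc, p, i) => acc + p + (i < vals.length ? String(vals[i]) : ""), "");

// One host-language value flows into both "languages" with no glue code.
const minAge = 18;
const query = sql`SELECT name FROM users WHERE age >= ${minAge}`;
const style = css`.adult-row { font-weight: ${minAge >= 18 ? "bold" : "normal"}; }`;

console.log(query); // SELECT name FROM users WHERE age >= 18
console.log(style); // .adult-row { font-weight: bold; }
```

The point is not the string-building, it's that the "query language" and the "styling language" see the same variables and could, in principle, be checked by the same tooling.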
Not saying graphical programming is a good idea, but the basic abstraction mechanism is to define new boxes, which you can open to look inside (like some VHDL modeling tools do). Even SICP says this is a bad idea and does not scale. But it is clear that the primitive is the box, the means of combination is the lines between boxes, and the means of abstraction is making new boxes.
I think the real problem is that there is exactly one primitive, one means of combination, and one means of abstraction, and that does not seem to be enough.
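Written down, that single primitive/combination/abstraction story looks something like the toy sketch below (TypeScript, my own invented names, and the wiring simplified to a straight pipeline rather than an arbitrary graph):

```typescript
// Toy sketch: one primitive (the box), one means of combination (connecting
// boxes; simplified here to a pipeline), one means of abstraction
// (a composite is itself a box).

type Box =
  | { kind: "primitive"; run: (inputs: number[]) => number }
  | { kind: "composite"; boxes: Box[] }; // the "lines" are implicit: each box feeds the next

function run(box: Box, inputs: number[]): number {
  if (box.kind === "primitive") return box.run(inputs);
  let value = inputs;
  let out = 0;
  for (const b of box.boxes) {
    out = run(b, value);
    value = [out];
  }
  return out;
}

const add: Box = { kind: "primitive", run: ([a, b]) => a + b };
const double: Box = { kind: "primitive", run: ([x]) => x * 2 };

// Abstraction = drawing a new box around existing boxes.
const addThenDouble: Box = { kind: "composite", boxes: [add, double] };

console.log(run(addThenDouble, [3, 4])); // 14
```

One data type and one recursive evaluator is the whole model, which is exactly why it feels thin compared to what textual languages give you.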
I did a huge amount of Excel with elaborate VB macros. Thinking back, it strikes me as odd that a dataflow programming tool used a conventional language as its macro language.
I think this has potential. As we all know, natural language is a weak tool for expressing logic. On the other hand, programming languages are limited by their feature set and paradigmatic alignment. But whatever language we use to express a particular piece of software, the yield for the end user is virtually the same: how the logic is laid out and implemented has practically no effect on the perceived functionality. A button programmed to display an alert on the screen can be written in numerous languages, but the effect is always the same.

If, however, we had drivers and APIs for everything we could possibly need in the course of designing a program, then we could just emit structured data to endpoints in a dataflow fashion, so that the program is manifested as a managed activation pattern. In this scenario, different APIs could have different schemas, and those could effectively be synthesized through specialized syntax, hence nano DSLs for each task. Conceptually it would not be so different from the ISAs embedded in processors: each instruction has its own syntax and semantics, only very regular and simple.

But for this scenario of pure composability to work at a high level, we would need to fully rework the ecosystem and platforms. A single computer would need to work like a distributed system with a homogeneous design and tightly integrated semantics for all its resident components.
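To give a flavour of what I mean, here is a toy TypeScript sketch; every name, endpoint and schema in it is invented for the example:

```typescript
// Toy sketch: every capability is a "driver" with its own tiny schema, and a
// program is just structured messages routed to those drivers.

type UiMessage = { endpoint: "ui"; action: "alert"; text: string };
type StoreMessage = { endpoint: "store"; action: "put"; key: string; value: unknown };
type Message = UiMessage | StoreMessage;

// Each driver only understands its own schema (its own "nano DSL").
const drivers = {
  ui: (m: UiMessage) => console.log(`[ui] alert: ${m.text}`),
  store: (m: StoreMessage) => console.log(`[store] put ${m.key} = ${JSON.stringify(m.value)}`),
};

// The "program" is an activation pattern over the drivers.
function emit(m: Message): void {
  switch (m.endpoint) {
    case "ui": return drivers.ui(m);
    case "store": return drivers.store(m);
  }
}

emit({ endpoint: "ui", action: "alert", text: "hello" });
emit({ endpoint: "store", action: "put", key: "clicks", value: 1 });
```

Obviously real drivers would live behind processes or hardware rather than a switch statement, but the shape of the idea is the same: structured data in, effects out.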
You won’t be using ChatGPT to write source code that you copy and paste, and debug.
You’ll be saying “no she was wearing a shorter dress, with flowers,” like Geordi LaForge using the holodeck to solve mysteries.
The boilerplate below won’t even be necessary. Here’s how I see it working though:
“Hey, welcome to Earth. So here’s the deal. You maintain my website that drop-ships pokemon cards using paypal merchant integration. You will have a team of AI designers who you will hire for specific skills by designing a plan with detailed job descriptions.
I want one guy to just make funny comments in the PR history. Make it look like cyberpunk. Respect EU privacy laws by maintaining regional databases, and hire another agent who has a JD to follow similar regulatory requirements in the news.
I hate oracle databases and j2ee, use anything else ;)”
I think there has been evolution on the underlying data-computation side of things, but there are still unsolved questions about the 'visibility' of graphical node-based approaches. A node-based editor is easy to write with, hard to read with.