You’re taking a bunch of pre-built abstractions written by other people on top of what the computer is actually doing and plugging them together like LEGOs. The artificial syntax that you use to move the bricks around is the thing you call coding.
The human element of discovery is still there even if a robot stacks the bricks based on a different syntax (natural language); nothing about that precludes authenticity or the human element of creation.
This actually leaves me with a lot more time to think about what I want the UI to look like, how I'll market my software, and so on.
I find languages like JavaScript promote the idea of “Lego programming” because you’re encouraged to use a module for everything.
But when you start exploring ideas that haven’t been thoroughly explored already, particularly in systems languages that are less zealous about DRY (don’t repeat yourself) methodologies, you can feel a lot more like a sculptor.
Likewise if you’re building frameworks rather than reusing them.
So it really depends on the problems you’re solving.
For general day-to-day coding for your average 9-to-5 software engineering job, I can definitely relate to why people might think coding is basically “LEGO engineering”.
Isn't the analogy apt? You can't make a working car from a lump of clay, only a car statue; a lump of clay is itself an abstraction of objects you can make in reality.
Correct. However, you will probably notice that your solution to the problem doesn't feel right when the bricks available to you don't compose well. The AI will just happily smash bricks together, and at first glance it might seem that the task is done.
Choosing the right abstraction (bricks) is part of finding the right solution. And understanding that choice often requires exploration and contemplation. AI can't give you that.
The other day people were talking about metrics: the number of lines of code people vs. LLMs could output in a given time, or the lines of code in an LLM-assisted application, using LOC as a measure of productivity.
But would an LLM ever suggest using a utility or library, or re-architecting an application, over writing its own code?
I've got a fairly simple application that renders a table (and in future some charts) with metrics. At the moment all of that is done "by hand"; the last features were things like filtering and sorting the data. But that kind of thing can also be done by a "data table" library. Or the whole application could be thrown out in favor of a workbook (one of those data analysis tools; I'm not at home in that area at all). That'd save hundreds of lines of code, plus the maintenance burden.
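For concreteness, the hand-rolled version of filtering and sorting is only a few lines, which is exactly the kind of code a data-table library would absorb (the row shape and field names here are made up for illustration):

```javascript
// Hypothetical metric rows, standing in for whatever the table renders.
const rows = [
  { name: "build", durationMs: 420 },
  { name: "test",  durationMs: 1300 },
  { name: "lint",  durationMs: 90 },
];

// "By hand": filter by a predicate, then sort by a column, ascending.
const view = rows
  .filter(r => r.durationMs > 100)
  .sort((a, b) => a.durationMs - b.durationMs);

console.log(view.map(r => r.name)); // ["build", "test"]
```

Each new filter or sortable column grows this by hand; a library (or a workbook) trades that growth for a dependency.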
I can do some CRUD apps where it's just data input to data store to output, with little shaping needed. Or I can do apps with lots of filters, actions, and logic driven by what's inputted, which require some thought to ensure they actually solve the problem they're proposed for.
"Shaping the clay" isn't about the clay, it's about the shaping. If you have to make a ball of clay and also have to build a Lego bridge a 175 kg human can stand on, you'll learn more about Lego and building with it than you will about clay.
Get someone to give you a Lego instruction sheet and you'll learn far less, because you're not shaping anymore.
We’ve created formal notation to shorten writing. And computation is formal notation that is actually useful. Why write pages of specs when I could write a few lines of code?
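As a toy illustration of code as useful formal notation: a claim that would take a paragraph of prose ("the sum of the first n odd numbers is n squared") becomes a few executable lines (function name and structure are my own choices here):

```javascript
// Executable spec: the sum of the first n odd numbers equals n * n.
const sumFirstOdds = n =>
  Array.from({ length: n }, (_, i) => 2 * i + 1) // [1, 3, 5, ...]
    .reduce((sum, x) => sum + x, 0);

console.log(sumFirstOdds(5)); // 25, i.e. 5 * 5
```

Unlike a page of prose, this version can be run and checked.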
The risk of LLMs laying more of these bricks isn't just a loss of authenticity and fewer human elements of discovery and creation; it's further down the path of "there's only one instruction manual in the Lego box, and that's all the robots know and build for you". It's an increased commodification of a few legacy designers' worth of work across a larger creative space than at first seems apparent.
Software developers can use the exact same "lego block" abstractions ("this code just multiplies two numbers") and tell very different stories with it ("this code is the formula for force power", "this code computes a probability of two events occurring", "this code gives us our progress bar state as the combination of two sub-processes", etc).
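A minimal sketch of that point: the exact same "brick" (multiplying two numbers) carries a different story each time it's used, and the story lives in the names and context, not the mechanics (the surrounding values are invented for illustration):

```javascript
// The same "Lego brick": multiply two numbers.
const mul = (a, b) => a * b;

// Story 1: Newton's second law, F = m * a.
const forceN = mul(2, 9.8); // 2 kg at 9.8 m/s^2 -> 19.6 N

// Story 2: probability that two independent events both occur.
const pBoth = mul(0.5, 0.25); // -> 0.125

// Story 3: overall progress as the product of two sub-process fractions.
const progress = mul(0.8, 0.5); // -> 0.4

console.log(forceN, pBoth, progress);
```

Mechanically these three lines are identical; the "why" behind each one is entirely different.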
LLMs have only so many "stories" they are trained on, and so many ways of thinking about the "why" of a piece of code rather than mechanical "what".
Software engineering is all about making sure the what actually solves the why, and about making the why visible enough in the what that we can modify the latter when the former changes (it always does).
Current LLMs are not about transforming a why into a what. They're about transforming an underspecified what into some what that we hope fits the why. But as we all know from the 5 Whys method, whys are recursive structures, and most software engineering is about diving into the details of the why. The what is easy once that's done, because computers are simple mechanisms if you choose the correct level of abstraction for the project.
Aren't Legos known for their ability to enable creativity and endless possibilities? It doesn't feel that different from the clay analogy, except a bit coarser grained.