That's a big part of the issue with machine learning models: they are opaque. You build a model with a bunch of layers and hyperparameters, but no one really understands how it works or, by extension, how to "fix bugs".
If we say it "does what it was programmed to", what was it programmed to do? Here is the data that was used to train it, but how will it respond to a given input? Who knows?
That does not mean they need to be heavily regulated. On the contrary, they need to be opened up and thoroughly "explored" before we can "entrust" them with given functions.
Who's to say we're not in a simulation? Who's to say god doesn't exist?
Until a model of human sentience and awareness is established, philosophy is all we have, and ideas are debated on their merits within that space. (Note: this is one of the oldest problems out there, alongside the movements of the stars. It is an ancient debate, still open-ended, and nothing anyone is saying in these threads is new.)
Its output is predicated upon its training data, not user-defined prompts.
It's essentially a function that is called recursively on its own result; there's no need to represent state.
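To make that concrete, here's a toy sketch (purely illustrative; next_token is a stand-in, not a real LLM): generation is just a function applied to its own output, and the only "state" is the token sequence that gets fed back in each step.

    # Toy sketch: the "model" is a function from a token sequence to the
    # next token, applied repeatedly to its own output. The only state is
    # the growing sequence itself. next_token is a dummy stand-in here.
    def next_token(tokens):
        # Pretend model: picks a word based on the sequence length.
        vocab = ["the", "cat", "sat", "."]
        return vocab[len(tokens) % len(vocab)]

    def generate(prompt_tokens, steps):
        tokens = list(prompt_tokens)
        for _ in range(steps):
            tokens = tokens + [next_token(tokens)]  # feed the output back in
        return tokens

    print(generate(["hello"], 5))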
You're conflating the UX with the LLM.
Incorrect: we can't predict its output because we cannot look inside. That's a limitation, not a feature.
Prompts very obviously influence the output.
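If you want to see that directly, here's a rough sketch assuming the Hugging Face transformers library with GPT-2 as a stand-in (any causal LM would do): the next-token distribution is computed conditionally on the prompt, so swapping the prompt swaps the distribution.

    # Rough demonstration that the next-token distribution is conditioned
    # on the prompt. Assumes transformers + GPT-2 as an arbitrary example.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    for prompt in ["The capital of France is", "The capital of Japan is"]:
        inputs = tok(prompt, return_tensors="pt")
        with torch.no_grad():
            logits = model(**inputs).logits           # [1, seq_len, vocab]
        probs = torch.softmax(logits[0, -1], dim=-1)  # next-token distribution
        top = torch.topk(probs, k=3)
        print(prompt, "->", [tok.decode(int(i)) for i in top.indices])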
I don't think this is accurate. Sure, no human can understand 500 billion individual parameters and what they are doing. But you can certainly look at some and say "these are giving a huge weight to this word, especially in this context, and that's weighting it towards this output".
You can also look at how activations propagate through the network, the impact of hyperparameters, how the architecture affects things, and so on. They aren't truly black boxes except by virtue of scale. You could also use automated tools to probe the networks.
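As one rough illustration of that kind of probing (assuming the Hugging Face transformers library and GPT-2; gradient-norm saliency is just one crude technique among many), you can ask which input tokens pull the model hardest toward a particular next token:

    # Crude gradient-based saliency sketch, assuming transformers + GPT-2.
    # It scores how strongly each input token pushes the model toward one
    # particular next token. A simple probe, not a full interpretability tool.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    inputs = tok("The doctor told the nurse that", return_tensors="pt")
    embeds = model.get_input_embeddings()(inputs["input_ids"]).detach()
    embeds.requires_grad_(True)

    logits = model(inputs_embeds=embeds).logits
    target_id = tok(" she", add_special_tokens=False)["input_ids"][0]
    logits[0, -1, target_id].backward()  # gradient of that one logit

    # Gradient norm per input token: a rough measure of influence.
    scores = embeds.grad.norm(dim=-1)[0]
    for token_id, score in zip(inputs["input_ids"][0], scores):
        print(repr(tok.decode(int(token_id))), round(float(score), 4))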