zlacker

1. Libidi+ (OP) 2025-12-06 15:49:54
I just can't imagine we are close to letting LLMs do electrical work.

Something I notice that doesn't get talked about much is how "steerable" the output is.

I think this is a big reason one-shots are used as examples.

Once you get past one-shots, so much of the output depends on the context the previous prompts have created.

Instead of one-shots, try something that requires 3 different prompts on a subject with real uncertainty involved. Do 4 or 5 iterations and you will often get wildly different results.
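
To make that experiment concrete, here's a minimal sketch in Python (call_llm is a hypothetical wrapper around whatever chat API you're using, not any specific library): it replays the same three-prompt conversation several times and prints the final answers so you can see how far they drift.

  def run_conversation(call_llm, prompts):
      """Feed the same fixed prompts in order, carrying context forward."""
      history = []
      for prompt in prompts:
          history.append({"role": "user", "content": prompt})
          reply = call_llm(history)
          history.append({"role": "assistant", "content": reply})
      return history[-1]["content"]  # final answer of the prompt chain

  def compare_runs(call_llm, prompts, iterations=5):
      """Repeat the identical conversation and collect the final answers."""
      finals = [run_conversation(call_llm, prompts) for _ in range(iterations)]
      for i, answer in enumerate(finals, 1):
          print(f"--- run {i} ---\n{answer}\n")
      return finals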

It doesn't seem like we have a word for this. A "hallucination" is when we know what the output should be and it is just wrong. This is more like the user steering the model toward an answer when there is a lot of uncertainty about what the right answer even is.

To me this always comes back to the problem that the models are not grounded in reality.

Letting LLMs do electrical work without grounding in reality would be insane. No pun intended.

replies(1): >>knolli+u5
2. knolli+u5 2025-12-06 16:31:23
>>Libidi+(OP)
You'd have to have subagents call the tools, keep their context limited, and give each one only the tools it needs along with explicit instructions.
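
Something like this minimal sketch is what I mean, with run_agent and the lookup tool as hypothetical stand-ins rather than any real framework's API; the point is the fresh context and the explicit tool whitelist.

  def lookup_ampacity(awg_gauge: int) -> str:
      """Hypothetical tool: canned breaker sizes for common wire gauges."""
      table = {14: "15 A", 12: "20 A", 10: "30 A"}
      return table.get(awg_gauge, "unknown")

  ALL_TOOLS = {"lookup_ampacity": lookup_ampacity}

  def dispatch_subagent(run_agent, task, allowed_tools, instructions):
      """Run one narrowly scoped task with only the tools it actually needs."""
      tools = {name: ALL_TOOLS[name] for name in allowed_tools}
      # Fresh context: no prior conversation is carried in, so earlier
      # prompts in the session can't steer this subagent's output.
      messages = [
          {"role": "system", "content": instructions},
          {"role": "user", "content": task},
      ]
      return run_agent(messages=messages, tools=tools)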

I think they'll never be great at switchgear rooms but apartment outlet circuitry? Why not?

I have a very rigid workflow for the outputs I want, so using an LLM to shape the inputs is promising. You don't need to automate everything; high-level choices should still be made by a human.
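
As a rough sketch of what I mean, assuming the same hypothetical call_llm helper as above: the LLM only normalizes free-form notes into a fixed schema, and a person still signs off on the actual decision.

  import json

  SCHEMA_PROMPT = (
      "Rewrite the following job note as JSON with exactly these keys: "
      '"location", "circuit", "requested_change". Reply with JSON only.\n\n'
  )

  def shape_input(call_llm, free_text):
      """Use the model only to turn messy notes into the rigid format the workflow expects."""
      raw = call_llm(SCHEMA_PROMPT + free_text)
      return json.loads(raw)  # fails loudly if the model didn't follow the schema

  def approve(shaped):
      """The high-level choice stays with a human, who signs off before anything runs."""
      print(json.dumps(shaped, indent=2))
      return input("Proceed? [y/N] ").strip().lower() == "y"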
