1. davnic+(OP) 2026-02-04 19:29:47
yeah I think this is exactly how the analogy breaks down.

As humans we need to specialise. Even though we're generalists with the a priori potential to learn and do all manner of things, we have to pick just a few to focus on to be effective (the beautiful dilemma, etc.).

I think the basic reason is that we're limited by learning time and, relatedly, by execution bandwidth: how many things we can reasonably do in a given time period.

LLMs don't have these constraints in the same way. As you say, they come preloaded with absolutely everything all at once; there's little or no marginal time investment in learning any one thing. As for output bandwidth, it scales horizontally with the compute supplied.

So I just think the inherent limitations that make us organise human work around the individual, working in teams and so on, don't apply to LLMs and are counterproductive to impose on them. There's a real cost to all that structure that LLMs can simply sidestep, and that's part of the power of the new paradigm that shouldn't be left on the table.

replies(1): >>tehjok+0k
2. tehjok+0k 2026-02-04 21:00:58
>>davnic+(OP)
I suppose the AI can wear many "hats" simultaneously, but it does have to be competent enough in at least one or two of them for that to be viable. One way to think about it is that roles can be consolidated.