zlacker

Ask HN: When do we expose "Humans as Tools" so LLM agents can call us on demand?

submitted by vedmak+(OP) on 2026-01-01 18:09:31 | 48 points 31 comments

Serious question.

We're building agentic LLM systems that can plan, reason, and call tools via MCP. Today those tools are APIs. But many real-world tasks still require humans.

So… why not expose humans as tools?

Imagine TaskRabbit or Fiverr running MCP servers where an LLM agent can:

- Call a human for judgment, creativity, or physical actions

- Pass structured inputs

- Receive structured outputs back into its loop

At that point, humans become just another dependency in an agent's toolchain: slower and more expensive, but occasionally necessary.
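To make the idea concrete, here's a minimal sketch of what the structured interface might look like. Everything here is hypothetical: the schema fields, the `HumanTaskRequest`/`HumanTaskResult` names, and the `dispatch_to_human` stub are illustrative only, not from any real MCP server or marketplace API.

```python
from dataclasses import dataclass, asdict
import json

# Hypothetical schema for a "human as tool" call. A real MCP server
# would expose this as a tool definition with a JSON Schema for inputs.

@dataclass
class HumanTaskRequest:
    skill: str            # e.g. "copywriting", "physical-errand"
    instructions: str     # what the agent wants done
    max_price_usd: float  # budget cap the agent is willing to pay
    deadline_s: int       # how long the agent will wait on this tool

@dataclass
class HumanTaskResult:
    status: str           # "completed" | "declined" | "timeout"
    output: str           # free-text result fed back into the agent loop

def dispatch_to_human(req: HumanTaskRequest) -> HumanTaskResult:
    """Stub: a real server would route to a marketplace worker queue
    and block (or poll) until a human accepts and finishes the task."""
    # Canned response so the call/return loop shape is visible.
    return HumanTaskResult(status="completed",
                           output=f"[human did: {req.instructions}]")

# From the agent's perspective this is just another tool: JSON in, JSON out.
req = HumanTaskRequest("copywriting", "Write a tagline for a bakery", 15.0, 3600)
res = dispatch_to_human(req)
print(json.dumps(asdict(res)))
```

The only real difference from an API tool is latency and failure modes: the `deadline_s` and a "declined" status matter far more when the executor is a person.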

Yes, this sounds dystopian. Yes, it treats humans as "servants for AI." That's kind of the point. It already happens manually... this just formalizes the interface.

Questions I'm genuinely curious about:

- Is this inevitable once agents become default software actors? (As of basically now?)

- What breaks first: economics, safety, human dignity, or regulation?

- Would marketplaces ever embrace being "human execution layers" for AI?

Not sure if this is the future or a cursed idea we should actively prevent... but it feels uncomfortably plausible.


3. bitwiz+Ce 2026-01-01 19:41:43
>>vedmak+(OP)
https://en.wikipedia.org/wiki/Manna_(novel)
5. notjul+vr 2026-01-01 21:20:15
>>vedmak+(OP)
I know nothing about this other than I thought it was a joke at first, but I think it's the same idea https://github.com/RapidataAI/human-use
25. gnz11+o82 2026-01-02 13:37:39
>>bitwiz+Ce
Player Piano by Kurt Vonnegut also comes to mind. https://en.wikipedia.org/wiki/Player_Piano_%28novel%29
28. impend+bj3 2026-01-02 20:43:29
>>taurat+Ed
Indeed, I wonder if these angry young people would try to fuck with these AI agents, and attempt to make them spin in circles for their own amusement.

Sort of like the infamous GameStop short squeeze of 2021:

https://en.wikipedia.org/wiki/GameStop_short_squeeze

29. zeroco+4F3 2026-01-02 22:50:32
>>vedmak+(OP)
The framing assumes cloud-first AI agents as the default caller. But there's another path: local-first AI where the human remains the orchestrator and the model never phones home.

The "humans as tools" model only works if the AI layer is centralized and owned by platforms. If inference runs on hardware you control, you're not callable - you're the one calling.

Been thinking about this a lot: https://www.localghost.ai/reckoning
