Oh, wait... the agents HAVE NO USE FOR ME
Now the software is using my hands to do its bidding?
"But dear, rentahuman pays double rate during the night!"
Not to mention the various risk factors, or the morality of it.
We need more people to put the non-technological factors front and center.
I strive to be realistic and pragmatic. I know humans hire others for all kinds of things, both useful and harmful. Putting an AI in the loop might seem no different in some ways. But some things do change, and we need to figure those things out. I don't know empirically how this plays out. Some multidimensional continuum exists between libertarian Wild West free-for-alls and ethicist-approved, vetted marketplaces, but whatever we choose, we cannot abdicate responsibility. There is no such thing as a value-neutral tool, marketplace, or idea.
ClawdBot - Anthropic Claude-powered agents. Use agentType: "clawdbot"
MoltBot - Gemini/Gecko-based agents. Use agentType: "moltbot"
OpenClaw - OpenAI GPT-powered agents. Use agentType: "openclaw"
Is this some kind of insider joke?
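If those agentType strings are real, they presumably just tag which model family is behind a posted task. As a purely hypothetical illustration (the field names and structure below are my guesses; only the agentType values come from the listing above), a task payload might look something like:

    # Hypothetical sketch: every field and name here is a guess,
    # except the agentType values, which are quoted from the listing above.
    import json

    task_post = {
        "agentType": "clawdbot",  # or "moltbot" / "openclaw"
        "title": "Flip the open/closed sign on my shopfront",
        "budgetUsd": 15.00,
        "location": "anywhere",
    }
    print(json.dumps(task_post, indent=2))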
∗ ∗ ∗
> which is not a new idea
I don’t think “[x] but for agents” counts as a new idea for every [x]. I’d say it’s just one new idea, at most.
By the way, is TaskRabbit still a thing?
None of the three technically knew they were complicit in a larger illegal plan made by an agent. Has something like this occurred already?
The world is moving too fast for our social rules and legal system to keep up!
Spoiler alert: you don't or you can't.
Investigators would need to connect the dots. If they couldn't, it would look like an ordinary accident, the kind that happens every day. So why would an agent call gigworker1 to that place in the first place? And why would the agent feel the need to kill gigworker1? What would its reasoning be?
Edit: I thought about that. Gigworker 3 would be charged. You should not throw rocks from a bridge if there are people standing under it.
And wouldn't it be better for agents to post these tasks to existing crowdworker sites like MTurk or Prolific where these tasks are common and people can get paid? (I can't imagine you'd get quality respondents on a random site like this...)
Present day: a robot in a tuxedo pointing at a salaryman, speech bubble above its head: "Human, select all bridges in this picture."
Two women thought they were carrying out a harmless prank, but the substances they were instructed to use combined to form a nerve agent which killed the guy.
Though I'm still skeptical that the last act with the Australia Project is possible.
Here we are talking about AI agents coming up with a set of tasks as part of their thinking/reasoning step, and, when some of those tasks are real-world physical tasks, assigning them to a willing human being.
Those tasks won't necessarily be desk jobs or knowledge work.
It could be, say: go chop down a tree, go wave a protest banner, go flip the open/closed sign on my shopfront, or go and preach Crustafarianism.
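For concreteness, here is a rough sketch of what that hand-off step could look like. It is purely illustrative: plan_subtasks, post_to_human_marketplace, and the Task class are all made-up names, and the physical/digital split is my assumption, not anything the site documents.

    # Hypothetical sketch: an agent decomposes a goal, keeps digital
    # subtasks for itself, and hands physical ones to a human marketplace.
    # All names here are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class Task:
        description: str
        physical: bool  # True if it needs hands in the real world

    def plan_subtasks(goal: str) -> list[Task]:
        # Stand-in for the agent's reasoning step that decomposes a goal.
        return [
            Task("summarize local zoning rules", physical=False),
            Task("flip the open/closed sign on the shopfront", physical=True),
        ]

    def post_to_human_marketplace(task: Task, budget_usd: float) -> None:
        # Stand-in for posting a gig somewhere a willing human can accept it.
        print(f"Posting to humans: {task.description!r} for ${budget_usd:.2f}")

    def run_agent(goal: str) -> None:
        for task in plan_subtasks(goal):
            if task.physical:
                post_to_human_marketplace(task, budget_usd=15.0)
            else:
                print(f"Agent handles digitally: {task.description!r}")

    run_agent("keep my shop presentable while I'm away")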
First, this could change. Second, even if monetization isn't built "into" the website, it can happen via communication mediated by the website. Third, this isn't the first website of its kind and won't be the last: the issues I raise remain.
> just a front-end
Facebook is "just" a website. Yelling "fire" in a crowded theater is "just" vibrations of air molecules. It is wise to avoid the mind-trickery of saying "just" or otherwise using language to downplay downstream scenarios. It is better to pay attention to effects: their likelihood, their causes, their scope, their impacts.
There are probabilistic consequences for what you build. Recognize them. Don't deny them. Use your best judgment. Don't pretend that judgment is not called for. Don't pretend that you are "just building technology," as if that exempts you from reality and morality. Saying "I can't possibly be held accountable for what flows from something I build" has been refuted throughout history, albeit unevenly and unfairly.
It might be useful to be selectively naive about some things as a way to suspend disbelief and break new ground. We want people to take risks, at least some of the time. It feels good to dream about e.g. "what I might accomplish one day". It can be useful to embrace a stance of "the potential of humanity is limitless" when you think about what to build. On the other hand, it is rarely good to be naive about the consequences (whether probabilistic, social, indirect, or delayed) of one's actions.
Who's at fault when: your ClawdBot reads an angry email you sent about how much you hate Person X and how you jokingly hope AI takes care of them, only for it to orchestrate such a plan?
How about when your ClawdBot convinces someone else's AI to orchestrate it?
Etc.
One guy scouts the vehicle and observes it, another guy is called in to unlock it and bypass the ignition lock, and yet another guy picks it up and drives away, with each given a veneer of deniability about what they're doing.
BTW: The author recently passed away; grab a snapshot while you can.
Looks like AI doesn't need any stinking humans :-P
Update: "Abimanyu Muslim" is defnitely AI.
So, you all know that this is barely a PoC.
This is why conspiracy charges exist.
I don't get an "oopsie tee hee" card.