
[return to "Governance of Superintelligence"]
1. 0xbadc+ya 2023-05-22 18:31:08
>>davidb+(OP)
> Given the picture as we see it now, it’s conceivable that within the next ten years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations.

This is the most ridiculous thing I've ever heard claimed about AI. They finally have a crappy algorithm that can sound confident even though over half its answers are complete bullshit, and with that accomplishment in hand they now expect that within 10 years it will do any job better than an expert. This is some major-league swing-for-the-fences bullshit.

And that's not even the worst part. Let's say the fancy algorithm has real pixie dust that magically gives better-than-expert answers to literally any question. It's still up to the human to ask the questions. How much do you want to bet that some police force will use the AI by submitting a random picture of a young black male suspect and asking, "What is the likelihood this person has committed a crime, and what was the crime?" The AI just interprets the question and answers it, and the human just accepts the answer, even though the premise is ridiculous.

We won't create a genuinely intelligent AI any time soon. But even if we did, whether the AI is perfect isn't the problem. The problem is the stupid humans using it. You can't "design", regulate, or govern your way out of human stupidity.

2. Animal+Uo 2023-05-22 19:51:11
>>0xbadc+ya
More cynically, this could be read as a claim that half of the experts' answers are also BS.

Does anyone feel confident in their ability to disprove that one?
