This is the most ridiculous thing I've ever heard claimed about AI. They finally have a crappy algorithm that can sound confident even when over half its answers are complete bullshit. On the strength of that accomplishment, they now expect that in 10 years it will be able to do any job better than an expert. This is some major-league, swing-for-the-fences bullshit.
And that's not the worst part. Let's say the fancy algorithm really has pixie dust that just magically gives better-than-expert answers to literally any question. That still leaves a human to ask the questions. How much do you want to bet some police force will use AI by submitting a random picture of a young black male suspect and asking, "What is the likelihood this person has committed a crime, and what was the crime?" The AI just interprets the question and answers it, and the human just accepts the answer, even though the premise is ridiculous.
We won't create a truly intelligent AI any time soon. But even if we did, whether the AI is perfect isn't the problem. The problem is the stupid humans using it. You can't "design", regulate, or govern your way out of human stupidity.
November 30, 2022: ChatGPT launches, two to three months after GPT-4's internal training had finished.
That's quite a lot of progress in 10 years. Yet somehow you're convinced nothing similarly groundbreaking will happen in the next 10.
The electric car was born about 180 years ago, and about 105 years ago electric cars were the most popular cars on the road. They're finally becoming popular again, but they're still dwarfed by internal-combustion vehicles.
Progress isn't linear, and the last 10% takes 90% of the effort. Without sustained, focused effort, the work becomes monotonous and innovation grinds to a halt. It's not that we can't do groundbreaking work; it's that it's much harder and more expensive than we'd like.