The most obviously "lethal" case (cars) is already in large-scale rollout worldwide.
At scale, self-driving car "errors" will most likely fall under general liability insurance coverage. Firms will probably carry additional insurance as well, just in case.
LLMs already write better prose than 95% of humans, and models like o3 reason better than 90% of humans on many tasks.
In both law and medicine, many safeguards already exist to reduce error rates among human practitioners (checklists, research tools like LexisNexis and UpToDate, continuing education, etc.), and these can be applied to AI professionals too.
Except lawyers are ~0.4%[1] of the population in the United States, so that 95% isn't very impressive.
[1] https://www.americanbar.org/news/profile-legal-profession/de...
I think the mistake people make is misjudging the slope of the S-curve while quibbling over the exact nature of the current reality. AI is moving very fast. A few years ago I'd have said that at most 25% of legal work could fall to AI.
Note that this massive change happened in less time than it takes to educate one class of law school grads!
Writing good prose is a very different skill from coming up with a compelling and innovative plot and style.
As a data point, OpenAI now blocks o3 from doing the "continue where the story left off" test on works of fiction. It says "Sorry, I can't do that".