they're acquiring one of the biggest front doors to developers, with Windsurf - whether it'll _remain_ in fashion or not, that's a different debate. This could be like Facebook acquiring Instagram (if developers turn out to be the actual profit-driver niche for LLMs, which currently seems to be the case)
AI is definitely huge for anyone writing code, though one can imagine a model like o3 completely replacing 90% of white-collar jobs that involve reading, writing, and analysis.
Interestingly, o3 is particularly bad at legalese, likely not entirely by accident. Of all the professions whose professional organizations and regulatory capture create huge rents, the legal profession is the most ripe for disruption.
It's not uncommon for lawyers to bill $250 to $500 per hour for producing boilerplate language. Contracts reviewed or drawn up by lawyers never come with any guarantees either, so one does not learn until it's too late that the lawyer overlooked something important. Most lawyers have above-average IQs and understand arcane things, but most of the work is pretty basic at its core.
Lawyers, pharmacists, many doctors, nearly all accountants, and most middle managers will be replaceable by AI agents.
Unlike those other fields, software engineers are still expected to produce novel outputs, so there is still room for humans to pilot the machine for a while. And since most software is meant to be used by humans, software will soon need to be usable by AI agents too, which will reduce a lot of UIs to an MCP.
Honestly, the same goes for doctors and accountants - unless these model providers are willing to offer "guarantees" that they will compensate for damages caused by their output.
Doctors and lawyers are required in many jurisdictions to carry malpractice insurance. Good luck getting "hot new AI legal startup" to sign off on that.
The most obviously "lethal" case (cars) is already in large scale rollout worldwide.
At scale, self-driving car "errors" will fall under general liability insurance coverage, most likely. Firms will probably carry some insurance as well just in case.
LLMs already write better prose than 95% of humans and models like o3 reason better than 90% of humans on many tasks.
In both law and medicine there are many pre-existing safeguards designed to reduce error rates for human practitioners - checklists, search tools (LexisNexis, UpToDate, etc.), continuing education, and so on - which can be applied to AI professionals too.
Wake me up when there’s any evidence of this whatsoever. Pure fantasy.
That's how we will get to $20,000/month agents.
Except lawyers are ~0.4%[1] of the population in the United States, so beating 95% of humans isn't very impressive
[1] https://www.americanbar.org/news/profile-legal-profession/de...
I think the mistake people make is misunderstanding the slope of the S-curve and instead quibbling over the exact nature of the current reality. AI is moving very fast. A few years ago I'd have said that at most 25% of legal work could fall to AI.
Note that this massive change happened in less time than it takes to educate one class of law school grads!
Writing good prose is a very different skill from coming up with a compelling, innovative plot and style.
As a data point, OpenAI now blocks o3 from doing the "continue where the story left off" test on works of fiction. It says "Sorry, I can't do that".