The main reason to worry, though, is not the proprietary monetization of "AI" algorithms. Just as it was not an algorithm (PageRank) but the invention of adtech that spawned surveillance capitalism, here too the main question is what sort of "disruption" this tech can facilitate: which social contract will be violated in order to "create value".
"Success" in "tech" has long been predicated on the absence of any regulation, pushback, or controls when applying software technology to social / economic spheres that previously operated under different moral conventions. All in the name of "not stifling innovation".
Ironically, our main protection may be that we now live in a "scorched Earth" environment: the easy disruptions are done, and "tech innovation" is bumping up against domains (finance, medical) that are "sensitive".
I think we're already getting a taste of this with copyrighted but publicly accessible works being fed into the training step of AI models. The economic benefits of that training accrue to the models' owners, while the creators of the training data have to pay for access.
Since AI models seem to improve with more training data, I expect AI companies to come for ostensibly private data next. Microsoft is particularly well positioned here: it has already acclimatized its user base to transmitting an endless stream of telemetry, and it owns the dominant desktop/laptop OS.