Current AIs often do a bad job of that. Sure, they know a lot of it. But they also get a lot of it wrong, and they can’t tell the difference between genuinely good advice and advice that sounds good but is practically worthless or even harmful.
(Of course I’m biased, since I work for a SaaS firm. But I’m talking about these systems in general, not just my current employer’s.)
From what I’ve personally seen in SaaS AI agent development: if you try to build an AI agent to give customers advice in a particular business domain, you need to do a huge amount of work validating answer quality with actual domain experts, and adjusting the prompts, RAG documents, tool design, and so on until it gives genuinely useful advice. It is really easy to build a system whose output sounds superficially good, but that an actual domain expert will consider wrong or worthless.
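To make that validation loop concrete, here is a minimal sketch of what I mean, with entirely hypothetical names and stand-in functions (the real agent and the real expert reviews would obviously be far more involved): run the agent over a fixed question set, collect an expert verdict per answer, and track the fraction an expert would actually call useful.

```python
# Hypothetical sketch of an expert-validation loop for an advice-giving agent.

def agent_answer(question: str) -> str:
    # Stand-in for the real agent (prompts + RAG documents + tools).
    return f"Advice for: {question}"

def expert_review(question: str, answer: str) -> bool:
    # Stand-in for a real domain expert's verdict (True = genuinely useful).
    # Here we just pretend that any pricing question fails review.
    return "pricing" not in question

def useful_fraction(questions: list[str]) -> float:
    # Score the agent across the whole question set.
    verdicts = [expert_review(q, agent_answer(q)) for q in questions]
    return sum(verdicts) / len(verdicts)

questions = [
    "how do I onboard a new team?",
    "what pricing tier fits us?",
]
print(useful_fraction(questions))  # prints 0.5
```

The point of tracking a number like this over time is that prompt or RAG changes which make the output *sound* better often leave this expert-judged fraction flat, which is exactly the gap between superficially plausible and genuinely useful.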