They just hadn't -- and still haven't -- figured out how to commercialize it yet. I don't think they'll be the ones to crack that nut either. IMO they are too obsessed with "safety" to release something useful, and also can't reasonably deploy a service like ChatGPT at their scale because the costs are too high.
With OpenAI imploding, this whole race just got a lot more interesting though...
Bard was likely not trained on copyrighted data, which makes it safe from lawsuits but also removes most of the use cases people want ChatGPT for.
And it isn't just about lawsuits: Google needs to keep advertisers happy or they'll leave, like they left Elon Musk, so it can't afford to jeopardise that with questionable launches.
https://innovationorigins.com/en/openai-and-googles-bard-acc...
For very profitable things. This isn't very profitable, which is why I added that part to my comment. Google has a very good understanding of what they get sued for and how much those lawsuits cost; if it is profitable anyway, they go ahead.
Scaling of training was the challenge back then (of course).
Google was already too corporate. Please remember that Sergey Brin and Larry Page were no longer at the steering wheel back then. I have been told that it was also a cultural issue linked to "delivering brilliance". Simplifying: Google promoted tiny teams or individual contributors building things that had to become a massive success quickly. OpenAI took a number of hand-picked brilliant people and let them work together on a common goal, quietly, for quite some time.
Some companies just have an unfair advantage. A certain magic. And OpenAI's magic is at risk right now.