In relation to other comments here: there is "coding" and there is the "God's spark" kind of algorithmic genius work. That is what made the magic of OpenAI. Believe me, those guys were not "just coding". My bet is that this could be all about some research directions that were "shielded" by Sam.
As far as I can tell, all three of them are of Polish descent. For all we know, they might have decided to resign together even if only one of them had a personal issue with OpenAI's vision. We will find out soon enough whether or not they just found their own competing startup based on OpenAI's "secret sauce".
Nothing to do with being Polish in particular. Only that there is a connecting element that might help explain why these 3 decided to resign together on the same day.
I really don't buy that for a second. Most of OpenAI's value compared to any competitor comes from the money they spent hiring humans to trawl through training data.
They just hadn't -- and still haven't -- figured out how to commercialize it yet. I don't think they'll be the ones to crack that nut either. IMO they are too obsessed with "safety" to release something useful, and also can't reasonably deploy a service like ChatGPT at their scale because the costs are too high.
With OpenAI imploding, this whole race just got a lot more interesting though...
Bard was likely not trained on copyrighted data, which makes it safer from lawsuits but also removes most of the use cases people want ChatGPT for.
And it isn't just about lawsuits: Google needs to keep advertisers happy or they'll leave, like they left Elon Musk, so they can't afford to jeopardise that with questionable launches.
https://innovationorigins.com/en/openai-and-googles-bard-acc...
For very profitable things. This isn't very profitable, which is why I added that part to my comment. Google has a very good understanding of what they get sued for and how much those lawsuits cost; if it's profitable anyway, they go ahead.
In short, they either didn't, or were unable to, create a favorable enough environment for this to flourish.
Scaling of training was the challenge back then (of course).
Google was already too corporate. Please remember that Sergey Brin and Larry Page were no longer at the steering wheel back then. I have been told that it was also a cultural issue linked to "delivering brilliance". Simplifying: Google promoted tiny teams or individual contributors building things that had to become a massive success quickly. OpenAI took a number of hand-picked brilliant people and let them work together on a common goal, silently, for quite some time.
Some companies just have an unfair advantage. A certain magic. And OpenAI's magic is at risk right now.