Plugins were a failure. GPTs are a little better, but I still don't see the product-market fit. GPT-4 is still king, but not by that much anymore. It's not even clear that they're doing great research, because they don't publish.
GPT-5 has to be incredibly good at this point, and I'm not sure that it will be.
As its popularity has exploded and ethical questions have become increasingly relevant, it is probably worth taking the time to nail certain aspects down before releasing everything to the public for the sake of being first.
The Altman saga, allowing military use, and other small things tarnish your reputation step by step and push you toward mediocrity or worse.
Microsoft has many great development stories (read Raymond Chen's blog to be awed), but what they ultimately did to competitors, and how they continue to behave, removed their luster, permanently for some people.
That would actually increase their standing in my eyes.
Not too far from where I live, Russian bombing is destroying the homes of people whose language is similar to mine and whose "fault" is that they don't want to submit to rule from Moscow, direct or indirect.
If OpenAI can somehow help stop that, I am all for it.
And, according to the UN, Turkey has used AI-powered, autonomous loitering drones to hit military convoys in Libya [1].
Regardless of us vs. them, AI shouldn't be a part of warfare, IMHO.
[0]: https://www.theguardian.com/world/2023/dec/01/the-gospel-how...
[1]: https://www.voanews.com/a/africa_possible-first-use-ai-armed...
Nor should nuclear weapons, guns, knives, or cudgels.
But we don’t have a way to stop them being used.
The second this tech was developed, it became literally impossible to stop this from happening. It was a totally foreseeable consequence, but the researchers involved didn't care: they wanted to be successful and figured they could just try to blame others for the consequences of their actions.
Such an absurdly reductive take. How about this: just like nuclear energy and knives, these are incredibly useful, society-advancing tools that can also be used to cause harm. It's not as if AI can only be used for warfare. And like pretty much every technology, it ends up being used 99.9% for good and 0.1% for evil.