OpenAI has achieved marvellous things, but the pivot, and the long-standing refusal to deal with it honestly, leave an unpleasant taste and don't bode well for the future, especially given the enormous ethical implications of holding the advantage in the field they are leading.
They don't have a moat so big that many millions of dollars can't overcome it.
It surely will have huge blind spots (as people do), but perhaps it will be good enough for self-improvement... or will be soon.
It can't just "self-improve towards general intelligence".
What's the fitness function of intelligence?
Can ChatGPT evaluate how good ChatGPT-generated output is? This seems prone to amplifying its blind spots, but OTOH creating and criticising are different skills, and criticising is usually easier.
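To make that generator-vs-critic question concrete, here's a minimal sketch of the loop being discussed; generate(), critique() and self_refine() are hypothetical placeholders for model calls, not any real API, and the threshold is a stand-in for the missing fitness function.

    # Minimal sketch of a self-critique loop. generate() and critique()
    # are hypothetical stand-ins for calls to the same model; the score
    # and feedback values below are made-up placeholders.
    def generate(prompt: str) -> str:
        return "draft answer"  # placeholder for a model completion

    def critique(prompt: str, draft: str) -> tuple[float, str]:
        return 0.5, "too vague"  # placeholder score and feedback from the same model

    def self_refine(prompt: str, rounds: int = 3) -> str:
        draft = generate(prompt)
        for _ in range(rounds):
            score, feedback = critique(prompt, draft)
            if score >= 0.9:  # arbitrary "good enough" cutoff: this is the fitness-function problem
                return draft
            # fold the critic's feedback into the next attempt
            draft = generate(f"{prompt}\n\nPrevious draft:\n{draft}\n\nFix this: {feedback}")
        return draft

The worry raised above is that both functions share one model's blind spots, so the loop can converge on drafts the critic likes for the wrong reasons.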