Either way, if it's enough to convince them both that they're better off doing research outside the opportunities and data access that OpenAI provides, I don't see a scenario where this doesn't indicate a significant shift in OpenAI's commitment to superalignment research and safety. One hopes that Microsoft's interest in brand integrity incentivizes at least a modicum of continued commitment to safety research.
Besides the infamous Tay, there was that apparently un-aligned WizardLM-2 (or something like that) model of theirs, which got released by mistake for about 12 hours.
We can’t just drop papers on arXiv. There is no way running your own Twitter, GitHub, etc. as a separate group would be allowed.
I checked fairly recently to see whether the model was ever actually re-released; it doesn’t seem to have been. I find that telling.
Tay was trivially racist, but boy was Sydney a wacko.