To be frank, they really need to spell out what "benefitting mankind" means. How is it measured? Is it measured at all? Or is it just "the board says this isn't doing that, so it's not doing that"?
It's honestly a silly slogan.
- Not limiting access to a universally profitable technology by making it accessible only to the highest bidder (e.g. "hire our virtual assistants for $30k a year").
- Making models with an eye to all classes of threat (existential risk, job replacement, use in scams)
- Potentially open-sourcing models that are deemed safe
So far I genuinely believe they are doing the first two, and are leaving billions on the table that they could capture by jacking their prices up 10x or more.
If a model is not safe, access to it should be limited in general.
Or, from a business-model perspective: a 'sane' nonprofit doing what OpenAI does should, at least in my mind, be able to do the following harmoniously:
1. Release new models that do the same things they offer access to via their 'products', with reasonable instructions on how to run them on-prem (i.e. I'm not saying everything they do has to be fully runnable on a single local box, but it should be reproducible, as befits a nonprofit purportedly geared towards research).
2. Provide online access to those models with a cost model that lets others use them while still funding the foundation.
3. Provide enough overall value in what they do that outside parties invest regardless of whether they are guaranteed a specific individual return.
4. Not allow potentially unsafe models to be made available through anything less than both research branches.
Perhaps, however, I am too idealistic.
On the other hand, Point 4 is important, because under the current model we can never know whether a previously unsafe model has been truly 'patched' across all its variations.
And if a given model would violate Point 4, I do not trust the current org to properly disclose the gaps it finds; it is easier for them to quietly patch the UI and intermediate layers than to ask whether a fix can be worked around with different wording.