to11mt+ (OP) | 2023-11-20 01:28:14
That's kinda weasel-y in itself.

If a model is not safe, access to it should be limited across the board.

Or, from a business-model perspective: a 'sane' nonprofit doing what OpenAI does should, at least in my mind, be able to do all of the following harmoniously:

1. Release new models that do the same thing they let others access via their 'products', with reasonable instructions on how to run them on-prem (i.e., I'm not saying everything they do has to be fully runnable on a single local box, but it should be reproducible by a nonprofit purportedly geared towards research).

2. Provide online access to models with a cost model that lets others use them while furthering the foundation's mission.

3. Provide enough overall value in what they do that outside parties invest regardless of whether they are guaranteed a specific individual return.

4. Not allow potentially unsafe models to become available through anything less than both research branches.

Perhaps, however, I am too idealistic.

On the other hand, Point 4 is important because, under the current model, we can never know whether a previously unsafe model has been truly 'patched' across all of its variations.

And if a given model were to violate Point 4, I do not trust the current org to properly disclose the gaps it finds; it is easier to quietly patch the UI and intermediate layers than to ask whether a fix can be worked around with different wording.
