zlacker

[return to "OpenAI negotiations to reinstate Altman hit snag over board role"]
1. jasonh+4t 2023-11-19 22:52:33
>>himara+(OP)
This is why, when you claim to be running a non-profit to "benefit humankind," you shouldn't put all your resources into a for-profit subsidiary. Eventually, the for-profit arm, and its investors, will find its nonprofit parent a hindrance, and an insular board of directors won't stand a chance against corporate titans.
2. silenc+DN 2023-11-20 00:53:20
>>jasonh+4t
> This is why, when you claim to be running a non-profit to "benefit humankind," you shouldn't put all your resources into a for-profit subsidiary.

To be frank, they really need to spell out what "benefiting humankind" means. How is it measured? Is it measured at all? Or is it just "the board says this isn't doing that, so it's not doing that"?

It's honestly a silly slogan.

3. zug_zu+FO 2023-11-20 01:01:04
>>silenc+DN
They should define it, sure. Here's what I'd expect it to mean:

- Not limiting access to a universally beneficial technology by making it accessible only to the highest bidder (e.g. "hire our virtual assistants for $30k a year").

- Making models with all classes of threat in mind (existential risk, job replacement, use in scams)

- Potentially open-sourcing models that are deemed safe

So far I genuinely believe they are doing the first two, and leaving billions on the table that they could capture by jacking their prices up 10x or more.

4. to11mt+DS 2023-11-20 01:28:14
>>zug_zu+FO
That's kinda weasel-y in itself.

If a model is not safe, access to it should be limited across the board.

Or, from a business-model perspective: a 'sane' nonprofit doing what OpenAI does should, at least in my mind, be able to do all of the following harmoniously:

1. Release new models that do the same things they give others access to via their 'products', with reasonable instructions on how to run them on-prem (i.e. I'm not saying what they do has to be fully runnable on a single local box, but it should be reproducible by a nonprofit purportedly geared towards research).

2. Provide online access to models with a cost model that lets others use them while furthering the foundation.

3. Provide enough overall value in what they do that outside parties invest regardless of whether they are guaranteed a specific individual return.

4. Not allow potentially unsafe models to be available through anything less than both research branches.

Perhaps, however, I am too idealistic.

On the other hand, Point 4 is important because, under the current setup, we can never know whether a previously unsafe model has truly been 'patched' across all its variations.

And if a given model did violate Point 4, I wouldn't trust the current org to properly disclose the gaps it found; it's easier to quietly patch the UI and intermediate layers than to ask whether a fix can be worked around with different wording.
