zlacker

[parent] [thread] 5 comments
1. silenc+(OP)[view] [source] 2023-11-20 00:53:20
> This is why, when you claim to be running a non-profit to "benefit humankind," you shouldn't put all your resources into a for-profit subsidiary.

To be frank, they really need to spell out what "benefitting humankind" means. How is it measured? Is it measured at all? Or is it just "the board says this isn't doing that, so it's not doing that"?

It's honestly a silly slogan.

replies(2): >>insani+w >>zug_zu+21
2. insani+w[view] [source] 2023-11-20 00:58:10
>>silenc+(OP)
> "the board says this isn't doing that so it's not doing that"?

I believe that is indeed the case; it is the board's responsibility to make that call.

3. zug_zu+21[view] [source] 2023-11-20 01:01:04
>>silenc+(OP)
They should define it, sure. Here's what I'd expect this means:

- Not limiting access to a universally profitable technology by making it accessible only to the highest bidder (e.g. "hire our virtual assistants for $30k a year").

- Making models with a mind to all threats (existential risk, job replacement, use in scams)

- Potentially open-sourcing models that are deemed safe

So far I genuinely believe they are doing the first two, and leaving billions on the table that they could capture by jacking their prices 10x or more.

replies(2): >>mlyle+J3 >>to11mt+05
4. mlyle+J3[view] [source] [discussion] 2023-11-20 01:19:42
>>zug_zu+21
If they jack up prices, they leave the door wide open for other entrants.

Right now, OpenAI mostly has a big cost advantage; fully exploiting that requires lower pricing and high volume.

replies(1): >>robren+O5
5. to11mt+05[view] [source] [discussion] 2023-11-20 01:28:14
>>zug_zu+21
That's kinda weasel-y in itself.

If a model is not safe, access to it should be limited in general.

Or, from a business-model perspective: a 'sane' nonprofit doing what OpenAI does should, at least in my mind, be able to do all of the following harmoniously:

1. Release new models that do the same thing they give others access to via their 'products', with reasonable instructions on how to run them on-prem (i.e. I'm not saying what they do has to be fully runnable on a single local box, but it should be reproducible by a nonprofit purportedly geared towards research).

2. Provide online access to models with a cost model that lets others use them while furthering the foundation.

3. Provide enough overall value in what they do that outside parties invest regardless of whether they are guaranteed a specific individual return.

4. Not allow potentially unsafe models to be available through anything less than both research branches.

Perhaps, however, I am too idealistic.

On the other hand, Point 4 is important because, under the current model, we can never know whether a previously unsafe model has been truly 'patched' across all variations of that model.

And if a given model would violate Point 4, I do not trust the current org to properly disclose the gaps it finds; it is easier to quietly patch the UI and intermediate layers than to ask whether a fix can be worked around with different wording.

6. robren+O5[view] [source] [discussion] 2023-11-20 01:34:47
>>mlyle+J3
From my time working on search-related problems at Google, this might be a bit of a winner-take-most market. If you have the users, your system can more effectively learn how to do a better job for them. The interaction data generated is excludable gold: merely knowing how hundreds of millions of people use chatbots is incredibly powerful, and if the company keeps being the clear and well-known best, it's easy to stay the best, because the learning system has more high-quality things to learn from.

While Google did do a good job milking knowledge from its query and interaction data and improving with it, OpenAI surely knows how to get even more out of high-quality textual data.

OpenAI made an interface where you can just use your own natural language; it didn't make you learn its own pool of bastardized keyword-jargon quasi-command language. It's way more natural.
