zlacker

[return to "We have reached an agreement in principle for Sam to return to OpenAI as CEO"]
1. ugh123+ls[view] [source] 2023-11-22 09:29:12
>>staran+(OP)
In light of this weekend's events, and the more I've learned about OpenAI's beginnings and purpose, I now believe the company isn't necessarily driven by a "for profit" motive; rather, the original intention to create AI that "benefits humanity" is in full play now through a commercialized ChatGPT, and possibly further leveraged through "GPTs" and their evolution.

Is this the "path" to AGI? Who knows! But it is a path to benefitting humanity as probably Sam and his camp see it. Does Ilya have a different plan? If he does, he has a lot of catching up to do while the current productization of ChatGPT and GPTs continue marching forward. Maybe he sees a great leap forward in accuracy in GPT-5 or later. Or maybe he feels LLMs aren't the answer and theres a completely new paradigm on the horizon. Regardless, they still need to answer to the fact that both research and product need funds to buy and power GPUs, and also satisfy the MSFT partnership. Commercialization is their only clear answer to that right now. Future investments will likely not stray from this approach, else they'll fund rivals who are more commercially motivated. Thats business.

Thus, I'm all in on this commercially motivated, humanity-benefiting GPT product. Let the market take OpenAI's LLMs wherever it needs/wants them to go. Exciting things may follow!

2. tkgall+hw[view] [source] 2023-11-22 10:03:02
>>ugh123+ls
In addition to commercialization providing money for AI development, isn't there also the argument that prudent commercialization is the best way to test the models for possible dangers? I think I saw Mira Murati take that position in an interview. In other words, creating a product that people want to use so much that they are willing to pay for it is a good way to stress-test the product.

I don't know if I agree, but the argument did make me think.

3. kuchen+wd1[view] [source] 2023-11-22 14:47:59
>>tkgall+hw
Additionally, when you have a pre-release product that has largely passed small and artificial tests, you get diminishing returns on continued testing.

Eventually you need to expand, despite some risk, to push the testing forward.

Everyone has a different opinion on what level of safety AI should reach before it's released. "Makes no mistakes" and "never says something mean" are not attainable goals, unlike measurable ones such as "reduce the rate of hallucinations, as defined by x, to <0.5% of total responses" or "given a set of known and imagined scenarios, the new model continues to have a zero false-negative rate".

When it's an engineering problem we're trying to solve, we can make progress, but no company can avoid all forms of harm as defined by everyone.
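
To make that concrete, here's a minimal sketch of what a release gate built on those two measurable criteria might look like. The metric names, thresholds, and result format are all made up for illustration; this isn't any real eval harness or OpenAI API.

    # Hypothetical release-gate check. Field names and thresholds are
    # illustrative only, not any real evaluation pipeline.

    def passes_release_gate(eval_results: dict) -> bool:
        # eval_results is assumed to look like:
        # {"responses": 20000, "hallucinations": 87, "false_negatives": 0}
        hallucination_rate = eval_results["hallucinations"] / eval_results["responses"]
        return (
            hallucination_rate < 0.005              # "<0.5% of total responses"
            and eval_results["false_negatives"] == 0  # zero false negatives on known scenarios
        )

    print(passes_release_gate(
        {"responses": 20_000, "hallucinations": 87, "false_negatives": 0}
    ))  # True: 0.435% hallucination rate and no false negatives

The point is just that the second pair of goals can be checked mechanically against eval results, whereas "never says something mean" can't.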

[go to top]