zlacker

[parent] [thread] 39 comments
1. stubis+(OP)[view] [source] 2024-03-02 01:00:33
So much of the discussion here is about being a non-profit, but per your quote I think the key is open source. Here we have people investing in an open source company, and the company never opened their source. Rather than open sourcing technology everyone could profit from, they kept everything closed and sold exclusive access. I think it is going to be hard for OpenAI to defend their behavior, and there is a huge amount of damages to be claimed for all the money investors had to spend catching up.
replies(4): >>tracer+S >>richar+85 >>HarHar+Bh1 >>orena+as2
2. tracer+S[view] [source] 2024-03-02 01:11:15
>>stubis+(OP)
It says "will seek to open source technology for the public benefit when applicable" they have open sourced a number of things, Whisper most notably. Nothing about that is a promise to open source everything and they just need to say it wasn't applicable for ChatGPT or DallE because of safety.
replies(3): >>stubis+C4 >>thayne+j8 >>canjob+Uj
◧◩
3. stubis+C4[view] [source] [discussion] 2024-03-02 01:53:19
>>tracer+S
I doubt the safety argument will hold up in court. Anything safe enough to allow Microsoft or others access to would be safe enough to release publicly. Our AI overlords are not going to respect an NDA. And for the public safety/disinformation side of things, I think it is safe to say the cat is out of the bag and chasing the horse that has bolted.
replies(4): >>sanxiy+a6 >>bandya+5b >>tintor+Ke >>WhatIs+CZ
4. richar+85[view] [source] 2024-03-02 01:59:56
>>stubis+(OP)
I might be too generous, but my interpretation is that the ground changed so fast that they needed to shift course to continue the mission given the new reality. After ChatGPT, every for-profit and its dog is going hard. Talent can join the only Mother Teresa in the middle, or compete with them as they stupidly open all the source the second they discover anything. You can’t compete with the biggest labs in the world, who have infinite GPUs, using selfless open sourcers running training on their home PCs. And you need to be in the game to have any influence over the eventual direction. I’d still bet the goal is the same, but how it’s done has changed by necessity.
replies(5): >>ethbr1+u7 >>_heimd+Y7 >>bmitc+3n >>aorlof+Hp >>stubis+hr
◧◩◪
5. sanxiy+a6[view] [source] [discussion] 2024-03-02 02:11:17
>>stubis+C4
I am unsure. You can't (for example) fine tune over the API. Is anything safe for Microsoft to fine tune really safe for Russia, the CCP, etc. to fine tune? Open weight models (which I think is a more accurate term than open source here) enable both many more actors and many more actions than the status quo.
replies(1): >>pclmul+9d
◧◩
6. ethbr1+u7[view] [source] [discussion] 2024-03-02 02:29:35
>>richar+85
> And you need to be in the game to have any influence over the eventual direction

Effective altruism, eh?

replies(1): >>richar+fh
◧◩
7. _heimd+Y7[view] [source] [discussion] 2024-03-02 02:35:47
>>richar+85
Unless the charter leaves room for such a drastic pivot, I'm not sure how well this would hold up. Whether the original charter is binding is up for lawyers to debate, but as written it seems to spell out the mission clearly and with little wiggle room for interpretation. Maybe they could go after the definition of when open sourcing would benefit the public?
replies(1): >>lumost+ze
◧◩
8. thayne+j8[view] [source] [discussion] 2024-03-02 02:39:36
>>tracer+S
I think that position would be a lot more defensible if they weren't giving another for-profit company access to it. And there is definitely a conflict of interest when not revealing the source gives them a competitive advantage in selling their product. There's also the question: if the source is too dangerous to make public, how can they be sure the final product is safe? An argument could be made that it isn't.
replies(1): >>thepti+ib
◧◩◪
9. bandya+5b[view] [source] [discussion] 2024-03-02 03:15:18
>>stubis+C4
If the above statement is the only “commitment” they’ve made to open-source, then that argument won’t need to be made in court. They just need to reference the vague language that basically leaves the door open to do anything they want.
◧◩◪
10. thepti+ib[view] [source] [discussion] 2024-03-02 03:18:49
>>thayne+j8
It’s easy to defend this position.

It is safer to operate an AI in a centralized service, because if you discover dangerous capabilities you can turn it off or mitigate them.

If you open-weight the model and dangerous capabilities are later discovered, there is no way to put the genie back in the bottle; the weights are out there, and anyone can use them.

This of course applies both to mundane harms (e.g. generating deepfake porn of famous people) and to existential risks (e.g. power-seeking behavior).

replies(2): >>Walter+om >>isaacf+vO
◧◩◪◨
11. pclmul+9d[view] [source] [discussion] 2024-03-02 03:44:47
>>sanxiy+a6
You can fine tune over the API. Also, Russia and the CCP likely have the model weights. They probably have spies in OpenAI or Microsoft with access to the weights.
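For what it's worth, fine tuning over the API is a documented workflow today. A minimal sketch using OpenAI's Python SDK (the training file name and base model below are placeholders, not a recommendation):

    # Minimal sketch of fine tuning over the API (OpenAI Python SDK v1.x).
    # "train.jsonl" and the base model name are placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Upload a JSONL file of {"messages": [...]} chat-format examples.
    training_file = client.files.create(
        file=open("train.jsonl", "rb"),
        purpose="fine-tune",
    )

    # Start a fine-tuning job against a hosted base model.
    job = client.fine_tuning.jobs.create(
        training_file=training_file.id,
        model="gpt-3.5-turbo",
    )
    print(job.id, job.status)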
replies(3): >>Wander+Qj >>HeavyS+ru >>petre+K71
◧◩◪
12. lumost+ze[view] [source] [discussion] 2024-03-02 04:05:28
>>_heimd+Y7
Another possibility is that they claim they spent the non-profit funds prior to going for-profit? It would be dubious to claim damages if the entity was effectively bankrupt prior to the for-profit's creation.
replies(1): >>_heimd+7g
◧◩◪
13. tintor+Ke[view] [source] [discussion] 2024-03-02 04:07:40
>>stubis+C4
> Anything safe enough to allow Microsoft or others access to would be safe enough to release publicly.

This makes absolutely no sense.

replies(1): >>rl3+th
◧◩◪◨
14. _heimd+7g[view] [source] [discussion] 2024-03-02 04:28:55
>>lumost+ze
Wouldn't that require notifying all interested parties of the nonprofit, since it's effectively killing off the nonprofit and starting a new entity?
◧◩◪
15. richar+fh[view] [source] [discussion] 2024-03-02 04:39:15
>>ethbr1+u7
No idea, don’t know what they stand for. This is just logic. What do you do if you’re Sam Altman and ChatGPT has blown up like it has, demanding resources just to keep the GPUs running? What is his next move? It’s not business as usual.

The risk is that he’s too confident and screws it up. Or continues on the growth path and becomes the person everyone seems to accuse him of being. But I think he’s not interested in petty shit, scratching around for a few bucks. Why would he be, when he can (try to) save the world?

replies(1): >>ethbr1+Rq
◧◩◪◨
16. rl3+th[view] [source] [discussion] 2024-03-02 04:42:51
>>tintor+Ke
>This makes absolutely no sense.

>>34716375

What about now?

replies(1): >>cutemo+nJ1
◧◩◪◨⬒
17. Wander+Qj[view] [source] [discussion] 2024-03-02 05:11:59
>>pclmul+9d
Interesting thought experiment! How would they best take advantage of the weights, and what observable signs or actions would signal that they likely have them?
replies(1): >>simfre+bZ1
◧◩
18. canjob+Uj[view] [source] [discussion] 2024-03-02 05:13:46
>>tracer+S
The "when applicable" gets them out of nearly anything.
◧◩◪◨
19. Walter+om[view] [source] [discussion] 2024-03-02 05:53:04
>>thepti+ib
This was all obvious >before< they wrote the charter.
replies(1): >>thepti+8F1
◧◩
20. bmitc+3n[view] [source] [discussion] 2024-03-02 06:01:08
>>richar+85
All of this was intentional. The goal the whole time was to eventually pull the rug out from the non-profit. This should be considered fraud.
replies(1): >>sumitk+p53
◧◩
21. aorlof+Hp[view] [source] [discussion] 2024-03-02 06:36:13
>>richar+85
These are things to consider when you decide to build a company using a novel charter instead of a standard startup structure.
◧◩◪◨
22. ethbr1+Rq[view] [source] [discussion] 2024-03-02 06:49:55
>>richar+fh
Money for resources to run ChatGPT is the tail wagging the dog, though.

If you need money to run the publicly released thing you underpriced to seize market share...

... you could also just, not?

And stick to research and releasing results.

At what point does it stop being "necessary" for OpenAI to do bad things to stay competitive and start being about them just running the standard VC playbook underneath a non-profit umbrella?

◧◩
23. stubis+hr[view] [source] [discussion] 2024-03-02 06:54:21
>>richar+85
> After ChatGPT, every for-profit and its dog is going hard.

After ChatGPT was released as a closed product rather than open source, every for-profit raced to reproduce and improve on it. The decision not to release early and often, even under a restrictive license, helped create that competition for funds and talent. If the company had been truly open, competitors would have had the choice of either moving quickly, spending less money, and contributing to the common core, or spending more money and going slower as they clean-room implement the open code they can't use, trying to compete alone. This might have been a huge win for the open source model, making contribution to the commons the profitable decision.

◧◩◪◨⬒
24. HeavyS+ru[view] [source] [discussion] 2024-03-02 07:29:52
>>pclmul+9d
I don't think such speculation would _hold up in court_.
replies(1): >>pclmul+xb1
◧◩◪◨
25. isaacf+vO[view] [source] [discussion] 2024-03-02 11:55:05
>>thepti+ib
Open source would also mean it is available to sanctioned countries like China.
◧◩◪
26. WhatIs+CZ[view] [source] [discussion] 2024-03-02 14:06:07
>>stubis+C4
https://arxiv.org/abs/2311.03348

This seems to make a decent argument that these models are potentially not safe. I'd prefer criminals not have access to a PhD-level bomb-making assistant that can explain the process to them like they are 12. While the cat may be out of the bag, you don't just hand out guns to everyone (for free) because a few people misused them.

◧◩◪◨⬒
27. petre+K71[view] [source] [discussion] 2024-03-02 15:23:48
>>pclmul+9d
They'll train it on Xi Jinping Thought so that the people of China can move on with their lives and use the Xi bot instead of wasting precious man-hours actually studying the texts.

The Russians will obviously use it to spread the Kremlin's narratives on the Internet in all languages, including Klingon and Elvish.

◧◩◪◨⬒⬓
28. pclmul+xb1[view] [source] [discussion] 2024-03-02 15:56:17
>>HeavyS+ru
A quick Google search shows that Microsoft has confirmed at least the Russia part:

https://www.cyberark.com/resources/blog/apt29s-attack-on-mic...

It's very hard to argue that when you give 100,000 people access to materials that are inherently worth billions, none of them are stealing those materials. Google has enough leakers to conservative media, of all places, that you should suspect at least one Googler is exfiltrating data to China, Russia, or India.

29. HarHar+Bh1[view] [source] 2024-03-02 16:46:30
>>stubis+(OP)
> huge amount of damages to be claimed for all the money investors had to spend catching up

Huh? There's no secret to building these LLM-based "AI"s: they all use the same "transformer" architecture that Google published. You can find step-by-step YouTube tutorials on how to build one yourself if you want to.

All OpenAI did was build a series of progressively larger transformers, train them on progressively larger training sets, and document how the capabilities expanded as they scaled up. Anyone paying attention could have done the same at any stage if they wanted to.

The expense of recreating what OpenAI have built isn't in reverse-engineering some architecture they kept secret. The expense is in obtaining the training data and in training the model.
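To make "no secret architecture" concrete, here is a minimal sketch of the standard decoder block that simply gets stacked and widened (assuming PyTorch; the sizes are illustrative, and embeddings, causal masking, and the output head are omitted for brevity):

    # Illustrative sketch only (PyTorch): the architecture is a stack of
    # standard transformer decoder blocks; "progressively larger" mostly
    # means bigger numbers, plus more data and compute.
    import torch.nn as nn

    class Block(nn.Module):
        def __init__(self, d_model: int, n_heads: int):
            super().__init__()
            self.ln1 = nn.LayerNorm(d_model)
            self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.ln2 = nn.LayerNorm(d_model)
            self.mlp = nn.Sequential(
                nn.Linear(d_model, 4 * d_model),
                nn.GELU(),
                nn.Linear(4 * d_model, d_model),
            )

        def forward(self, x):
            a = self.ln1(x)
            x = x + self.attn(a, a, a, need_weights=False)[0]  # self-attention
            return x + self.mlp(self.ln2(x))                   # position-wise MLP

    # Published configs: GPT-2 small vs. GPT-3 differ mainly in these knobs.
    gpt2_small = nn.Sequential(*[Block(768, 12) for _ in range(12)])
    # gpt3_scale = nn.Sequential(*[Block(12288, 96) for _ in range(96)])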

◧◩◪◨⬒
30. thepti+8F1[view] [source] [discussion] 2024-03-02 19:41:01
>>Walter+om
I don’t think this belief was widespread at all at that time.

Indeed, it’s not widespread even now; lots of folks round here are still confused by “open weight sounds like open source and we like open source”, and Elon is still charging towards fully open models.

(In general I think if you are more worried about a baby machine god owned and aligned by Meta than complete annihilation from unaligned ASI then you’ll prefer open weights no matter the theoretical risk.)

replies(1): >>Walter+kI7
◧◩◪◨⬒
31. cutemo+nJ1[view] [source] [discussion] 2024-03-02 20:22:21
>>rl3+th
Microsoft doesn't run troll farms trying to manipulate voters into turning the US into a dictatorship, develop killer drone swarms, or have nukes.

(Not saying OpenAI isn't greedy)

replies(2): >>salawa+QN1 >>rl3+i72
◧◩◪◨⬒⬓
32. salawa+QN1[view] [source] [discussion] 2024-03-02 21:02:09
>>cutemo+nJ1
...What OS do you think many of these places use? Linux is still niche af. In a real, tangible way, it may very well be the case that yes, Microsoft does, in fact, run them.
replies(1): >>Dylan1+2j2
◧◩◪◨⬒⬓
33. simfre+bZ1[view] [source] [discussion] 2024-03-02 22:42:05
>>Wander+Qj
We know Microsoft experienced a full breach of Office 365/Microsoft 365 and Azure infrastructure by a nation state actor: https://www.imprivata.com/blog/strengthening-security-5-less...
◧◩◪◨⬒⬓
34. rl3+i72[view] [source] [discussion] 2024-03-02 23:49:40
>>cutemo+nJ1
I think you make a good point. My argument was that Microsoft's security isn't that great, therefore the risk of the model ending up in the hands of the bad actors you mention isn't sufficiently low.
replies(1): >>cutemo+3K2
◧◩◪◨⬒⬓⬔
35. Dylan1+2j2[view] [source] [discussion] 2024-03-03 01:50:23
>>salawa+QN1
The "..." is not warranted because that is clearly not the sense of "run" they were talking about.
36. orena+as2[view] [source] 2024-03-03 03:40:59
>>stubis+(OP)
They published the source code for Whisper...
◧◩◪◨⬒⬓⬔
37. cutemo+3K2[view] [source] [discussion] 2024-03-03 08:12:43
>>rl3+i72
Aha, ok thanks for explaining
◧◩◪
38. sumitk+p53[view] [source] [discussion] 2024-03-03 13:20:22
>>bmitc+3n
The original charter is nothing more than marketing copy. And companies are legally allowed to change their marketing copy over time; they are not bound to stick to it in behavior. The marketing was for the investors, and they should be the first to know that such promises are subject to how reality unfolds. In other words, a team can raise money by promising milestones, but they are allowed to pivot the whole business, not just abandon milestones, if the reality of the business demands it.
replies(1): >>bmitc+yt3
◧◩◪◨
39. bmitc+yt3[view] [source] [discussion] 2024-03-03 17:09:04
>>sumitk+p53
That is not true for non-profits. The charter is legally binding.
◧◩◪◨⬒⬓
40. Walter+kI7[view] [source] [discussion] 2024-03-05 01:41:36
>>thepti+8F1
IMHE, it's been part of widespread discussions in the AI research and AI safety communities since the 2000s.