zlacker

[parent] [thread] 8 comments
1. zeroha+(OP)[view] [source] 2023-11-22 19:05:51
Google, Meta, and now OpenAI. So long, responsible AI and safety guardrails. Hello, big money.

Disappointed by the outcome, but perhaps mission-driven AI development -- the reason OpenAI was founded -- was never possible.

Edit: I applaud the board members for (apparently) trying to stand up for the mission (aka doing the job they were put on the board to do), even if their efforts were doomed.

replies(2): >>risho+V1 >>pauldd+X2
2. risho+V1[view] [source] 2023-11-22 19:15:26
>>zeroha+(OP)
You just don't understand how markets work. If OpenAI slows down, they will just be driven out by competition. That's fine if that's what you think they should do, but it won't make AI any safer; it will just kill OpenAI and have them replaced by someone else.
replies(2): >>zeroha+Z2 >>Wander+j3
3. pauldd+X2[view] [source] 2023-11-22 19:19:51
>>zeroha+(OP)
> I applaud the board members for (apparently) trying to stand up for the mission

What about this is apparent to you?

What statement has the board made indicating that they fired Altman "for the mission"?

Have I missed something?

replies(1): >>alsetm+n5
4. zeroha+Z2[view] [source] [discussion] 2023-11-22 19:20:02
>>risho+V1
You're right about market forces, however:

1) OpenAI was explicitly founded NOT to develop AI based on "market forces"; it's just that they "pivoted" (aka abandoned their mission) and became market-driven once they struck gold.

2) This is exactly the reasoning behind nuclear arms races.

5. Wander+j3[view] [source] [discussion] 2023-11-22 19:21:48
>>risho+V1
You can still be a force for decentralization by creating actually open AI. For now, it seems like Meta's AI research is the real open AI.
replies(1): >>insani+A5
6. alsetm+n5[view] [source] [discussion] 2023-11-22 19:31:12
>>pauldd+X2
To me, commentary online and on podcasts leans heavily on the idea that he appears (from the outside) to be very focused on money, in seeming contradiction to the company charter:

> Our primary fiduciary duty is to humanity.

Also, the language of the charter waters down a stronger commitment that was in the first version. Others have quoted it, and I'm sure you can find it on the Internet Archive.

replies(1): >>pauldd+nb
7. insani+A5[view] [source] [discussion] 2023-11-22 19:32:51
>>Wander+j3
What does "actually open" mean? And how is that more responsible? If the ethical concern about AI is that it's too powerful or whatever, isn't building it in the open worse?
replies(1): >>Wander+B6
8. Wander+B6[view] [source] [discussion] 2023-11-22 19:38:24
>>insani+A5
Depends on how you interpret the mission statement of building AI for all of humanity. Is humanity really better off if AI only accrues to one or a few centralised entities?
9. pauldd+nb[view] [source] [discussion] 2023-11-22 19:59:49
>>alsetm+n5
> commentary online and on podcasts

:/
