zlacker

3 comments
1. pests+(OP) 2023-11-19 23:15:48
Is AI/AGI safety the same as openness?
replies(3): >>rvnx+l >>_heimd+R1 >>INGSOC+kc
2. rvnx+l 2023-11-19 23:18:25
>>pests+(OP)
According to OpenAI investment paperwork:

It would be wise to view any investment in OpenAI Global, LLC in the spirit of a donation. The Company exists to advance OpenAI, Inc.'s mission of ensuring that safe artificial general intelligence is developed and benefits all of humanity. The Company's duty to this mission and the principles advanced in the OpenAI, Inc. Charter take precedence over any obligation to generate a profit. The Company may never make a profit, and the Company is under no obligation to do so. The Company is free to re-invest any or all of the Company's cash flow into research and development activities and/or related expenses without any obligation to the Members.

I guess "safe artificial general intelligence is developed and benefits all of humanity" means an AI that is both open (hence the name) and safe.

3. _heimd+R1 2023-11-19 23:27:53
>>pests+(OP)
No, though I think OpenAI at least wants to achieve both.

Whether we can actually safely develop AI or AGI is a much tougher question than whether that's the intent, unfortunately.

4. INGSOC+kc 2023-11-20 00:29:32
>>pests+(OP)
no. it's anti-openness. the true value in ai/agi is the ability to control the output. the "safe" part of this is controlling the political slant that "open" ai models would allow. the technology itself has much less value than the control it gives to those who decide what is "safe" and what isn't. it's akin to raiding the libraries and removing any book, idea, or reference to a historical event that isn't culturally popular.

this is the future that orwell feared.
