zlacker

[parent] [thread] 16 comments
1. fsloth+(OP)[view] [source] 2023-11-18 07:33:40
Quite possible actually, this seems to be becoming a really hot political potato, with at least 3 types of ambition driving it: 1. Business 2. Regulatory 3. ’Religious/Academic’. By the latter I mean the divide between AI doomerists and others, which is caused by insubstantiable dogma (doom/nirvana).
replies(2): >>waihti+91 >>concor+Nf
2. waihti+91[view] [source] 2023-11-18 07:46:03
>>fsloth+(OP)
this is why you don't bring NGO types into your board, and you especially don't give them power to oust you.
replies(2): >>mrmann+L3 >>CPLX+Tp
◧◩
3. mrmann+L3[view] [source] [discussion] 2023-11-18 08:08:44
>>waihti+91
> this is why you don't bring NGO types into your board

OpenAI is an NGO…?

replies(1): >>glompe+V8
◧◩◪
4. glompe+V8[view] [source] [discussion] 2023-11-18 08:56:05
>>mrmann+L3
That is neither stated nor implied, unless you’re simply making the objection, “But OpenAI _is_ nongovernmental.”

Most readers are aware they were a research and advocacy organization that “became” a corporation by creating one (in the sense that public-benefit, tax-exempt nonprofit groups and charitable foundations normally have no way of granting anyone equity ownership or exclusive rights to their output); the parent comment only implies that some of the board members come from NGO-type backgrounds.

replies(2): >>emn13+ai >>mrmann+tn2
5. concor+Nf[view] [source] 2023-11-18 09:57:07
>>fsloth+(OP)
> insubstantiable dogma (doom/nirvana)

What do you mean by this? Looks like you're just throwing out a diss on the doomer position (most doomers don't think near-future LLMs are concerning).

replies(1): >>fsloth+nz
◧◩◪◨
6. emn13+ai[view] [source] [discussion] 2023-11-18 10:15:47
>>glompe+V8
I'm not sure I understand what you're saying. Perhaps you could point out where your perspective differs from mine? So, as I see it: OpenAI _is_ a non-profit, though it has an LLC it wholly controls that doesn't have non-profit status. It never "became" for-profit (IANAL, but is that even possible? It seems like it shouldn't be); the only thing that happened is that the LLC was allowed to collect some "profit" - but that in turn would go to its owners, primarily the non-profit. As far as I'm aware, the board that went through this purge _was_ the non-profit's board (does the LLC even have a board?)

From the non-profit's perspective, it sounds pretty reasonable to self-police and ensure there aren't any rogue parts of the organization going off and working at odds with the non-profit's formal aims. It's always been weird that the OpenAI LLC seemed so commercially focused even when that might conflict with its sole controller's interests; notably, the LLC very explicitly warned investors that the NGO's mission took precedence over profit.

◧◩
7. CPLX+Tp[view] [source] [discussion] 2023-11-18 11:18:55
>>waihti+91
What does “your” board mean in this context? Who’s “your”?

The CEO just works for the organization and the board is their boss.

You’re referencing a founder situation, where the CEO is also a founder with equity, so in effect the board also reports to them.

This isn’t that. Altman didn’t own anything, it’s not his company, it’s a non-profit. He just works there. He got fired.

replies(1): >>waihti+9w
◧◩◪
8. waihti+9w[view] [source] [discussion] 2023-11-18 12:04:08
>>CPLX+Tp
I believe Altman had some ownership; however, it's a general lesson about handing over substantial power to laymen who are completely detached from the actual ops & know-how of the company
replies(2): >>airstr+Sx >>CPLX+fA
◧◩◪◨
9. airstr+Sx[view] [source] [discussion] 2023-11-18 12:17:13
>>waihti+9w
nobody handed over power. presumably they were appointed to the board to do exactly what they did (if this theory holds), in which case this outcome would be a feature, not a bug
◧◩
10. fsloth+nz[view] [source] [discussion] 2023-11-18 12:25:36
>>concor+Nf
Neither AI fears nor the singularity is substantiated. Hence the discussion is a matter of taste and opinion, not of facts. They will only be substantiated once one or the other comes to fruition. The fact that it's a matter of taste and opinion only makes the discussion that much more heated.
replies(2): >>JohnFe+eI >>concor+eP
◧◩◪◨
11. CPLX+fA[view] [source] [discussion] 2023-11-18 12:31:40
>>waihti+9w
There’s no such thing as owning a non-profit.
◧◩◪
12. JohnFe+eI[view] [source] [discussion] 2023-11-18 13:27:16
>>fsloth+nz
In my opinion, if either extreme turns out to be correct it will be a disaster for everyone on the planet. I also think that neither extreme is correct.
◧◩◪
13. concor+eP[view] [source] [discussion] 2023-11-18 14:08:06
>>fsloth+nz
Wouldn't this put AI doomerism in the same category as nuclear war doomerism? I.e. a thing that many experts think logically could happen and would be very bad, but hasn't happened yet?
replies(2): >>jknoep+TZ >>fsloth+s91
◧◩◪◨
14. jknoep+TZ[view] [source] [discussion] 2023-11-18 15:07:55
>>concor+eP
I'm unaware of an empirical demonstration of the feasibility of the singularity hypothesis. Annihilation by nuclear or biological warfare, on the other hand, we have ample empirical precedent for.

We have ample empirical precedent for worrying about things like AI ethics, automated trading going off the rails and causing major market disruptions, transparency around the use of algorithms in legal/medical/financial/etc. decision-making, oligopolies on AI resources, etc.... those are demonstrably real, but also obviously very different in kind from a generalized AI doomsday.

◧◩◪◨
15. fsloth+s91[view] [source] [discussion] 2023-11-18 16:06:42
>>concor+eP
That’s an excellent example of why AI doomerism is bogus in a way that nuclear war fears were not.

Nuclear war had a very simple mechanistic concept behind it.

Both sides develop nukes (proven tech), put them on ballistic missiles (proven tech). Something goes politically sideways and things escalate (just like in WW1). Firepower levels cities and results in tens of millions dead (just like in WW2, again proven).

Nuclear war experts were actually experts in a system whose outcome you could compute to a very high degree.

There is no mechanistic model behind AI doom scenarios. There is no expert logically proposing a specific extinction scenario.

You can already trivially load up a car with explosives, drive it to a nearby large building, and cause massive damage and injury.

Yes, it’s plausible a lone genius could manufacture something horrible in their garage and let rip. But this is in the domain of ’fictional what-ifs’.

Nobody factors in the fact that in the presence of such a high-quality AI ecosystem, the opposing force probably has AI systems of their own to help counter the threat (megaplague? Quickly synthesize a mega-vaccine and just print it out at your local health center's biofab. Megabomb? Possible even today, but that's why stuff like uranium is tightly controlled. Etc etc). I hope everyone realizes all the latter examples are fictional fearmongering without any basis in known cases.

AI would be such a boon for the whole of humanity that shackling it is absolutely silly. That said, there is no evidence of a deus ex machina happy ending either. My position is: let researchers research, and once something substantial turns up and solid mechanistic principles can be referred to, then engage the policy wonks.

replies(1): >>concor+ui2
◧◩◪◨⬒
16. concor+ui2[view] [source] [discussion] 2023-11-18 22:36:12
>>fsloth+s91
> There is no mechanistic model behind AI doom scenarios. There is no expert logically proposing a specific extinction scenario.

You don't actually seem familiar with doomer talking points. The classic metaphor is that you might not be able to say specifically how Magnus Carlsen will beat you at chess, even if you start the game with him down a pawn, while nonetheless knowing he probably will. Predicting the outcome doesn't require predicting the exact moves.

The main way doomers think an ASI might kill everyone is via the medium of communicating with people and convincing them to do things, mostly seemingly harmless or sensible things.

It's also worth noting that doomers are not (normally) concerned about LLMs (at least, not any in the pipeline); they're concerned about:

* the fact that we don't know how to ensure any intelligence we construct actually shares our goals in a manner that will persist outside the training domain (funnily enough, this also applies to humans: you can try instilling values into them with school or parenting, but despite sharing our mind design they still do unintended things...). And indeed, optimization processes (such as evolution) have produced optimization processes (such as human cultures) that don't share the original one's "goals" (hence the invention of contraception and almost every developed country having below-replacement fertility).

* the fact that recent history has had the smartest creatures (humans) taking almost complete control of the biosphere, with the less intelligent creatures living or dying on the whims of the smarter ones.

◧◩◪◨
17. mrmann+tn2[view] [source] [discussion] 2023-11-18 23:05:04
>>glompe+V8
My objection is that OpenAI, at least to my knowledge, is still a non-profit organization that is not part of the government and has some kind of public-benefit goals - that sounds like an NGO to me. Thus appointing “NGO types” to the board sounds reasonable: they have experience running that kind of organization.

Many NGOs run limited liability companies and for-profit businesses as part of their operations; that's in no way unique to OpenAI. Girl Scout cookies are an example.
