zlacker

[parent] [thread] 16 comments
1. achron+(OP)[view] [source] 2023-11-22 11:29:14
No, if they had vastly different information, and if it was on the right side of their own stated purpose & values, they would have behaved very differently. This kind of equivocation gets in the way of far more important questions, such as: just what the heck is Larry Summers doing on that board?
replies(9): >>vasco+a1 >>dontup+M2 >>cyanyd+a3 >>hobofa+h3 >>shmatt+a4 >>383210+F5 >>T-A+ca >>mrangl+la >>Burnin+ni
2. vasco+a1[view] [source] 2023-11-22 11:39:53
>>achron+(OP)
I think this is a good question. One should look at what actually happened in practice. What was the previous board, and what is the current board? For the leadership team, what are the changes? Additionally, was information revealed about who calls the shots, which can inform who will drive future decisions? Anything else about the in-betweens is, to me, smoke and mirrors.
3. dontup+M2[view] [source] 2023-11-22 11:53:51
>>achron+(OP)
> just what the heck is Larry Summers doing on that board?

1. Did you really think the feds wouldn't be involved?

AI is part of the next geopolitical cold war/realpolitik of nation-states. Up until now it's just been used for passively collecting and spying on data. And yes, they absolutely will be using it in the military, probably after Israel or some other Western-aligned nation gives it a test run.

2. Considering how much impact it will have on the entire economy by being able to put many white-collar workers out of work, a seasoned economist makes sense.

The East Coast runs the joint. The West Coast just does the public-facing tech stuff and takes the heat from the public.

replies(2): >>chucke+m7 >>jddj+u7
4. cyanyd+a3[view] [source] 2023-11-22 11:56:35
>>achron+(OP)
I assume Larry Summers is there to ensure the proper bi-partisan choices are made by what's clearly now a _business_ product and not a product for humanity.

Which is utterly scary.

5. hobofa+h3[view] [source] 2023-11-22 11:57:05
>>achron+(OP)
> of their own stated purpose & values

You mean the official stated purpose of OpenAI. The stated purpose that is constantly contradicted by many of their actions, and that I think nobody has taken seriously for years.

From everything I can tell, the people working at OpenAI have always cared more about advancing the space and building great products than "openness" and "safe AGI". The official values of OpenAI were never "their own".

replies(2): >>Wander+J5 >>bnralt+ib
6. shmatt+a4[view] [source] 2023-11-22 12:02:57
>>achron+(OP)
He’s a white male replacing a female board member. Which is probably what they wanted all along.
replies(1): >>dbspin+D4
7. dbspin+D4[view] [source] [discussion] 2023-11-22 12:05:52
>>shmatt+a4
Yes, the patriarchy collectively breathed a sigh of relief as one of our agents was inserted to prevent any threat from the other side.
8. 383210+F5[view] [source] 2023-11-22 12:14:31
>>achron+(OP)
> just what the heck is Larry Summers doing on that board?

Probably precisely what Condoleezza Rice was doing on Dropbox’s board. Or what that board filled with national-security-state heavyweights was doing at that “visionary” and her blood-testing thingie.

https://www.wired.com/2014/04/dropbox-rice-controversy/

https://en.wikipedia.org/wiki/Theranos#Management

In other possibly related news: https://nitter.net/elonmusk/status/1726408333781774393#m

“What matters now is the way forward, as the DoD has a critical unmet need to bring the power of cloud and AI to our men and women in uniform, modernizing technology infrastructure and platform services technology. We stand ready to support the DoD as they work through their next steps and its new cloud computing solicitation plans.” (2021)

https://blogs.microsoft.com/blog/2021/07/06/microsofts-commi...

9. Wander+J5[view] [source] [discussion] 2023-11-22 12:14:53
>>hobofa+h3
“never” is a strong word. I believe that in the RL era of OpenAI they were quite aligned with the mission/values.
10. chucke+m7[view] [source] [discussion] 2023-11-22 12:26:03
>>dontup+M2
Yeah, I think Larry is there because ChatGPT has become too important for the USA.
11. jddj+u7[view] [source] [discussion] 2023-11-22 12:26:44
>>dontup+M2
The timing of the semiconductor export controls is another datapoint here in support of #1.

Not that it's really in need of additional evidence.

12. T-A+ca[view] [source] 2023-11-22 12:48:35
>>achron+(OP)
> what the heck is Larry Summers doing on that board?

The former president of a research-oriented nonprofit (Harvard U) controlling a revenue-generating entity (Harvard Management Co) worth tens of billions, ousted for harboring views considered harmful by a dominant ideological faction of his constituency? I guess he's expected to have learned a thing or two from that.

And as an economist with a stint of heading the treasury under his belt, he's presumably expected to be able to address the less apocalyptic fears surrounding AI.

13. mrangl+la[view] [source] 2023-11-22 12:49:25
>>achron+(OP)
Said purpose and values are, very obviously, nothing more than an attempted control lever for dark actors: people/factions that gain handholds which otherwise wouldn't exist, and exert control through social-pressure nonsense that they don't believe in themselves. The same pattern can be seen in modern street-brawl politics, which uses the same terminology to the same effect, and it can be inferred to apply here given OAI's novel and convoluted corporate structure relative to the importance of its tech.

We just witnessed the war for that power play out, partially. But don't worry; see what follows. Nothing is opaque about the appointment of Larry Summers: very obviously, he's the government's seat on the board (see 'dark actors', now a little more into the light). Which is why I noted that the power competition only partially played out. Altman is now unfireable, at least at this stage, and yet it would be irrational to think that this strategic mistake would inspire the most powerful actor to release its grip. The handhold has only been adjusted.

14. bnralt+ib[view] [source] [discussion] 2023-11-22 12:57:06
>>hobofa+h3
> From everything I can tell, the people working at OpenAI have always cared more about advancing the space and building great products than "openness" and "safe AGI".

Board member Helen Toner strongly criticized OpenAI for publicly releasing its GPT when it did rather than keeping it closed for longer. That would seem to be working against openness to many people, but others would see it as working towards safe AI.

The thing is, people have radically different ideas about what "openness" and "safe" mean. There's a lot of talk about whether or not OpenAI stuck with its stated purpose, but there's no consensus on what that purpose actually means in practice.

15. Burnin+ni[view] [source] 2023-11-22 13:42:02
>>achron+(OP)
Larry Summers is everywhere and does everything.
replies(1): >>Turing+Sk
16. Turing+Sk[view] [source] [discussion] 2023-11-22 13:53:54
>>Burnin+ni
At the same time?
replies(1): >>marcos+zK
17. marcos+zK[view] [source] [discussion] 2023-11-22 15:42:57
>>Turing+Sk
All at once.