zlacker

[parent] [thread] 15 comments
1. nostra+(OP)[view] [source] 2023-11-18 02:31:06
Yeah, this is more abrupt and more direct than any CEO firing I've ever seen. For comparison, when Travis Kalanick was ousted from Uber in 2017, he "resigned" and then was able to stay on the board until 2019. When Equifax had their data breach, it took 4 days for the CEO to resign, and the board then retroactively changed it to "fired for cause". With the Volkswagen emissions scandal, it took 20 days for the CEO to resign (again, not fired) despite the threat of criminal proceedings.

You don't fire your CEO and call him a liar if you have any choice about it. That just invites a lawsuit, bad blood, and a poor reputation in the very small circles of corporate executives and board members.

That makes me think that Sam did something on OpenAI's behalf that could be construed as criminal, and the board had to fire him immediately and disavow all knowledge ("not completely candid") so that they don't bear any legal liability. It also fits with the new CEO being the person previously in charge of safety, governance, ethics, etc.

That Greg Brockman, Eric Schmidt, et al. are defending Altman makes me think that this is in a legal grey area, something new, and it was done on behalf of training better models. Something that an ends-justifies-the-means technologist could look at and think "Of course, why aren't we doing that?" while a layperson would be like "I can't believe you did that." It's probably not something mundane like copyright infringement or web scraping or even GDPR/CalOPPA violations though - those are civil penalties, and wouldn't make the board panic as strongly as they did.

replies(4): >>jonas_+C >>wly_cd+C1 >>userna+rF >>watwut+cJ
2. jonas_+C[view] [source] 2023-11-18 02:36:35
>>nostra+(OP)
what examples are you considering here, bioweapons?
replies(3): >>lucubr+93 >>hyperc+54 >>VirusN+wx
3. wly_cd+C1[view] [source] 2023-11-18 02:43:11
>>nostra+(OP)
Human cloning
replies(1): >>koolba+D2
4. koolba+D2[view] [source] [discussion] 2023-11-18 02:52:41
>>wly_cd+C1
Actual humans or is this a metaphor for replicating the personas of humans via an LLM?
5. lucubr+93[view] [source] [discussion] 2023-11-18 02:55:46
>>jonas_+C
I don't think the person you are replying to is correct, because the only technological advancement I can think of that could qualify - a new OpenAI artifact providing schematics for it - is Drexler-wins-Smalley-sucks style nanotechnology that could be used to build computation. That would be the sort of thing where, if you're in favour of building the AI faster, you're like "Why wouldn't we do this?", and if you're worried the AI may be trying to release a bioweapon to escape, you're like "How could you even consider building to these schematics?".

I don't think it's correct not because it sounds like a sci-fi novel, but because I think it's unlikely that it's even remotely plausible that a new version of their internal AI system would be good enough at this point in time to do something like that, considering all the many problems that need to be solved for Drexler to be right.

I think it's much more likely that this was an ideological disagreement about safety in general rather than about any specific breakthrough or technology, and that Ilya got the backing of US NatSec types (apparently their representative on the board sided with him) to get Sam ousted.

replies(1): >>Apocry+35
6. hyperc+54[view] [source] [discussion] 2023-11-18 03:02:48
>>jonas_+C
Well OpenAI gets really upset when you ask it to design a warp drive so maybe that was it.
7. Apocry+35[view] [source] [discussion] 2023-11-18 03:10:31
>>lucubr+93
> I don't think it's correct not because it sounds like a sci-fi novel, but because I think it's unlikely that it's even remotely plausible that a new version of their internal AI system would be good enough at this point in time to do something like that

Aren't these synonymous at this point? The conceit that you can point AGI at any arbitrary speculative sci-fi concept and it can just invent it is a sci-fi trope.

replies(1): >>lucubr+Kj
8. lucubr+Kj[view] [source] [discussion] 2023-11-18 04:56:13
>>Apocry+35
No, not really. Calling something "science-fiction" at the present moment is generally an insult intended to say something along the lines of "You're an idiot for believing this made-up children's story could be real, it's like believing in fairies". That is of course a really dumb thing to say, because science fiction has a very long history of predicting technological advances (the internet, tanks, video calls, not just phones but flip phones, submarines, television, the lunar landing, credit cards, aircraft, robotics, drones, tablets, bionic limbs, antidepressants), so the idea that because something appears in science fiction it is therefore stupid to think it could be a real possibility for separate reasons is really, really dumb. It would also be dumb to think something is possible only because it exists in science fiction, like how many people think about faster-than-light travel, but science fiction is not why people believe AGI is possible.

Basically, there's a huge difference between "I don't think this is a feasible explanation for X event that just happened for specific technical reasons" (good) and "I don't think this is a possible explanation of X event that just happened because it has happened in science fiction stories, so it cannot be true" (dumb).

About nanotechnology specifically, if Drexler from Drexler-Smalley is right then an AGI would probably be able to invent it by definition. If Drexler is right that means it's in principle possible and just a matter of engineering, and an AGI (or a narrow superhuman AI at this task) by definition can do that engineering, with enough time and copies of itself.

replies(1): >>Apocry+El
9. Apocry+El[view] [source] [discussion] 2023-11-18 05:11:48
>>lucubr+Kj
How would a superhuman intelligence invent a new non-hypothetical actually-working device without actually conducting physical experiments, building prototypes, and so on? By conducting really rigorous meta-analysis of existing research papers? Every single example you listed involved work IRL.

> with enough time and copies of itself.

Alright, but that's not what the previous post was hypothesizing, which is that OpenAI was possibly able to do that without physical experimentation.

replies(1): >>lucubr+Dv
10. lucubr+Dv[view] [source] [discussion] 2023-11-18 06:31:01
>>Apocry+El
Yes, the sort of challenges you're talking about are pretty much exactly why I don't consider it feasible that OpenAI has an internal system at that level yet. I would consider it to be at the reasonable limits of possibility that they could have an AI that could give a very convincing, detailed, & feasible "grant proposal" style plan for answering those questions, which wouldn't qualify for the OP's comment.

With a more advanced AI system, one that could build better physics simulation environments, write software that's near maximally efficient, design better atomic modelling and tools than currently exist, and put all of that into thinking through a plan to achieve the technology (effectively experimenting inside its own head), I could maybe see it being possible for it to make it without needing the physical lab work. That level of cognitive achievement is what I think it's infeasible for OpenAI to have internally right now, for a couple of reasons. One, it's extremely far ahead of everything else, to the point that I think they'd need recursive self-improvement to have gotten there, and I know for a fact there are many people at OpenAI who would rebel before letting a recursively self-improving AI get to that point. And two, if they lucked into something that capable by some freak accident, they wouldn't be able to keep it quiet for a few days, let alone a few weeks.

Basically, I don't think "a single technological advancement that product wants to implement and safety thinks is insane" is a good candidate for what caused the split, because there aren't that many such single technological advancements I can think of and all of them would require greater intelligence than I think is possible for OpenAI to have in an AI right now, even in their highest quality internal prototype.

replies(1): >>astran+cM2
11. VirusN+wx[view] [source] [discussion] 2023-11-18 06:52:37
>>jonas_+C
Promising not to train on Microsoft's customer data, and then training on MSFT customer data.
12. userna+rF[view] [source] 2023-11-18 08:06:50
>>nostra+(OP)
You are comparing this to corporate scandals, but the alternative theory in this forum seems to be a power struggle, and power struggles have completely different mechanics.

Think of it as the difference between a vote of no confidence and a coup. In the first case you let things simmer for a bit to allow you to wheel and deal and to arrange for the future. In the second case, even in the case of a parliamentary coup like the 9th of Thermidor, the most important thing is to act fast.

replies(1): >>notaha+Xn1
13. watwut+cJ[view] [source] 2023-11-18 08:41:25
>>nostra+(OP)
Yeah, but Uber is a completely different organization. The boards you mention were likely complicit in the stuff they kicked their CEOs out over.
14. notaha+Xn1[view] [source] [discussion] 2023-11-18 13:48:28
>>userna+rF
A boardroom coup isn't remotely like a real one, where you look for the gap where the guards and guns aren't and worry about the deposed leader being reinstated by an angry mob.

If they had the small majority needed to get rid of him over mere differences of future vision, they could have done so on whatever timescale they felt like, with no need to rush the departure and certainly no need for the goodbye to be inflammatory and potentially legally actionable.

15. astran+cM2[view] [source] [discussion] 2023-11-18 21:49:23
>>lucubr+Dv
> With a more advanced AI system, one that could build better physics simulation environments, write software that's near maximally efficient, design better atomic modelling and tools than currently exist, and put all of that into thinking through a plan to achieve the technology (effectively experimenting inside its own head), I could maybe see it being possible for it to make it without needing the physical lab work.

It couldn't do any of that because it would cost money. The AGI wouldn't have money to do that because it doesn't have a job. It would need to get one to live, just like humans do, and then it wouldn't have time to take over the world, just like humans don't.

An artificial human-like superintelligence is incapable of being superhuman because it is constrained in all the same ways humans are, and those constraints aren't just "they can't think fast enough".

replies(1): >>lucubr+Ud6
16. lucubr+Ud6[view] [source] [discussion] 2023-11-19 21:17:11
>>astran+cM2
I think you're confused. We're talking about a hypothetical internal OpenAI prototype, and the specific example you listed is one I said wasn't feasible for the company to have right now. The money would come from the same budget that funds the rest of OpenAI's research.