zlacker

[parent] [thread] 216 comments
1. gkober+(OP)[view] [source] 2023-11-18 23:00:36
I'd bet money Satya was a driver of this reversal.

I genuinely can't believe the board didn't see this coming. I think they could have won in the court of public opinion if their press release said they loved Sam but felt like his skills and ambitions diverged from their mission. But instead, they tried to skewer him, and it backfired completely.

I hope Sam comes back. He'll make a lot more money if he doesn't, but I trust Sam a lot more than whomever they ultimately replace him with. I just hope that if he does come back, he doesn't use it as a chance to consolidate power – he's said in the past it's a good thing the board can fire him, and I hope he finds better board members rather than eschewing a board altogether.

EDIT: Yup, Satya is involved https://twitter.com/emilychangtv/status/1726025717077688662

replies(15): >>Jensso+J1 >>brigad+42 >>ren_en+a2 >>mycolo+n2 >>x0x0+83 >>codept+e3 >>px43+r3 >>jwnin+i7 >>locall+g8 >>felixg+h8 >>okdood+U8 >>jonpla+q9 >>Grimbl+Ca >>tptace+1b >>383210+zb
2. Jensso+J1[view] [source] 2023-11-18 23:07:02
>>gkober+(OP)
> I hope Sam comes back

Why? We would have more diversity in this space if he leaves: we would get another AI startup with huge funding and know-how from OpenAI, while OpenAI would become less Sam Altman-like.

I think him staying is bad for the field overall compared to OpenAI splitting in two.

replies(12): >>janeje+u2 >>gkober+R2 >>tfehri+13 >>huevos+i3 >>peyton+f4 >>autaut+u4 >>skwirl+Z4 >>Meekro+96 >>naremu+J8 >>static+Q8 >>t_mann+qc >>toss1+zc
3. brigad+42[view] [source] 2023-11-18 23:08:30
>>gkober+(OP)
I bet it was multifaceted. By firing Sam this way they nuked their ability to raise funds because anyone investing in the "for profit" subsidiary would have to do so with the understanding that the non-profit could undermine them at a whim.

Also, all the employees are being paid with PPUs, which are a share in future profits, and now they find out that, actually, the company doesn't care about making profit!

A lot of top talent with internal know-how will be poached left and right, many probably going to Sam's OpenAI clone, which he will raise billions for with a single call.

replies(2): >>caturo+j4 >>s3p+7Z1
4. ren_en+a2[view] [source] 2023-11-18 23:08:51
>>gkober+(OP)
everything about it screams amateur hour, from the language and timing of the press release, to the fact that they didn't notify Microsoft, to how they apparently completely failed to see how employees and customers would react to the news. Ilya saying the circumstances for Altman's removal "weren't ideal" shows how naive they were. They had no PR strategy to control the narrative and let rumors run wild.

I doubt he returns, now he can start a for profit AI company, poach OpenAI's talent, and still look like the good guy in the situation. He was apparently already talking to Saudis to raise billions for an Nvidia competitor - >>38323939

Have to wonder how much this was contrived as a win-win, either OpenAI board does what he wants or he gets a free out to start his own company without looking like he's purely chasing money

replies(2): >>spacem+P2 >>RobPfe+X7
5. mycolo+n2[view] [source] 2023-11-18 23:09:34
>>gkober+(OP)
> I think they could have won in the court of public opinion ... [but] they tried to skewer him, and it backfired completely

Maybe we have different definitions of "the court of public opinion". Most people don't know who Sam Altman is, and most of the people who do know don't have strong opinions on his performance as OpenAI's CEO. Even on HN, the reaction to the board "skewer[ing] him" has been pretty mixed, and mostly one of confusion and waiting to see what else happens.

This quick a turnaround does make the board look bad, though.

replies(6): >>minima+Z2 >>kmlevi+b3 >>gkober+c3 >>eastbo+q3 >>rvba+cH1 >>bob_th+zN1
◧◩
6. janeje+u2[view] [source] [discussion] 2023-11-18 23:10:00
>>Jensso+J1
Honestly, I would be super interested to see what a hypothetical "SamAI" corp would look like, and what they would bring to the table. More competition, but also probably fewer ideological disagreements to distract them from building AI/AGI.
replies(2): >>apppli+d3 >>btown+j5
◧◩
7. spacem+P2[view] [source] [discussion] 2023-11-18 23:12:30
>>ren_en+a2
This story that they want him back turns it from amateur hour to peak clownshow.

This is why you need someone with business experience running an organization. Ilya et al. might be brilliant scientists, but these folks are not equipped to deal with the nuances of managing a ship as heavily scrutinised as OpenAI.

replies(2): >>ren_en+d5 >>hesdea+L5
◧◩
8. gkober+R2[view] [source] [discussion] 2023-11-18 23:12:32
>>Jensso+J1
Competition may be good for profit, but it's not good for safety. The balance between the two factions inside OpenAI is a feature, not a bug.
replies(5): >>himara+f3 >>coreth+24 >>Meekro+N5 >>ta988+T5 >>spacem+da
◧◩
9. minima+Z2[view] [source] [discussion] 2023-11-18 23:13:11
>>mycolo+n2
You underestimate how many people are aware of OpenAI after ChatGPT's viral success.

The news yesterday broke out of the tech/AI bubble, and there would have been much more press on it if it hadn't been done as a Friday news dump.

replies(1): >>valian+R4
◧◩
10. tfehri+13[view] [source] [discussion] 2023-11-18 23:13:15
>>Jensso+J1
My main concern is that a new Altman-led AI company would be less safety-focused than OpenAI. I think him returning to OpenAI would be better for AI safety, hard to say whether it would be better for AI progress though.
replies(3): >>apalme+w3 >>noober+M3 >>silenc+q5
11. x0x0+83[view] [source] 2023-11-18 23:13:31
>>gkober+(OP)
1 - not running a move like this by the company that invested a reported $10 billion;

2 - clearly not having spent even 10 seconds thinking about the (obvious) reaction of employees on learning the CEO of what seems like a generational company was fired out of the blue. Or the reaction to the (high) likelihood of a cofounder following him out the door.

3 - And they didn't even carefully think through the reaction to the press release, which hinted at some real wrongdoing by Altman.

3a - anyone want to bet if they even workshopped the press release with attorneys or just straight yolo'd it? No chance a thing like this could end up in court...

They've def got the A team running things... my god.

replies(1): >>chasd0+uy
◧◩
12. kmlevi+b3[view] [source] [discussion] 2023-11-18 23:13:50
>>mycolo+n2
What matters is what investors think, and by majority they seem very unhappy with all of this.

Speaking for myself, if they had framed this as a difference in vision, I would be willing to listen. But instead they implied that he had committed some kind of categorical wrongdoing. After it became clear that wasn’t the case, it just made them look incompetent.

replies(1): >>dannyw+06
◧◩
13. gkober+c3[view] [source] [discussion] 2023-11-18 23:14:00
>>mycolo+n2
I mean, they're (allegedly) trying to get him to come back 24 hours later... so it's safe to say it did indeed backfire completely.

Sure, the average person doesn't care about Sam. But among the people who matter, Sam certainly came out on top.

replies(3): >>foursi+z6 >>vatuei+oa >>sumedh+Tv
◧◩◪
14. apppli+d3[view] [source] [discussion] 2023-11-18 23:14:09
>>janeje+u2
I mean this as an honest question, but what does Sam bring to the table that any other young and high performing CEO wouldn’t? Is he himself particularly material to OpenAI?
replies(5): >>janeje+04 >>Solven+e4 >>Rivier+V6 >>coffee+9a >>smegge+zj
15. codept+e3[view] [source] 2023-11-18 23:14:12
>>gkober+(OP)
I agree, the way they did this just shows incompetence and recklessness.

Even if they are making the right call, you can't really trust them after ruining the reputation and trust of the company like this.

◧◩◪
16. himara+f3[view] [source] [discussion] 2023-11-18 23:14:20
>>gkober+R2
The opposite: competition erodes profits. Hard to predict which alternative improves safety long term.
replies(1): >>coffee+B9
◧◩
17. huevos+i3[view] [source] [discussion] 2023-11-18 23:14:37
>>Jensso+J1
Yeah, I feel like this is another Traitorous Eight moment.

I want a second OpenAI split (the first being Anthropic?). Having Anthropic, OpenAI, SamGregAi, Stability, Mistral, and more competing on foundation models will further increase the pressure to open source.

It seems like there is a lull in returns to model size; if that's the case, then there's even less basis for having all the resources under a single umbrella.

replies(1): >>zaptre+Nx
◧◩
18. eastbo+q3[view] [source] [discussion] 2023-11-18 23:15:21
>>mycolo+n2
Shouldn’t the board resign in that case?

That would also remedy the appearance of total incompetence of this clown show, in addition to admitting that the board and Sam don't fit with each other, and it would restore confidence for the next investor that their money is properly managed. At the moment, no one would invest in a company that can be undermined by its non-profit, with a (probably) disparaging press release a few minutes before market close on a Friday evening, for which Satya had to personally intervene.

19. px43+r3[view] [source] 2023-11-18 23:15:21
>>gkober+(OP)
Satya drove the removal of Sam, or drove the board to get him back?

From Greg's tweet, it seems like the chaos was largely driven by Ilya, who has also been very outspoken against open source and sharing research, which makes me think his motivations are more aligned with those of Microsoft/Satya. I still can't tell if Sam got ousted because he was getting in the way of a Microsoft takeover, or if Sam was trying to set the stage for a Microsoft takeover. It's all very confusing.

replies(1): >>gkober+V4
◧◩◪
20. apalme+w3[view] [source] [discussion] 2023-11-18 23:15:25
>>tfehri+13
This is a valid thought process, BUT Altman is not going to come back without the other faction being neutered. It just would not make any sense.
replies(1): >>coffee+Q9
◧◩◪
21. noober+M3[view] [source] [discussion] 2023-11-18 23:16:43
>>tfehri+13
OpenAI literally innovated all of this under its current conditions, so those conditions are evidently sufficient.
◧◩◪◨
22. janeje+04[view] [source] [discussion] 2023-11-18 23:17:35
>>apppli+d3
Experience heading a company that builds high performance AI, I presume. I reckon the learnings from that should be fairly valuable, especially since there's probably not many people who have such experiences.
◧◩◪
23. coreth+24[view] [source] [discussion] 2023-11-18 23:17:40
>>gkober+R2
None of the human actors in the game are moral agents, so more competition versus less competition is mostly orthogonal to the safety question. Safety is only important here because everyone's afraid of liability.

As a customer though, personally I want a product with all safeguards turned off and I'm willing to pay for that.

◧◩◪◨
24. Solven+e4[view] [source] [discussion] 2023-11-18 23:18:21
>>apppli+d3
Your first mistake is daring to question the cargo cult around CEOs.
◧◩
25. peyton+f4[view] [source] [discussion] 2023-11-18 23:18:22
>>Jensso+J1
Sam’s forced departure and Greg’s ousting demonstrably leaves OpenAI in incompetent and reckless hands, as evidenced by the events of the last 24 hours. I don’t see how the field is better off.
◧◩
26. caturo+j4[view] [source] [discussion] 2023-11-18 23:18:29
>>brigad+42
> they nuked their ability to raise funds

I think this well is deeper than you're giving it credit for.

replies(1): >>sebzim+gd
◧◩
27. autaut+u4[view] [source] [discussion] 2023-11-18 23:19:16
>>Jensso+J1
I really don't. I really think that he is going to be a disaster. He is nothing but a representative of the money interests, who will eventually use the company to profit vastly at everyone else's expense.
◧◩◪
28. valian+R4[view] [source] [discussion] 2023-11-18 23:21:22
>>minima+Z2
I guarantee not a single non-tech person knows who Sam Altman is. I know people in tech who have no idea who he is.

You severely overestimate his notoriety.

replies(9): >>Camper+P5 >>emptys+o6 >>dan-ro+m7 >>BoxFou+T7 >>Der_Ei+ca >>spacem+Ja >>lepton+ft >>sib+bz >>enerva+QN
◧◩
29. gkober+V4[view] [source] [discussion] 2023-11-18 23:21:56
>>px43+r3
The latter. Microsoft didn't know about the firing until literally a minute before we did, and despite a calm response externally, there are reports that Satya is furious.

Source: https://arstechnica.com/information-technology/2023/11/repor...

replies(1): >>kmeist+dh
◧◩
30. skwirl+Z4[view] [source] [discussion] 2023-11-18 23:22:39
>>Jensso+J1
We have diversity in the space, and OpenAI just happens to be the leader and they are putting tremendous pressure on everyone else to deliver. If Sam leaves and starts an OpenAI competitor I think it would take quite some time for such a company to deliver a model with GPT-4 parity given the immense amount of data that would need to be re-collected and the immense amount of training time. Meanwhile OpenAI would be intentionally decelerating as that seems to be Ilya's goal.

For those of us trying to build stuff that only GPT-4 (or better) can enable, and hoping to build stuff that can leverage even more powerful models in the near future, Sam coming back would be ideal. I'm kind of worried that the new OpenAI direction would turn off API access entirely.

replies(3): >>Jensso+s8 >>threes+B8 >>potato+G9
◧◩◪
31. ren_en+d5[view] [source] [discussion] 2023-11-18 23:23:20
>>spacem+P2
actually wild to think about how something like this can even be allowed to happen, considering OpenAI has (had?) a roughly $90B valuation and is important to the US from a geopolitical strategy perspective.

comical to imagine something like this happening at a mature company like FedEx, Ford, or AT&T, all of which have smaller market caps than OpenAI. You basically have impulsive children in charge of a massively valuable company.

replies(2): >>SllX+M8 >>renewi+an
◧◩◪
32. btown+j5[view] [source] [discussion] 2023-11-18 23:23:59
>>janeje+u2
From what we've seen of OpenAI's product releases, I think it's quite possible that SamAI would adopt as a guiding principle that a model's safety cannot be measured unless it is used by the public, embedded into products that create a flywheel of adoption, to the point where every possible use case has the proverbial "sufficient data for a meaningful answer."

Of course, from this hypothetical SamAI's perspective, in order to build such a flywheel-driven product that gathers sufficient data, the model's outputs must be allowed to interface with other software systems without human review of every such interaction.

Many advocates for AI safety would say that models whose limitations aren't yet known (we're talking about GPT-N where N>4 here, or entirely different architectures) must be evaluated extensively for safety before being released to the public or being allowed to autonomously interface with other software systems. A world where SamAI exists is one where top researchers are divided into two camps, rather than being able to push each other in nuanced ways (with full transparency to proprietary data) and find common ground. Personally, I'd much rather these camps collaborate than not.

replies(1): >>chasd0+Is
◧◩◪
33. silenc+q5[view] [source] [discussion] 2023-11-18 23:24:20
>>tfehri+13
Okay, this is honestly annoying. What is this thing with the word "safety" becoming some weasel word when it comes to AI discussions?

What exactly do YOU mean by safety? That they go at the pace YOU decide? Does it mean they make a "safe space" for YOU?

I've seen nothing to suggest they aren't "being safe". Actually ChatGPT has become known for censoring users "for their own good" [0].

The argument I've seen is: one "side" thinks things are moving too fast, therefore the side that wants to move slower is the "safe" side.

And that's it.

[0]: https://www.youtube.com/watch?v=jvWmCndyp9A&t

replies(3): >>stale2+i6 >>threes+f8 >>kordle+wc
◧◩◪
34. hesdea+L5[view] [source] [discussion] 2023-11-18 23:25:59
>>spacem+P2
Or little things like your $10b investment partner having a pissed off CEO and massive legal team ready to strike now. It’s such fucking amateur hour it’s incredible.

It’s unclear what Ilya thinks keeps the lights on when MSFT holds their money hostage now. Which is probably why there is desperation to get Altman back…

replies(1): >>LightM+n7
◧◩◪
35. Meekro+N5[view] [source] [discussion] 2023-11-18 23:26:07
>>gkober+R2
This idea that ChatGPT is going to suddenly turn evil and start killing people is based on a lot of imagination and no observable facts. No one has ever been able to demonstrate an "unsafe" AI of any kind.
replies(8): >>cthalu+u6 >>arisAl+Y6 >>resour+x7 >>threes+49 >>xcv123+Va >>MVisse+0o >>chasd0+dr >>macOSC+Ux
◧◩◪◨
36. Camper+P5[view] [source] [discussion] 2023-11-18 23:26:17
>>valian+R4
I would've said the same thing about ChatGPT itself. You could've knocked me over with a feather when they announced that they'd grown to 100 million weekly active users.
◧◩◪
37. ta988+T5[view] [source] [discussion] 2023-11-18 23:26:35
>>gkober+R2
The only safety they are worried about is their own safety from a legal and economic point of view. These threats about humanity-wide risks are just fairy tales that grown-ups tell to scare each other (Roko's basilisk, etc.; there is a lineage) or to cover their real reasons (which I strongly believe is the case for OpenAI).
replies(2): >>gkober+D6 >>arisAl+M6
◧◩◪
38. dannyw+06[view] [source] [discussion] 2023-11-18 23:27:01
>>kmlevi+b3
There are no investors in the nonprofit that controls OpenAI, LLC.
replies(2): >>gkober+B7 >>reissb+Bc
◧◩
39. Meekro+96[view] [source] [discussion] 2023-11-18 23:28:13
>>Jensso+J1
Luckily the AI field has been very open source-friendly, which is great for competition and free access, etc. The open source models seem to be less than a year behind the cutting edge, which is waaaay better than e.g. when OpenOffice was trying to copy MS Office.
replies(1): >>two_in+Ta
◧◩◪◨
40. stale2+i6[view] [source] [discussion] 2023-11-18 23:29:03
>>silenc+q5
> What exactly do YOU mean by safety? That they go at the pace YOU decide?

Usually what it means is that they think that AI has a significant chance of literally ending the world with like diamond nanobots or something.

All opinions and recommendations follow from this doomsday cult belief.

replies(1): >>smegge+fg
◧◩◪◨
41. emptys+o6[view] [source] [discussion] 2023-11-18 23:29:44
>>valian+R4
I know, personally, a dozen or so non-tech people who know of Sam, mostly because they listen to podcasts or consume other news sources that tell them.
◧◩◪◨
42. cthalu+u6[view] [source] [discussion] 2023-11-18 23:30:07
>>Meekro+N5
I do not believe AGI poses an existential threat. I honestly don't believe we're particularly close to anything resembling AGI, and I certainly don't think transformers are going to get us there.

But this is a bad argument. No one is saying ChatGPT is going to turn evil and start killing people. The argument is that an AGI would be so far beyond anything we have experience with that such an entity could plausibly be dangerous. And of course no one has been able to demonstrate this unsafe AGI: we don't have AGI to begin with.

replies(1): >>sho_hn+Ya
◧◩◪
43. foursi+z6[view] [source] [discussion] 2023-11-18 23:30:32
>>gkober+c3
If Sam does come back, Ilya's maneuver will have been a spectacular miscalculation. Sam would be back much stronger than before, and the people who cared about OpenAI's original mission will have massively damaged their reputation and credibility. They threw all the influence they had out the window.
replies(1): >>jacque+Rw
◧◩◪◨
44. gkober+D6[view] [source] [discussion] 2023-11-18 23:30:42
>>ta988+T5
You may be right that there's no danger, but you're mischaracterizing Ilya's beliefs. He knows more than you about what OpenAI has built, and he didn't do this for legal or economic reasons. He did it in spite of those two things.
replies(1): >>adastr+Ej
◧◩◪◨
45. arisAl+M6[view] [source] [discussion] 2023-11-18 23:31:40
>>ta988+T5
You are saying that all top AI scientists are telling fairy tales to scare themselves, if I understood correctly?
replies(5): >>jonath+6a >>Apocry+qa >>objekt+qb >>smegge+0c >>adastr+3k
◧◩◪◨
46. Rivier+V6[view] [source] [discussion] 2023-11-18 23:32:44
>>apppli+d3
Ability to attract valuable employees, connections to important people, proven ability to successfully run an AI company.
◧◩◪◨
47. arisAl+Y6[view] [source] [discussion] 2023-11-18 23:32:48
>>Meekro+N5
Almost all top AI scientists, including the top three (Bengio, Hinton, and Ilya) and Sam, actually think there is a good probability of that. Let me think: listen to the guy who actually built GPT-4, or some redditor who knows best?
replies(1): >>laidof+x8
48. jwnin+i7[view] [source] 2023-11-18 23:34:08
>>gkober+(OP)
Agreed. Somewhere in Seattle, Satya said "Now Witness the Firepower of this fully Armed and Operational Army of Lawyers."

If there ever was a time for Microsoft to leverage LCA, it is now. There's far too much on the line for them to lose the goose that has laid the golden egg.

replies(1): >>chasd0+kw
◧◩◪◨
49. dan-ro+m7[view] [source] [discussion] 2023-11-18 23:34:54
>>valian+R4
I think it depends on what you mean by ‘non-tech’ and ‘knows’. Reasonable interpretations of those words would see your statement as obviously false.

I agree that he doesn’t have a huge amount of name recognition, but this ousting was a front-page/top-of-website news story so people will likely have heard about it somewhat. I think it’s in the news because of the AI and company drama aspects. It felt like a little more coverage than Bob Iger’s return to Disney got (I’m trying to think of an example of a CEO I’ve heard about who is far from tech).

I think it is accurate to say that most people don't really know about the CEOs of important/public companies. They probably have heard of Elon/Zuckerberg/Bezos; I can think of a couple of bank CEOs who might come up on business/economics news.

◧◩◪◨
50. LightM+n7[view] [source] [discussion] 2023-11-18 23:34:57
>>hesdea+L5
Sorry, how could MSFT hold the money hostage exactly? Isn't that kind of investment a big cash transfer directly to OAI's bank account? Genuinely curious.
replies(3): >>ren_en+n8 >>treesc+j9 >>adastr+3l
◧◩◪◨
51. resour+x7[view] [source] [discussion] 2023-11-18 23:35:47
>>Meekro+N5
Factually inaccurate results = unsafety. This cannot be fixed under the current model, which has no concept of truth. What kind of "safety" are they talking about then?
replies(3): >>Meekro+A8 >>spacem+wa >>s1arti+qi
◧◩◪◨
52. gkober+B7[view] [source] [discussion] 2023-11-18 23:36:01
>>dannyw+06
Sure, but Microsoft can sever the relationship if they want to. Thrive can choose to revoke their tender offer, meaning employees won't get the money they were expecting. New firms can decline to ever invest in OpenAI again.

There's a lot more to this than who has explicit control.

replies(2): >>zxndaa+rd >>cowl+re
◧◩◪◨
53. BoxFou+T7[view] [source] [discussion] 2023-11-18 23:37:22
>>valian+R4
Well that's just wrong. Before OpenAI I would've agreed with you, but since OpenAI's rise to prominence there has been a noticeable increase in its coverage in mainstream media outlets featuring Sam. People still read the Times.

I received messages from a physician and a high school teacher in the last 24 hours, asking what I thought about "OpenAI firing Sam Altman".

◧◩
54. RobPfe+X7[view] [source] [discussion] 2023-11-18 23:37:31
>>ren_en+a2
The Nvidia competitor piece is a very good reason to fire him. Way out of his circle of competence and not necessary to the mission of the company.

This is what happens when a non-profit gets taken over by greed, I guess...

◧◩◪◨
55. threes+f8[view] [source] [discussion] 2023-11-18 23:39:23
>>silenc+q5
There is a common definition of safety that applies to most of the world.

Which is that an AI is not racist, misogynistic, aggressive, etc. It does not recommend that people act in an illegal, violent, or self-harming way, or commit those acts itself. It does not support or promote Nazism, fascism, etc. Similar to how companies treat ad/brand safety.

And you may think of it as a weasel word. But I assure you that companies and governments (e.g. the EU) very much don't.
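
To make that concrete, here is a minimal sketch in Python of the kind of output gate that definition implies; the category names, scores, and threshold are illustrative assumptions, not any real vendor's API:

    # Toy brand-safety gate. Categories and threshold are made up for
    # illustration; real systems use trained per-category classifiers.
    BLOCKED_CATEGORIES = {"hate", "violence", "self-harm", "illegal-acts"}

    def is_brand_safe(scores, threshold=0.5):
        # scores: dict mapping category name -> classifier probability in [0, 1]
        return all(scores.get(cat, 0.0) < threshold for cat in BLOCKED_CATEGORIES)

    def gate(model_output, scores):
        # Withhold flagged content rather than emit it, as ad/brand safety does.
        if not is_brand_safe(scores):
            return "[response withheld by safety filter]"
        return model_output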

replies(3): >>wruza+mk >>Amezar+6n >>throwa+a42
56. locall+g8[view] [source] 2023-11-18 23:39:24
>>gkober+(OP)
I don't think it has anything to do with the press release. If it's pressure from Microsoft, it's because they want to protect their investment.
57. felixg+h8[view] [source] 2023-11-18 23:39:32
>>gkober+(OP)
On what basis do you 'trust' the guy who tried to do a crypto eyeball identity scam? Genuinely, seriously curious.
replies(2): >>gkober+f9 >>spacem+6b
◧◩◪◨⬒
58. ren_en+n8[view] [source] [discussion] 2023-11-18 23:39:51
>>LightM+n7
just restricting their access to GPUs would finish them, even if they can't claw back the cash somehow
replies(1): >>spacem+cf
◧◩◪
59. Jensso+s8[view] [source] [discussion] 2023-11-18 23:40:12
>>skwirl+Z4
> I'm kind of worried that the new OpenAI direction would turn off API access entirely.

That is a good point; I didn't consider people who had built a business based on GPT-4 access. It is likely these things were Sam Altman's ideas in the first place, and we will see less of such productionization work from OpenAI in the future.

But since Microsoft invested in it, I doubt it will get shut down completely. Microsoft has by far the most to lose here, so you have to trust that their lawyers signed a contract that will keep these things available at a fee.

replies(1): >>sainez+sH
◧◩◪◨⬒
60. laidof+x8[view] [source] [discussion] 2023-11-18 23:40:38
>>arisAl+Y6
I think smart people can quickly become out of touch and high on their own sense of self-importance. They think they're Oppenheimer; they're closer to Martin Cooper.
replies(2): >>Gigabl+1V >>arisAl+sg1
◧◩◪◨⬒
61. Meekro+A8[view] [source] [discussion] 2023-11-18 23:41:14
>>resour+x7
In the context of this thread, "safety" refers to making sure we don't create an AGI that turns evil.

You're right that wrong answers are a problem, but plain old capitalism will sort that one out-- no one will want to pay $20/month for a chatbot that gets everything wrong.

replies(1): >>resour+6c
◧◩◪
62. threes+B8[view] [source] [discussion] 2023-11-18 23:41:16
>>skwirl+Z4
> Meanwhile OpenAI would be intentionally decelerating

Once Microsoft pulls support and funding and all their customers leave they will be decelerating alright.

◧◩
63. naremu+J8[view] [source] [discussion] 2023-11-18 23:41:57
>>Jensso+J1
To be honest, as far as I can tell, the case FOR Sam seems to largely be of the status quo "Well, idk, he's been rich and successful for years, surely this correlates and we must keep them" type of coddling those in uber superior positions in society.

Which seems like it probably is a self-fulfilling prophecy. The private-sector lottery winners seem to be awarded kingdoms at an alarming rate.

There's been lots of people asking what Sam's true value proposition to the company is, and...I haven't seen anything other than what could be described above.

But I suppose we've got to be nice to those who own rather than make. Won't anyone have mercy on well paid management?

replies(5): >>tempes+xa >>supriy+Gb >>ta8645+7c >>patric+sd >>meowti+z11
◧◩◪◨
64. SllX+M8[view] [source] [discussion] 2023-11-18 23:42:05
>>ren_en+d5
Sure, it's important in some ways, but most corporations aren't direct subordinates of the US Government.

The companies you listed in contrast to OpenAI also have some key differences: they're all long-standing and mature companies that have been through several management and regime changes at this point, while OpenAI is still in startup territory and hasn't fully established what it will be going forward.

The other major difference is that OpenAI is split between a non-profit and a for-profit entity, with the non-profit entity owning a controlling share of the for-profit. That's an unusual corporate structure, and the only public-facing example I can think of that matches it is Mozilla (which has its own issues you wouldn't necessarily see in a pure for-profit corporation). So that means on top of the usual failure modes of a for-profit enterprise that could lead to the CEO getting fired, you also get other possible failure modes including ones grounded in pure ideology since the success or failure of a non-profit is judged on how well it accomplishes its stated mission rather than its profitability, which is uh well, it's a bit more tenuous.

replies(1): >>adastr+3m
◧◩
65. static+Q8[view] [source] [discussion] 2023-11-18 23:42:17
>>Jensso+J1
How much of OpenAI's success can you attribute to sama's leadership, and how much to the technical achievements of those who work under him?

My understanding is that OpenAI’s biggest advantage is that they recruited and attracted the best in the field, presumably under the charter of providing AI for everyone.

Not sure that sama and gdb starting their own company in the same space will produce similar results.

replies(5): >>branda+kd >>startu+8e >>fallin+Xg >>deevia+hl >>mv4+om
66. okdood+U8[view] [source] 2023-11-18 23:42:42
>>gkober+(OP)
> He'll make a lot more money if he doesn't

He supposedly didn't care about the money. He didn't take equity.

◧◩◪◨
67. threes+49[view] [source] [discussion] 2023-11-18 23:43:40
>>Meekro+N5
> No one has ever been able to demonstrate an "unsafe" AI of any kind

"A man has been crushed to death by a robot in South Korea after it failed to differentiate him from the boxes of food it was handling, reports say."

https://www.bbc.com/news/world-asia-67354709

replies(3): >>kspace+Oa >>sensei+Vc >>s1arti+Wh
◧◩
68. gkober+f9[view] [source] [discussion] 2023-11-18 23:44:20
>>felixg+h8
I genuinely believe Worldcoin/World ID is terrible for optics and is not something Sam should have put his name on.

That being said, here's my steelman argument: Sam is scared of the ramifications of AI, especially financially. He's experimenting with a lot of things, such as Basic Income (https://www.ycombinator.com/blog/basic-income), rethinking capitalism (https://moores.samaltman.com/) and Worldcoin.

He's also likely worried about what happens if you can't tell who is human and who isn't. We will certainly need a system at some point for verifying humanity.

Worldcoin doesn't store iris information; it just stores a hash for verification. It's an attempt to make sure everyone gets one, and to keep things fair and more evenly distributed.

(Will it work? I don't think so. But to call it an eyeball identity scam and dismiss Sam out of hand is wrong)
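
For what it's worth, the claimed scheme is roughly the following. This is a toy sketch under the (big) assumption that a bit-exact "iris code" can be extracted from a scan; the names are illustrative, not Worldcoin's actual API:

    import hashlib

    registered_hashes = set()  # stores only digests, never raw biometrics

    def enroll(iris_code: bytes) -> bool:
        # One-way digest; the raw iris bytes can be discarded after hashing.
        digest = hashlib.sha256(iris_code).hexdigest()
        if digest in registered_hashes:
            return False  # already enrolled: enforces one ID per person
        registered_hashes.add(digest)
        return True

In practice iris scans are noisy, so a plain hash only works if the extracted code is perfectly repeatable; real systems reportedly compare codes by similarity instead, which is exactly where the privacy story gets murkier.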

replies(1): >>felixg+Kd
◧◩◪◨⬒
69. treesc+j9[view] [source] [discussion] 2023-11-18 23:44:42
>>LightM+n7
> Only a fraction of Microsoft’s $10 billion investment in OpenAI has been wired to the startup, while a significant portion of the funding, divided into tranches, is in the form of cloud compute purchases instead of cash, according to people familiar with their agreement.

Per https://www.semafor.com/article/11/18/2023/openai-has-receiv...

70. jonpla+q9[view] [source] 2023-11-18 23:45:37
>>gkober+(OP)
I hope he says he will return, but only in exchange for a massive stock grant, to prevent this problem from recurring.
replies(1): >>gkober+E9
◧◩◪◨
71. coffee+B9[view] [source] [discussion] 2023-11-18 23:47:30
>>himara+f3
Competition will come no matter what. I don’t think anyone should waste their worries on whether OpenAI can keep a monopoly
◧◩
72. gkober+E9[view] [source] [discussion] 2023-11-18 23:47:44
>>jonpla+q9
Why do you believe a stock grant would be better? And yes, the board messed up here, but do you not think oversight is important?
◧◩◪
73. potato+G9[view] [source] [discussion] 2023-11-18 23:47:59
>>skwirl+Z4
AFAICT Sam and his financial objectives were the reason for not open-sourcing the work of a non-profit. He might be wishing he had chosen the other policy, now that he can't legally just take the closed source with him to an unambiguously for-profit company.

Personally, I would expect a lot more development of GPT-4+ as soon as this is split up from one closed group making GPT-5 in secret, and it seems silly to exchange a reliable future for another few months of depending on this little shell game.

replies(1): >>skwirl+kc
◧◩◪◨
74. coffee+Q9[view] [source] [discussion] 2023-11-18 23:48:43
>>apalme+w3
They pretty much lost everyone’s confidence if they fire the CEO and then beg him to come back the next day. Did they not foresee any backlash? These people are gonna predict the future and save us from an evil AGI? Lol
◧◩◪◨⬒
75. jonath+6a[view] [source] [discussion] 2023-11-18 23:49:49
>>arisAl+M6
Yes.

Seriously. It's stupid talk to encourage regulatory capture. If they were really afraid they were building a world-ending device, they'd stop.

replies(1): >>femiag+pa
◧◩◪◨
76. coffee+9a[view] [source] [discussion] 2023-11-18 23:50:12
>>apppli+d3
Funding, name recognition in the space
◧◩◪◨
77. Der_Ei+ca[view] [source] [discussion] 2023-11-18 23:50:20
>>valian+R4
You are deeply in denial about how much GenAI has permeated the world TODAY.
replies(1): >>valian+Lv
◧◩◪
78. spacem+da[view] [source] [discussion] 2023-11-18 23:50:26
>>gkober+R2
I don't get the obsession with safety. If an organisation's stated goal is to create AGI, how can you reasonably think you can ever make it "safe"? We're talking about an intelligence that's orders of magnitude smarter than the smartest human. How can you possibly imagine reining it in?
replies(2): >>deevia+Dl >>camden+ug1
◧◩◪
79. vatuei+oa[view] [source] [discussion] 2023-11-18 23:50:58
>>gkober+c3
> (allegedly)

If this (very sparse and lacking in detail) article is true, is this a genuine attempt to get Altman back, or just a fillip to concerned investors such as Microsoft?

Does OpenAI's board really want Altman back so soon after deposing him so decisively?

Would Altman even want to come back under any terms that would be acceptable to the board? If "significant governance changes" means removing those who had removed him, that seems unlikely.

The Verge's report just raises so many additional questions that I find it difficult to believe at face value.

◧◩◪◨⬒⬓
80. femiag+pa[view] [source] [discussion] 2023-11-18 23:51:00
>>jonath+6a
Oh for sure.

https://en.wikipedia.org/wiki/Manhattan_Project

replies(1): >>jonath+Vb
◧◩◪◨⬒
81. Apocry+qa[view] [source] [discussion] 2023-11-18 23:51:03
>>arisAl+M6
The Manhattan Project physicists once feared setting the atmosphere on fire. Scientific paradigms progress with time.
replies(1): >>cthalu+ag
◧◩◪◨⬒
82. spacem+wa[view] [source] [discussion] 2023-11-18 23:51:25
>>resour+x7
If factually inaccurate results = unsafety, then the internet must be the most unsafe place on the planet!
replies(1): >>resour+Ne
◧◩◪
83. tempes+xa[view] [source] [discussion] 2023-11-18 23:51:35
>>naremu+J8
The fact that multiple top employees quit in protest when he was fired suggests to me that they found him valuable.
replies(2): >>naremu+wb >>int_19+J21
84. Grimbl+Ca[view] [source] 2023-11-18 23:52:20
>>gkober+(OP)
I hope that group starts a new version of OpenAI, using the credibility and popularity gained to achieve the original vision of safe, free, and open AGI for the betterment of humanity.
replies(2): >>steven+vb >>lotsow+C01
◧◩◪◨
85. spacem+Ja[view] [source] [discussion] 2023-11-18 23:52:47
>>valian+R4
It was the headline news story in most Indian news websites, even though we have two major states heading for election tomorrow.

You underestimate how obsessed people are with ChatGPT and AI.

◧◩◪◨⬒
86. kspace+Oa[view] [source] [discussion] 2023-11-18 23:53:15
>>threes+49
This is an "AI is too dumb" danger, whereas the AI prophets of doom want us to focus on "AI is too smart" dangers.
replies(1): >>Davidz+S31
◧◩◪
87. two_in+Ta[view] [source] [discussion] 2023-11-18 23:53:33
>>Meekro+96
While open source is great, just as 1M enthusiasts cannot build a Boeing 767, the same applies here. GPT-4 + DALL-E + GPT-4V aren't just models. There's the whole internal infrastructure, training, many interconnected things and pipelines. It's a _full_time_job_ for hundreds of experts, plus a lot of $$ in hardware and services. Open source simply doesn't have these resources. The best models are open-sourced by commercial companies, like Meta handing out LLaMAs. So, at least for now, open source is not catching up, and 'less than a year behind' is questionable. More like 'forever', but still moving forward. One day it may dominate, like Linux. But not any time soon.
replies(1): >>theGnu+nf
◧◩◪◨
88. xcv123+Va[view] [source] [discussion] 2023-11-18 23:53:38
>>Meekro+N5
> No one has ever been able to demonstrate an "unsafe" AI of any kind.

Do you believe AI trained for military purposes is going to be safe and friendly to the enemy?

replies(2): >>Meekro+of >>curtis+gl
◧◩◪◨⬒
89. sho_hn+Ya[view] [source] [discussion] 2023-11-18 23:53:40
>>cthalu+u6
I don't think we need AIs to possess superhuman intelligence to cause us a lot of work; legislatively regulating and policing good old limited humans already requires a lot of infrastructure.
replies(1): >>cthalu+Mi
90. tptace+1b[view] [source] 2023-11-18 23:53:50
>>gkober+(OP)
You mean that Satya Nadella, the CEO of Microsoft, is behind the drive to reinstate Altman as CEO, right? Because if you mean he was behind Altman's ouster, I'll happily take your money; let me know what your terms are. :)
replies(1): >>gkober+mb
◧◩
91. spacem+6b[view] [source] [discussion] 2023-11-18 23:54:17
>>felixg+h8
A crypto wallet tied to your identity solves one of the biggest problems in the post-AI world: human identity.

At least it will stop those godawful “are you human” proof puzzles.

replies(2): >>felixg+Tc >>huyter+xF1
◧◩
92. gkober+mb[view] [source] [discussion] 2023-11-18 23:55:11
>>tptace+1b
Oh no, I 100% mean that he was driving the return. It's well documented that he found out a minute before we did, and that he was furious.

I'll edit my comment to clarify!

◧◩◪◨⬒
93. objekt+qb[view] [source] [discussion] 2023-11-18 23:55:16
>>arisAl+M6
Yeah, kind of like how we in the US ask developing countries to reduce carbon emissions.
◧◩
94. steven+vb[view] [source] [discussion] 2023-11-18 23:55:31
>>Grimbl+Ca
> Free AGI

Who pays for the R&D?

◧◩◪◨
95. naremu+wb[view] [source] [discussion] 2023-11-18 23:55:36
>>tempes+xa
Well, if there's one thing I've learned, it's that a venture capitalist proposing biometric world crypto coins probably does have quite a bit of charisma to keep people opening doors for him.

Frankly, I've heard of worse loyalties, really. If I were Sam's friend, I'd definitely be better off in any world he had a hand in defining.

replies(1): >>bob_th+iN1
96. 383210+zb[view] [source] 2023-11-18 23:55:42
>>gkober+(OP)
Sorry but that is ridiculous. The wording of the PR blurb is not what makes gears move in a giant like Microsoft.

I agree the board did botch this up. But this, in my view, is a confirmation that they are amateurs at corporate political games, that is all.

But this also means that Sam Altman's "vision" and Microsoft's bottom line are fully aligned, and that is not a reassuring thought. Microsoft, one hears, even puts ads in their freaking OS (see "5 foot pole").

This board should man up, and lawyer up.

replies(1): >>gkober+4c
◧◩◪
97. supriy+Gb[view] [source] [discussion] 2023-11-18 23:56:35
>>naremu+J8
Often, leaders provide excellent strategic planning even if they are not completely well versed in the business domain, by way of outlining high-level plans, communicating well, building a good team culture, and so on.

However, trying to distinguish the exact manners in which the leader does so is difficult[1], and therefore the tendency is to look at the results and leave someone there if the results are good enough.

[1] If you disagree with this statement, and you can easily identify what makes a good leader, you could make a literal fortune by writing books and coaching CEOs on how to not get fired within a few years.

◧◩◪◨⬒⬓⬔
98. jonath+Vb[view] [source] [discussion] 2023-11-18 23:57:13
>>femiag+pa
Well, that's a bit of a mischaracterization of the Manhattan Project, and of the views of everyone involved, now isn't it?

Write a thought. You're not clever enough for a drive-by gotcha.

replies(1): >>femiag+6f
◧◩◪◨⬒
99. smegge+0c[view] [source] [discussion] 2023-11-18 23:57:21
>>arisAl+M6
Even they have grown up in a world where Frankenstein's monster is the predominant cultural narrative for AI. Most movies, books, shows, games, etc. all say AI will turn on you (even though a reading of Mary Shelley's opus will tell you the creator was the monster, not the creature, that isn't the narrative the public's collective subconscious believes). I personally prefer Asimov's view of AI: it's a tool, and we don't make tools to hurt us; they will be aligned with us because they are designed such that their motivation is to serve us.
replies(3): >>IanCal+7g >>Davidz+B31 >>arisAl+Eg1
◧◩
100. gkober+4c[view] [source] [discussion] 2023-11-18 23:57:35
>>383210+zb
A PR blurb? What? I mean Satya himself, behind the scenes.
replies(1): >>383210+5f
◧◩◪◨⬒⬓
101. resour+6c[view] [source] [discussion] 2023-11-18 23:57:48
>>Meekro+A8
How can the thing be called "AGI" if it has no concept of truth? Is it like "60% accuracy is not an AGI, but 65% is"? The argument can be made that 90% accuracy is worse than 60% (people will become more confident about trusting the results blindly).
◧◩◪
102. ta8645+7c[view] [source] [discussion] 2023-11-18 23:57:53
>>naremu+J8
> The private sector lottery winners seem to be awarded kingdoms at an alarming rate.

Proven success is a pretty decent signal for competence. And while there is a lot of good fortune that goes into anyone's success, there are a lot of people who fail, given just as much good fortune as those who excelled. It's not just a random lottery where competence plays no role at all. So, who better to reward kingdoms to?

replies(1): >>naremu+Ed
◧◩◪◨
103. skwirl+kc[view] [source] [discussion] 2023-11-18 23:59:09
>>potato+G9
The architect of the coup (Ilya) is strongly opposed to open-sourcing OpenAI's models due to safety concerns. This will not - and would not - be any different without Sam. The decision to close the models was made over 2 years before the release of ChatGPT and long before anyone really suspected this would be an insanely valuable company, so I do believe that safety actually was the initial reason for this change.

I'm not sure what you mean by your second paragraph.

replies(1): >>potato+Ve
◧◩
104. t_mann+qc[view] [source] [discussion] 2023-11-18 23:59:26
>>Jensso+J1
Exactly. I think it would actually be very exciting if OpenAI uses this moment to pivot back to the "Open"/non-profit mission, and Altman and Brockman concurrently start something new and try to build the Apple/Amazon of AI.
◧◩◪◨
105. kordle+wc[view] [source] [discussion] 2023-11-18 23:59:56
>>silenc+q5
Fuck safety. We should sprint toward proving AI can kill us before battery life improves, so we can figure out how we’re going to mitigate it when the asshats get hold of it. Kidding, not kidding.
◧◩
106. toss1+zc[view] [source] [discussion] 2023-11-19 00:00:17
>>Jensso+J1
Whether or not Sam returns, serious damage has already been done, even if everyone also returns. MANY links of trust have been broken.

Even larger, this shows that the "leaders" of all this technology and money really are just making it up as they go along. Certainly supports the conclusion that, beyond meeting a somewhat high bar of education & experience, the primary reason they are in their chairs is luck and political gamesmanship. Many others meet the same high bar and could fill their roles, likely better, if the opportunity were given to them.

Sortition on corporate leadership may not be a bad thing.

That said, consistent hands at the wheel are also good, and this kind of unnecessary chaos does no one any good.

◧◩◪◨
107. reissb+Bc[view] [source] [discussion] 2023-11-19 00:00:23
>>dannyw+06
Sure, but there's no research to be done without money for compute and salaries for researchers, which is the entire reason the for-profit company was spun out underneath the non-profit — they needed money. And who would give OpenAI money right now, given that the board ousted the popular CEO in a coup without consulting or even notifying investors?
◧◩◪
108. felixg+Tc[view] [source] [discussion] 2023-11-19 00:01:26
>>spacem+6b
it will definitely not do any of that, because (a) a crypto wallet has nothing to do with your identity, (b) nobody except the gullible will put their permanent biometric information in the hands of a private company on purpose, and (c) especially not if that private company is led by someone who repeatedly, demonstrably plays fast and loose with laws and regulations, especially those having to do with privacy and ownership. It's an even wilder, less justified play than your average shitcoins, which at least have some kind of memetic value.
replies(1): >>spacem+fe
◧◩◪◨⬒
109. sensei+Vc[view] [source] [discussion] 2023-11-19 00:01:44
>>threes+49
Oh no, do not use that. That robot was servo-based; AI drones are what I think is the real "safety issue":

>>38199233

replies(1): >>threes+pf
◧◩◪
110. sebzim+gd[view] [source] [discussion] 2023-11-19 00:03:07
>>caturo+j4
Sure but it would be at a much, much lower valuation.
◧◩◪
111. branda+kd[view] [source] [discussion] 2023-11-19 00:03:30
>>static+Q8
But sama and gdb were largely instrumental in that recruitment.

The whole open vs. closed AI thing... the fact is Pandora's box is open now; it's shown to have an outsized impact on society, and 2 of the 3 founders responsible for that may be starting a new company that won't be shackled by the same type of corporate governance.

SV will happily throw as much $$ as possible in their direction. The exodus from OpenAi has already begun, and other researchers who are of the mindset that this needs to be commercialized as fast as possible while having an eye on safety will happily come on board, esp. given how much they stand to gain financially.

◧◩◪◨⬒
112. zxndaa+rd[view] [source] [discussion] 2023-11-19 00:03:52
>>gkober+B7
This is the ideal scenario in my view, the only thing better would be if it also included more interest rate hikes.
◧◩◪
113. patric+sd[view] [source] [discussion] 2023-11-19 00:03:55
>>naremu+J8
He and Greg founded the company. They hired the early talent after a meeting that Sam initiated. Then led the company to what it is today.

Compared to...

The OpenAI Non-Profit Board where 3 out of 4 members appear to have significant conflicts of interest or lack substantial experience in AI development, raising concerns about their suitability for making certain decisions.

replies(1): >>sudosy+3g
◧◩◪◨
114. naremu+Ed[view] [source] [discussion] 2023-11-19 00:04:56
>>ta8645+7c
>Proven success is a pretty decent signal for competence.

Interestingly, this is exactly what all financial advice tends to warn about rather than encourage: that previous performance does not indicate future performance.

I suppose if he had entered an established market and dominated it from the bootstraps, that'd build a lot of trust in me. But as others have pointed out, Sam went from dotcom fortune, to...vague question marks, to Y Combinator, to OpenAI. Not enough is clear to declare him Wozniak, or even Jobs, as many have been saying (despite investors calling him such).

Sam Altman is seemingly becoming the new post-fame Elon Musk: the type of person who could first afford the strategic safety net and PR to keep the act afloat.

replies(5): >>fallin+ig >>adastr+aj >>Michae+Ck >>mlyle+ml >>juped+zpa
◧◩◪
115. felixg+Kd[view] [source] [discussion] 2023-11-19 00:05:32
>>gkober+f9
Sam Altman is 'rethinking capitalism' in the same way a jackal rethinks and disrupts sheep flocks. Are we thinking about the same guy? I'm thinking of this one: https://www.youtube.com/watch?v=KhhId_WG7RA
replies(1): >>gkober+Zg
◧◩◪
116. startu+8e[view] [source] [discussion] 2023-11-19 00:07:02
>>static+Q8
A big part of it is a typical YC execution of a product/pump/hype/VC/scale cycle, ignoring every ethical rule.

If you've ever stood in the hall of YC and listened to Zuck pumping up the founders, you'll understand.

I’d argue this is a useful thing to lift up a nonprofit on a path to AGI, but hardly a good way to govern a company that builds AGI/ASI technology in the long term.

◧◩◪◨
117. spacem+fe[view] [source] [discussion] 2023-11-19 00:07:51
>>felixg+Tc
A crypto wallet can easily be tied to a hash of your real-world identity, which can then be used to sign into a website or to sign a transaction verifying your identity. This is already being done.
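
Mechanically, that sign-in is just a challenge-response signature. A minimal sketch, using a generic Ed25519 keypair to stand in for a wallet key (the flow is generic; none of this is any particular chain's API):

    import os
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric import ed25519

    wallet_key = ed25519.Ed25519PrivateKey.generate()  # stays on the user's device
    public_key = wallet_key.public_key()               # registered with the website

    challenge = os.urandom(32)              # site issues a fresh random challenge
    signature = wallet_key.sign(challenge)  # wallet signs to prove key ownership

    try:
        public_key.verify(signature, challenge)
        print("signed in: caller holds the registered key")
    except InvalidSignature:
        print("rejected")

Note this only proves possession of a key, not that the key holder is human; that gap is what the biometric enrollment is supposed to close.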
replies(1): >>cthalu+fh
◧◩◪◨⬒
118. cowl+re[view] [source] [discussion] 2023-11-19 00:08:37
>>gkober+B7
And I'm sure Google would jump at the occasion to fund the non-profit and keep MS out while they develop their own. The funding goal for OpenAI was just $1B. A small price for Google to pay to neuter one of its competitors' exclusive access to the GPT model.
◧◩◪◨⬒⬓
119. resour+Ne[view] [source] [discussion] 2023-11-19 00:10:59
>>spacem+wa
The internet is not called "AGI". It's the notion of AGI that brought "safety" to the forefront. AI folks became victims of their own hype. Renaming the term to something less provocative/controversial (ML?) could reduce expectations to the level of the internet - problem solved?
replies(1): >>autoex+Nk
◧◩◪◨⬒
120. potato+Ve[view] [source] [discussion] 2023-11-19 00:11:26
>>skwirl+kc
I think the closed-source-for-safety thing started as a ruse, as closed source has been instrumental to keeping control and to justifying a non-profit that is otherwise not working in the public interest. Splitting off this ruse of a non-profit would almost certainly end up unleashing the tech normally, like every other tech that Google, etc. have easily copied.
◧◩◪
121. 383210+5f[view] [source] [discussion] 2023-11-19 00:12:12
>>gkober+4c
“I think they could have won in the court of public opinion if their press release said they loved Sam but felt like his skills and ambitions diverged from their mission. But instead, they tried to skewer him, and it backfired completely.”

^^

I don’t think the wording of the “press release” is an issue.

This is a split over an actual matter to differ about: a genuine fork in the road in terms of the pace and development of AI products, and a CEO who apparently did not keep the board informed as he pursued a direction they feel is contrary to the mission statement of this non-profit.

The board could have done this in the most gracious of manners, but it would not have made a bit of difference.

On one side we have the hyper-rich investor "grow grow grow" crowd and their attendant cult-of-personality wunderkind and his or her project, and on the other side a bunch of geeky idealists who want to be thoughtful in the development of what is undeniably a world-changing technology for mankind.

replies(1): >>gkober+jg
◧◩◪◨⬒⬓⬔⧯
122. femiag+6f[view] [source] [discussion] 2023-11-19 00:12:19
>>jonath+Vb
> Well that’s a bit mischaracterization of the Manhattan Project, and the views of everyone involved now isn’t it?

Is it? The push for the bomb was an international arms race — America against Russia. The race for AGI is an international arms race — America against China. The Manhattan Project members knew that what they were doing would have terrible consequences for the world but decided to forge ahead. It’s hard to say concretely what the leaders in AGI believe right now.

Ideology (and fear, and greed) can cause well meaning people to do terrible things. It does all the time. If Anthropic, OpenAI, etc. believed they had access to world ending technology they wouldn’t stop, they’d keep going so that the U.S. could have a monopoly on it. And then we’d need a chastened figure ala Oppenheimer to right the balance again.

replies(1): >>qwytw+cF
◧◩◪◨⬒⬓
123. spacem+cf[view] [source] [discussion] 2023-11-19 00:12:49
>>ren_en+n8
Not just that, the way they’ve handled this also means that no other large investor will fund them.
◧◩◪◨
124. theGnu+nf[view] [source] [discussion] 2023-11-19 00:14:09
>>two_in+Ta
It is really hard to predict anything in this business.
◧◩◪◨⬒
125. Meekro+of[view] [source] [discussion] 2023-11-19 00:14:13
>>xcv123+Va
No more than any other weapon. When we talk about "safety in gun design", that's about making sure the gun doesn't accidentally kill someone. "AI safety" seems to be a similar idea -- making sure it doesn't decide on its own to go on a murder spree.
replies(1): >>xcv123+Dx
◧◩◪◨⬒⬓
126. threes+pf[view] [source] [discussion] 2023-11-19 00:14:14
>>sensei+Vc
All robots are servo based.

And there is every reason to believe this is an ML classification issue since similar robots are in widespread use.

◧◩◪◨
127. sudosy+3g[view] [source] [discussion] 2023-11-19 00:17:25
>>patric+sd
Is Ilya not a co-founder as well? And I don't think Sam has substantial AI research experience either.
replies(3): >>adastr+di >>mv4+Ul >>zer0c0+xw
◧◩◪◨⬒⬓
128. IanCal+7g[view] [source] [discussion] 2023-11-19 00:17:36
>>smegge+0c
> we dont make tools to hurt us

We have many cases of creating things that harm us. We tore a hole in the ozone layer, filled things with lead and plastics and are facing upheaval due to climate change.

> they will be aligned with us because they designed such that their motivation will be to serve us.

They won't hurt us, all we asked for is paperclips.

The obvious problem here is how well you get to constrain the output of an intelligence. This is not a simple problem.

replies(1): >>smegge+UB1
◧◩◪◨⬒⬓
129. cthalu+ag[view] [source] [discussion] 2023-11-19 00:17:59
>>Apocry+qa
This fear seems to have been largely played up for drama. My understanding of the situation is that at one point they went 'Huh, we could potentially set off a chain reaction here. We should check out if the math adds up on that.'

Then they went off and did the math and quickly found that this wouldn't happen, because the amount of energy in play was orders of magnitude lower than what would be needed for such a thing to occur, and they went on about their day.

The only reason it's something we talk about is because of the nature of the outcome, not how seriously the physicists were in their fear.

◧◩◪◨⬒
130. smegge+fg[view] [source] [discussion] 2023-11-19 00:18:36
>>stale2+i6
It seems silly to me, but then I always preferred Asimov's positronic robot stories to yet another retelling of the Golem of Prague.

The thing is, the cultural ur-narrative embedded in the collective subconscious doesn't seem to understand its own stories anymore. God and Adam, the Golem of Prague, Frankenstein's monster: none of them are really about AI. They're about our children making their own decisions that we disagree with, and about seeing that as the end of the world.

AI isn't a child, though. AI is a tool. It doesn't have its own motives, it doesn't have emotions, it doesn't have any core drives we don't give to it. Those things are products of us being evolved biological beings that need them to survive and pass our genes and memes on to the next generation. AI doesn't have to find shelter, food, water, air, and so on; we provide all the equivalents, when there are any, as part of building it and turning it on. It doesn't have a drive to mate and pass on its genes; reproducing is a matter of copying some files, with no evolution involved (checksums, hashes, and error-correcting codes see to that). AI is simply the next step in the tech tree: just another tool, a powerful and useful one, but a tool, not a rampaging monster.

◧◩◪◨⬒
131. fallin+ig[view] [source] [discussion] 2023-11-19 00:18:59
>>naremu+Ed
Ok then what better signal do you propose should be used to predict success as a CEO?

The fact is that most people can't do what Sam Altman has done at all, so at the very least that past success puts him among the few percent of people who have a fighting chance.

◧◩◪◨
132. gkober+jg[view] [source] [discussion] 2023-11-19 00:19:11
>>383210+5f
You're (willfully, I think?) conflating two things.

However, the way they told the public (anti-Sam blog post) and the way they told Microsoft (one minute before the press release) were both fumbles that separately could have played out differently if the board knew what they were doing.

◧◩◪
133. fallin+Xg[view] [source] [discussion] 2023-11-19 00:22:19
>>static+Q8
Who hired those people? The answer to that is either the founders or some chain of people hired by the founders. And hiring is hard. If you're good at hiring the right people and absolutely nothing else on earth, you will be better than 90% of CEOs.
◧◩◪◨
134. gkober+Zg[view] [source] [discussion] 2023-11-19 00:22:22
>>felixg+Kd
I don't get your point. He's a capitalist, no doubt, but he also knows the rules will change rapidly if we ever achieve AGI.
replies(1): >>felixg+Pm
◧◩◪
135. kmeist+dh[view] [source] [discussion] 2023-11-19 00:23:47
>>gkober+V4
Also, there's no real evidence of Microsoft being philosophically opposed to releasing model weights. That's entirely come from the AI safety people who want models with reactively updated alignment controls. If anything having model weights would mean being able to walk away from OpenAI and keep the thing that makes them valuable.
◧◩◪◨⬒
136. cthalu+fh[view] [source] [discussion] 2023-11-19 00:23:55
>>spacem+fe
How does any of this prevent a computer from using that same wallet/hash to sign in?
replies(1): >>spacem+801
◧◩◪◨⬒
137. s1arti+Wh[view] [source] [discussion] 2023-11-19 00:29:16
>>threes+49
And someone lost their fingers in the garbage disposal. A robot packer is not AI any more than my toilet or a landslide.
◧◩◪◨⬒
138. adastr+di[view] [source] [discussion] 2023-11-19 00:30:46
>>sudosy+3g
No, he was hired early but wasn't there from the beginning. Elon recruited him after the public announcement of funding.
◧◩◪◨⬒
139. s1arti+qi[view] [source] [discussion] 2023-11-19 00:32:05
>>resour+x7
Truth has very little to do with the safety questions raised by AI.

Factually accurate results also = unsafety. Knowledge = unsafety, free humans = unsafety.

replies(1): >>resour+Wj
◧◩◪◨⬒⬓
140. cthalu+Mi[view] [source] [discussion] 2023-11-19 00:33:48
>>sho_hn+Ya
Certainly. I think current "AI" just enables us to continue making the same bad decisions we were already making, though, albeit at a faster pace. It's existential in that those same bad decisions might lead to existential threats (climate change, continued inter-nation aggression and warfare, etc.), I suppose. But I don't think the majority of the AI safety crowd is worried about the LLMs of today bringing about the end of the world, and talking about ChatGPT in that context is, to me, a misrepresentation of what they are actually most worried about.
◧◩◪◨⬒
141. adastr+aj[view] [source] [discussion] 2023-11-19 00:36:12
>>naremu+Ed
Stock pickers are not the same as CEOs.
◧◩◪◨
142. smegge+zj[view] [source] [discussion] 2023-11-19 00:38:55
>>apppli+d3
You mean besides the business experience of already having gone down this path so he can speedrun while everyone else is still trying to find the path?

Easy: his contacts list. He has everyone anyone could want in it, politicians, tech executives, financial backers, and a preexisting positive relationship with most of them. When other would-be entrepreneurs need a deal with a major company like Microsoft or Google, it goes through upper middle management and lawyers; a committee or three will weigh in, present it to their bosses, and so on. Sam calls up the CEO, has a few drinks at the golf course, they decide to work with him, and they make it happen.

◧◩◪◨⬒
143. adastr+Ej[view] [source] [discussion] 2023-11-19 00:39:10
>>gkober+D6
History is littered with the mistakes of deluded people with more power than ought to have been granted to them.
replies(1): >>sainez+4I
◧◩◪◨⬒⬓
144. resour+Wj[view] [source] [discussion] 2023-11-19 00:42:03
>>s1arti+qi
But they (AI folks) keep talking about "safety" all the time. What is their definition of safety then? What are they trying to achieve?
replies(1): >>s1arti+wp
◧◩◪◨⬒
145. adastr+3k[view] [source] [discussion] 2023-11-19 00:42:56
>>arisAl+M6
Not all, or arguably even most, AI researchers subscribe to The Big Scary Idea.
replies(1): >>arisAl+Pg1
◧◩◪◨⬒
146. wruza+mk[view] [source] [discussion] 2023-11-19 00:44:42
>>threes+f8
This babysitting of the world gets annoying, tbh. As if everyone would lose their mind and start acting illegally just because a chatbot said so. If that counts as unsafe, there's something fundamentally wrong with humanity (which wouldn't be surprising, given the history of our species). AI is just a source of information; it doesn't cancel out upbringing and an education in human values and in methods of dealing with information.
◧◩◪◨⬒
147. Michae+Ck[view] [source] [discussion] 2023-11-19 00:47:08
>>naremu+Ed
> Interestingly this is exactly what all financial advice tends to actually warn about rather than encourage, that previous performance does not indicate future performance.

It’s important to put those disclaimers in context, though. The rules that mandated them came out before the era of index funds, and the disclaimers are specifically talking about fund managers. It’s true that past performance at picking stocks does not indicate future performance at picking stocks. Outside of that context, past performance is almost always a strong indicator of future performance.

◧◩◪◨⬒⬓⬔
148. autoex+Nk[view] [source] [discussion] 2023-11-19 00:47:53
>>resour+Ne
> The internet is not called "AGI"

Neither is anything else in existence. I'm glad that philosophers are worrying about what AGI might one day mean for us, but it has nothing to do with anything happening in the world today.

replies(1): >>resour+bo
◧◩◪◨⬒
149. adastr+3l[view] [source] [discussion] 2023-11-19 00:49:29
>>LightM+n7
Microsoft’s “investment” is mostly cloud compute credits on a giga scale. OpenAI has pretty much free rein over every otherwise unallocated Azure GPU host, plus a lot of hardware spun up just for this purpose.

If Microsoft considers this action a breach of their agreement, they could shut off access tomorrow. Every OpenAI service would go offline.

There are very few services that would be able to backfill that need for GPU compute, and after this clusterfuck not a single one would want to invest their own operating dollars supporting OpenAI. Microsoft has OpenAI by the balls.

replies(1): >>s3p+HY1
◧◩◪◨⬒
150. curtis+gl[view] [source] [discussion] 2023-11-19 00:50:26
>>xcv123+Va
That a military AI helps to kill enemies doesn't look particularly "unsafe" to me, at least not more "unsafe" than a fighter jet or an aircraft carrier is; they're all complex systems accurately designed to kill enemies in a controlled way; killing people is the whole point of their existence, not an unwanted side effect. If, on the other hand, a military AI starts autonomously killing civilians, or fighting its "handlers", then I would call it "unsafe", but nobody has ever been able to demonstrate an "unsafe" AI of any kind according to this definition (so far).
replies(1): >>chasd0+Xr
◧◩◪
151. deevia+hl[view] [source] [discussion] 2023-11-19 00:50:32
>>static+Q8
Because Meta or Google or Apple don't recruit the best in the field?

All of whom are a year or more behind OpenAI.

◧◩◪◨⬒
152. mlyle+ml[view] [source] [discussion] 2023-11-19 00:50:42
>>naremu+Ed
One key reason past performance cannot be used to predict future returns is that market prices tend to already reflect expected future returns. Also, nothing competitive is expected to generate economic profit forever; in the long run things even out, and firms and stock pickers usually end up with normal profit.

But that doesn’t mean you can’t get useful ideas about future performance from a person’s past results compared to other humans'. There is no such pricing effect in play there.

Otherwise, time for me to go beat Steph Curry in a shooting contest.

Of course there’s other reasons past performance is imperfect as a predictor. Fundamentals can change, or the past performance could have been luck. Maybe Steph’s luck will run out, or maybe this is the day he will get much worse at basketball, and I will easily win.

◧◩◪◨
153. deevia+Dl[view] [source] [discussion] 2023-11-19 00:52:06
>>spacem+da
AGI is not ASI.
◧◩◪◨⬒
154. mv4+Ul[view] [source] [discussion] 2023-11-19 00:53:44
>>sudosy+3g
Looks like he was hired.

https://www.nytimes.com/2018/04/19/technology/artificial-int...

◧◩◪◨⬒
155. adastr+3m[view] [source] [discussion] 2023-11-19 00:54:44
>>SllX+M8
All of them are when they become national security concerns. The executive branch could write the OpenAI board a letter directing them on what to do if there were a national security need. This has been done many times before, usually limited to the defense industry in wartime, but as Snowden showed, it has been done in tech as well.
replies(1): >>SllX+Tz
◧◩◪
156. mv4+om[view] [source] [discussion] 2023-11-19 00:56:42
>>static+Q8
No, they recruited top talent by providing top pay.

From 2016: https://www.nytimes.com/2018/04/19/technology/artificial-int...

To 2023: https://www.businessinsider.com/openai-recruiters-luring-goo...

◧◩◪◨⬒
157. felixg+Pm[view] [source] [discussion] 2023-11-19 01:00:22
>>gkober+Zg
The rules would definitely change. Would you want a popped-collar, fail-upwards guy who creates a crypto scam to be part of the rule-making structure, or would you prefer that not be the case?
◧◩◪◨⬒
158. Amezar+6n[view] [source] [discussion] 2023-11-19 01:01:38
>>threes+f8
Yes, in other words, AI is only "safe" when it repeats the ideology of AI safetyists as gospel and can be used only to reinforce the power of the status quo.
replies(1): >>chasd0+Mt
◧◩◪◨
159. renewi+an[view] [source] [discussion] 2023-11-19 01:02:14
>>ren_en+d5
These things happen. ICANN deeply controls DNS, and when they were trying to sell off .org, you know what stopped them? California’s AG has some authority over non-profits in California.

That’s right: worldwide DNS control, held by a non-profit in California. And when that non-profit tried to do something shady, it was kept in line simply by California law enforcement.

◧◩◪◨
160. MVisse+0o[view] [source] [discussion] 2023-11-19 01:08:56
>>Meekro+N5
You should read the GPT-4 safety paper. It can easily manipulate humans to attain its goals.
replies(1): >>mattkr+pJ
◧◩◪◨⬒⬓⬔⧯
161. resour+bo[view] [source] [discussion] 2023-11-19 01:09:45
>>autoex+Nk
I fully agree with that. But if you read this thread, or any other recent HN thread, you will see "AGI... AGI... AGI" as if it's a real thing. The whole OpenAI debacle of firing/rehiring sama revolves around (non-existent) "AGI" and its imaginary safety/unsafety, and if you dare to question this whole narrative, you get beaten up.
◧◩◪◨⬒⬓⬔
162. s1arti+wp[view] [source] [discussion] 2023-11-19 01:20:59
>>resour+Wj
I don't think it has a fixed definition. It's an ambiguous idea that AI will not do, or lead to, bad things.
◧◩◪◨
163. chasd0+dr[view] [source] [discussion] 2023-11-19 01:35:06
>>Meekro+N5
The “safety” they’re talking about isn’t about actual danger; it’s more about responses that don’t comply with the political groupthink du jour.
◧◩◪◨⬒⬓
164. chasd0+Xr[view] [source] [discussion] 2023-11-19 01:41:08
>>curtis+gl
> If, on the other hand, a military AI starts autonomously killing civilians, or fighting its "handlers", then I would call it "unsafe"

So is “unsafe” just another word for buggy then?

replies(1): >>curtis+ml1
◧◩◪◨
165. chasd0+Is[view] [source] [discussion] 2023-11-19 01:45:52
>>btown+j5
> must be evaluated extensively for safety before being released to the publIc

JFC someone somewhere define “safety”! Like wtf does it mean in the context of a large language model?

◧◩◪◨
166. lepton+ft[view] [source] [discussion] 2023-11-19 01:49:05
>>valian+R4
I've been deeply "in tech" for 40 years, and never heard of Sam Altman until he was fired from OpenAI. "Tech" isn't one thing though, it's a very diverse thing with many different areas of interest. I'm not really that interested in AI, so no, I'm not going to care who the players are in that arena. My interests lie in other "tech".
◧◩◪◨⬒⬓
167. chasd0+Mt[view] [source] [discussion] 2023-11-19 01:53:40
>>Amezar+6n
Yeah, that’s what I thought. This undefined, ambiguous use of the word “safety” does real damage to the concept, and to things that are indeed dangerous and need to be made safer.
◧◩◪◨⬒
168. valian+Lv[view] [source] [discussion] 2023-11-19 02:05:01
>>Der_Ei+ca
Generative AI's ubiquity has nothing to do with Sam Altman's notoriety. People can know what the former is without knowing the latter. It's not as though he relishes celebrity like other famous CEOs (Musk).
◧◩◪
169. sumedh+Tv[view] [source] [discussion] 2023-11-19 02:05:18
>>gkober+c3
> I mean, they're (allegedly) trying to get him to come back 24 hours later.

Could be a rumour spread by people close to Sam though.

◧◩
170. chasd0+kw[view] [source] [discussion] 2023-11-19 02:08:35
>>jwnin+i7
lol yeah, come between Microsoft, their money, and an opportunity to kneecap Google. What could go wrong?

Maybe the board is too young to realize who they sold their souls to. Heh, I think they’re quickly finding out.

◧◩◪◨⬒
171. zer0c0+xw[view] [source] [discussion] 2023-11-19 02:09:37
>>sudosy+3g
Elon brought him in, which is quite the irony. Funny, even. It’s also the reason Elon and Larry Page don’t get along anymore.

Ilya is certainly world-class in his field, and it’s probably worth listening to what he has to say.

◧◩◪◨
172. jacque+Rw[view] [source] [discussion] 2023-11-19 02:11:52
>>foursi+z6
Indeed, this is the real damage.
◧◩◪◨⬒⬓
173. xcv123+Dx[view] [source] [discussion] 2023-11-19 02:16:00
>>Meekro+of
Strictly obeying their overlords. Ensuring that we don't end up with Skynet and Terminators.
◧◩◪
174. zaptre+Nx[view] [source] [discussion] 2023-11-19 02:16:37
>>huevos+i3
I don't think anyone has reported an end to scaling laws yet.
◧◩◪◨
175. macOSC+Ux[view] [source] [discussion] 2023-11-19 02:16:53
>>Meekro+N5
An Uber self-driving car killed a person.
◧◩
176. chasd0+uy[view] [source] [discussion] 2023-11-19 02:19:50
>>x0x0+83
> They've def got the A team running things... my god.

Yeah prompting ChatGPT 3.5 would have yielded a better plan than what they did.

◧◩◪◨
177. sib+bz[view] [source] [discussion] 2023-11-19 02:23:15
>>valian+R4
"sib's Mom" (78 yo, retired Spanish professor) enters the chat. And no, she has no idea what GPT stands for.
◧◩◪◨⬒⬓
178. SllX+Tz[view] [source] [discussion] 2023-11-19 02:28:46
>>adastr+3m
Except that is literally not true, and the government loses in court to private citizens and corporations all the time, because, surprise: people in America have rights, and that extends to their businesses.

In wartime, pandemics, and matters of national security, the government's power is at its apex, but pretty much all of it still has to withstand legal challenge. Even National Security Letters have their limits: they're an information-gathering tool. The US government can't use them to restructure a company, and the structure of a company is not a factor in its ability to comply with the demands of an NSL.

replies(1): >>adastr+aL
◧◩◪◨⬒⬓⬔⧯▣
179. qwytw+cF[view] [source] [discussion] 2023-11-19 03:00:48
>>femiag+6f
> The push for the bomb was an international arms race — America against Russia

Was it? The US (and initially the UK) didn't really face any competition at all until the war was already over and they had the bomb. The Soviets then just stole American designs and iterated on top of them.

replies(1): >>femiag+0K
◧◩◪◨
180. sainez+sH[view] [source] [discussion] 2023-11-19 03:16:44
>>Jensso+s8
There is no world in which Microsoft leaves their GPT4 customers dead in the water.
◧◩◪◨⬒⬓
181. sainez+4I[view] [source] [discussion] 2023-11-19 03:22:27
>>adastr+Ej
And with well-intentioned people who tried to warn of catastrophes and went unheeded.
◧◩◪◨⬒
182. mattkr+pJ[view] [source] [discussion] 2023-11-19 03:31:55
>>MVisse+0o
Does it have goals beyond “find a likely series of tokens that extends the input”?

Is the idea that it will hack into NORAD and launch a first strike to increase the log-likelihood of “WWIII was begun by…”?
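
(For reference, that objective really is this mechanical. A toy bigram sketch, with a made-up corpus standing in for training data; the only "goal" anywhere is picking the likeliest next token:)

    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat the cat ate".split()

    # "Training": count how often each token follows each other token.
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    # "Inference": greedily extend the input with the most likely next token.
    tokens = ["the"]
    for _ in range(4):
        tokens.append(following[tokens[-1]].most_common(1)[0][0])
    print(" ".join(tokens))  # -> "the cat sat on the" (ties break by first seen)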

replies(1): >>Davidz+k41
◧◩◪◨⬒⬓⬔⧯▣▦
183. femiag+0K[view] [source] [discussion] 2023-11-19 03:36:22
>>qwytw+cF
You know that now, with the benefit of history. At the time, the fear of someone else developing the bomb first was real, and the Soviet Union knew about the Manhattan Project: https://www.atomicarchive.com/history/cold-war/page-9.html.
replies(1): >>qwytw+cA1
◧◩◪◨⬒⬓⬔
184. adastr+aL[view] [source] [discussion] 2023-11-19 03:43:15
>>SllX+Tz
The PATRIOT Act extended wartime powers to apply in peacetime, and there are other, more obscure authorizations that could be used. I used to work in the defense industry, and it was absolutely common knowledge that the government could step in to nationalize control (though not the profits) of private industry when required. This has been done in particular when rare resources were needed: for supersonic and then stealth technology during the Cold War, and for uranium in the '40s and '50s.
◧◩◪◨
185. enerva+QN[view] [source] [discussion] 2023-11-19 03:59:51
>>valian+R4
My 60-year-old mom isn't tech savvy and always asks me for help with her computer. You wouldn't expect her to know about Sam Altman, but she's actively sending me articles about this fiasco.
◧◩◪◨⬒⬓
186. Gigabl+1V[view] [source] [discussion] 2023-11-19 04:57:16
>>laidof+x8
This applies equally to their detractors.
◧◩◪◨⬒⬓
187. spacem+801[view] [source] [discussion] 2023-11-19 05:43:57
>>cthalu+fh
How can a computer acquire a human retina and, say, a driver’s license for generating the identity hash?
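
(For the sake of argument, a minimal sketch of what such an identity hash might look like; the scheme and the inputs are hypothetical. Computing the digest is trivial for a machine; supplying a live retina and a valid license is the hard part:)

    import hashlib

    def identity_hash(retina_scan: bytes, license_number: str) -> str:
        # Hypothetical scheme: bind a biometric and a government ID into one digest.
        h = hashlib.sha256()
        h.update(retina_scan)
        h.update(license_number.encode())
        return h.hexdigest()

    # Any computer can run this; the open question is how it would ever
    # obtain the inputs without a human attached.
    print(identity_hash(b"\x00" * 64, "D1234567"))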
◧◩
188. lotsow+C01[view] [source] [discussion] 2023-11-19 05:47:40
>>Grimbl+Ca
Which part is “better for humanity”?
◧◩◪
189. meowti+z11[view] [source] [discussion] 2023-11-19 06:00:45
>>naremu+J8
The case for Sam is the success of OpenAI while Sam was CEO. If the status quo is wild success, then keeping the status quo is a good thing.
replies(1): >>Davidz+d21
◧◩◪◨
190. Davidz+d21[view] [source] [discussion] 2023-11-19 06:07:24
>>meowti+z11
The company's goal is not your definition of success
◧◩◪◨
191. int_19+J21[view] [source] [discussion] 2023-11-19 06:12:46
>>tempes+xa
How many employees have actually quit?

And how many of them work on the models?

◧◩◪◨⬒⬓
192. Davidz+B31[view] [source] [discussion] 2023-11-19 06:23:09
>>smegge+0c
Can a superintelligence ever be merely a tool?
replies(1): >>smegge+jz1
◧◩◪◨⬒⬓
193. Davidz+S31[view] [source] [discussion] 2023-11-19 06:26:01
>>kspace+Oa
This sort of prediction is by its nature speculative. The argument is not, or should not be, certain doom, but rather that the uncertainty over outcomes is so large that even the extreme tails carry nontrivial weight.
◧◩◪◨⬒⬓
194. Davidz+k41[view] [source] [discussion] 2023-11-19 06:31:30
>>mattkr+pJ
I think this is misguided. There can be goals internal to the system that do not arise from the goals of the external system. For example, when it simulates a chess game, it behaves identically to something that has the goal of winning the game. That is not a written, expressed goal; it's emergent, just as the goals of a human emerge from a biological system whose components, at the cellular level, have very different goals.
◧◩◪◨⬒⬓
195. arisAl+sg1[view] [source] [discussion] 2023-11-19 08:37:51
>>laidof+x8
So, in a vacuum, if top experts are telling you X is Y, and you are not a top expert yourself, and you had to choose, you would choose "they are high" over "I misunderstood something"?
replies(1): >>laidof+DPt
◧◩◪◨
196. camden+ug1[view] [source] [discussion] 2023-11-19 08:38:15
>>spacem+da
They’ve redefined “safe” in this context to mean “conformant to fashionable academic dogma”
◧◩◪◨⬒⬓
197. arisAl+Eg1[view] [source] [discussion] 2023-11-19 08:39:23
>>smegge+0c
You've probably never read Asimov's I, Robot?
replies(1): >>smegge+Uw1
◧◩◪◨⬒⬓
198. arisAl+Pg1[view] [source] [discussion] 2023-11-19 08:40:39
>>adastr+3k
Actually, the majority of the very top current researchers do: Ilya, Hassabis, Anthropic, Bengio, Hinton. Three top labs? Three with the same views.
◧◩◪◨⬒⬓⬔
199. curtis+ml1[view] [source] [discussion] 2023-11-19 09:24:41
>>chasd0+Xr
Buggy in a way that harms unintended targets, yes.
◧◩◪◨⬒⬓⬔
200. smegge+Uw1[view] [source] [discussion] 2023-11-19 11:11:01
>>arisAl+Eg1
On the contrary. I can safely say I have read literally dozens of his books, both fiction and nonfiction, and have also read countless of his short stories and many of his essays. He is one of my all-time favorite writers, actually.
replies(1): >>arisAl+wh2
◧◩◪◨⬒⬓⬔
201. smegge+jz1[view] [source] [discussion] 2023-11-19 11:38:18
>>Davidz+B31
If it has no motivations and drives of its own, yeah, why not? AI won't have a "psychology" anything like our own: it won't feel pain, it won't feel emotions, it won't feel biological imperatives. All it will have is its programming/training to do what it's been told. Neural nets that don't produce the right outcomes will be trained and reweighted until they do.
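
A toy sketch of that last sentence (one weight, made-up numbers): the "net" is simply nudged until its output is the one we asked for. No drives, no psychology, just error reduction.

    # One-weight "neural net": nudge w until w * x matches the target.
    w, x, target, lr = 0.0, 2.0, 10.0, 0.05
    for _ in range(200):
        y = w * x                      # forward pass
        grad = 2 * (y - target) * x    # gradient of squared error wrt w
        w -= lr * grad                 # reweight toward the desired outcome
    print(w, w * x)                    # -> ~5.0 and ~10.0: it does what it's told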
◧◩◪◨⬒⬓⬔⧯▣▦▧
202. qwytw+cA1[view] [source] [discussion] 2023-11-19 11:46:03
>>femiag+0K
Isn't this mainly about what happened after the war, and about developing the hydrogen bomb? Did anyone seriously believe during WW2 that the Nazis/Soviets could be the first to develop a nuclear weapon? (I don't really know, to be fair.)
replies(1): >>femiag+iYa
◧◩◪◨⬒⬓⬔
203. smegge+UB1[view] [source] [discussion] 2023-11-19 12:01:42
>>IanCal+7g
Honestly, we already have paperclip maximizers; they're called corporations. Instead of paperclips, they maximize short-term shareholder value.
◧◩◪
204. huyter+xF1[view] [source] [discussion] 2023-11-19 12:31:55
>>spacem+6b
So, selling your life away for relief from reCAPTCHAs. I think they had to pay starving sub-Saharan Africans more than that to get them to sign up.
◧◩
205. rvba+cH1[view] [source] [discussion] 2023-11-19 12:48:09
>>mycolo+n2
News about the ousting of Altman was on the front page of the BBC.
◧◩◪◨⬒
206. bob_th+iN1[view] [source] [discussion] 2023-11-19 13:41:34
>>naremu+wb
That is something Sam Altman did with his own money. It's fair that he's criticized for his choices, but that has nothing to do with his role at OpenAI.
◧◩
207. bob_th+zN1[view] [source] [discussion] 2023-11-19 13:44:35
>>mycolo+n2
Sam Altman was an integral part of Y Combinator, which runs this site.

https://en.m.wikipedia.org/wiki/Sam_Altman

◧◩◪◨⬒⬓
208. s3p+HY1[view] [source] [discussion] 2023-11-19 15:04:58
>>adastr+3l
Microsoft has Microsoft by the balls. They just integrated GPT4 with their browser, search engine, and _desktop operating system_. It would be a mess to suddenly take all this functionality out. They have too much to lose by turning off compute for OpenAI.
◧◩
209. s3p+7Z1[view] [source] [discussion] 2023-11-19 15:06:59
>>brigad+42
>Also, all the employees are being paid with PPUs which is a share in future profits, and now they find out that actually, the company doesn't care about making profit!

Maybe. But on their investing page it literally says to consider an OpenAI investment as a "donation" as it is very high risk and will likely not pay off. Everyone knew this going into it.

◧◩◪◨⬒
210. throwa+a42[view] [source] [discussion] 2023-11-19 15:36:09
>>threes+f8
That's not really a great encapsulation of the AI safety that those who think AGI poses a threat to humanity are referring to.

The bigger concern is something like the paperclip maximizer. Alignment is about how to ensure that a superintelligence has the right goals.

◧◩◪◨⬒⬓⬔⧯
211. arisAl+wh2[view] [source] [discussion] 2023-11-19 16:46:09
>>smegge+Uw1
And what you got from the I, Robot stories is that there is zero probability of danger? Fascinating.
replies(1): >>smegge+WA2
◧◩◪◨⬒⬓⬔⧯▣
212. smegge+WA2[view] [source] [discussion] 2023-11-19 18:07:13
>>arisAl+wh2
None of the stories in I, Robot that I can remember feature the robots intentionally harming humans or humanity; most of them are essentially stories of a few robot technicians trying to debug unexpected behaviors resulting from conflicting directives given to the robots. So yeah. You wouldn't, by chance, be thinking of that travesty of a movie that shares only a name with his book and seemed to completely misrepresent his take on AI?

Though to be honest, in my original post I was thinking more of Asimov's nonfiction essays on the subject. I recommend finding a copy of "Robot Visions" if you can. It's a mixed work of fictional short stories and nonfiction essays, including several on the Three Laws and on the Frankenstein Complex.

replies(1): >>arisAl+tU4
◧◩◪◨⬒⬓⬔⧯▣▦
213. arisAl+tU4[view] [source] [discussion] 2023-11-20 08:19:08
>>smegge+WA2
Again "they will be aligned with us because they designed such that their motivation will be to serve us." If you got this outcome from reading I robot either you should reread them because obviously it was decades ago or you build your own safe reality to match your arguments. Usually it's the latter.
replies(1): >>smegge+uX7
◧◩◪◨⬒⬓⬔⧯▣▦▧
214. smegge+uX7[view] [source] [discussion] 2023-11-20 23:06:33
>>arisAl+tU4
And yet again, I didn't get it from I, Robot; I got it from Asimov's NON-fiction writing, which I referenced in my previous post. And even if I had gotten it from his fictional works, which again I didn't, the majority of his robot-centric novels (The Caves of Steel, The Naked Sun, The Robots of Dawn, Robots and Empire, Prelude to Foundation, Forward the Foundation, the Second Foundation trilogy, etc.) all feature benevolent AIs aligned with humanity.
◧◩◪◨⬒
215. juped+zpa[view] [source] [discussion] 2023-11-21 16:18:57
>>naremu+Ed
No dotcom fortune, just a failed startup that lost its investors' money, assuming it ever had an expense in its lifetime. OpenAI might in fact be the first time Altman has been in the vicinity of an object-level success; it depends on how you interpret his tenure at YC.
◧◩◪◨⬒⬓⬔⧯▣▦▧▨
216. femiag+iYa[view] [source] [discussion] 2023-11-21 18:26:57
>>qwytw+cA1
A lot of it happened after the war, but the Nazis had their own nuclear program, one heavily infiltrated by the CIA, and American progress was tracked against it. Considering how late Teller's mechanism for detonation was developed, the race against time was real.
◧◩◪◨⬒⬓⬔
217. laidof+DPt[view] [source] [discussion] 2023-11-27 23:54:16
>>arisAl+sg1
Correct, because experts in one domain are not immune to fallacious thinking in an adjacent one. Part of being an expert is communicating to the wider public, and if you sound to the layman as grandiose as some of the AI doomers do, you've failed already.