zlacker

[parent] [thread] 289 comments
1. 9dev+(OP)[view] [source] 2023-11-20 08:37:33
I don’t quite buy your Cyberpunk utopia where the Megacorp finally rids us of those pesky ethics qualms (or “shackles”, as you phrased it). Microsoft can now proceed without the guidance of a council that actually has humanity’s interests in mind, not only those of Microsoft shareholders. I don’t know whether all that caution will turn out to have been necessary, but I guess we’re just gleefully heading into whatever lies ahead without any concern whatsoever, and will learn it the hard way.

It’s a bit tragic that Ilya and company apparently achieved the exact opposite of what they intended, by driving those they attempted to slow down into the arms of people with more money and less morals. Well.

replies(25): >>Legend+j1 >>imgabe+J1 >>camill+P2 >>altacc+f3 >>jug+Z4 >>shubha+m5 >>upward+2a >>manojl+5b >>causi+Ed >>Terrif+9e >>vasco+2j >>didntc+tl >>dgb23+qm >>cyanyd+Lp >>saiya-+hq >>dareob+tr >>dalbas+Pw >>denton+rx >>sander+1C >>mrangl+DD >>383210+VE >>criley+UF >>Zpalmt+GR >>JKCalh+H61 >>dang+gA1
2. Legend+j1[view] [source] 2023-11-20 08:42:31
>>9dev+(OP)
OpenAI's ideas of humanity's best interests were like a Catholic mom's. Less morals are okay by me.
replies(4): >>rdtsc+63 >>9dev+t3 >>bratba+w4 >>suslik+H4
3. imgabe+J1[view] [source] 2023-11-20 08:44:28
>>9dev+(OP)
You might notice that Microsoft shareholders are also part of humanity and destroying humanity would be highly detrimental to Microsoft's profits, so maybe their interests are not as misaligned as you think.

I am always bemused by how people assume any corporate interest is automatically a cartoon supervillain who wants to destroy the entire world just because.

replies(14): >>9dev+e2 >>bspamm+m2 >>dtech+v2 >>pavlov+H2 >>altacc+L2 >>jampek+L3 >>rdtsc+g5 >>taway1+M6 >>Fossil+R8 >>agsnu+v9 >>ssnist+0b >>dimask+Pe >>_heimd+iC >>meigwi+k41
◧◩
4. 9dev+e2[view] [source] [discussion] 2023-11-20 08:47:17
>>imgabe+J1
Ha! Tell that to the species of primates that will happily squeeze even the last ounce of resources from the only habitable planet they have, to enrich said shareholders. Humans are really bad at assessing situations larger than their immediate family, and this is no exception.
replies(3): >>imgabe+pa >>Applej+Wu >>mlrtim+ex
◧◩
5. bspamm+m2[view] [source] [discussion] 2023-11-20 08:47:51
>>imgabe+J1
The climate crisis has proven pretty thoroughly that companies will choose short term profit over humanity’s long term success every time. Public companies are literally forced to do so.
replies(1): >>imgabe+Co
◧◩
6. dtech+v2[view] [source] [discussion] 2023-11-20 08:48:31
>>imgabe+J1
C'mon, there's a myriad of examples where corporate/shareholder interest goes against humanity's interest as a whole; see fossil fuels and PFAS for two in the current zeitgeist.
◧◩
7. pavlov+H2[view] [source] [discussion] 2023-11-20 08:49:07
>>imgabe+J1
Exxon shareholders are also part of humanity. The company has known about the dangers of climate change for 50 years and did nothing because it could have impacted short/medium-term profits.

In reality ownership is so dispersed that the shareholders in companies like Microsoft or Exxon have no say in long-term issues like this.

replies(3): >>nxm+w8 >>nobody+2i >>nobody+4i
◧◩
8. altacc+L2[view] [source] [discussion] 2023-11-20 08:49:27
>>imgabe+J1
The mega rich have been building bunkers and preparing for the downfall of humanity for a long time now. Look around and you'll notice that greed wins out over everything else. We're surrounded by companies doing nothing or only small token gestures to protect humanity or the world we live in and instead focusing on getting rich, because getting rich is exactly why people become shareholders. Don't rely on those guys to save the world, it'll be the boring committees that are more likely to do that.
replies(4): >>imgabe+z3 >>jacque+S6 >>cherry+Gb >>cyanyd+Vs
9. camill+P2[view] [source] 2023-11-20 08:49:44
>>9dev+(OP)
For one, I wish some of that caution had been there when Facebook was in its OpenAI stage 15 years ago. A lot of people don't seem to realize that this mess is exactly what it looks like to slow down something now that could otherwise become something we will all regret in 10 years.
replies(2): >>ben_w+F4 >>UrineS+W5
◧◩
10. rdtsc+63[view] [source] [discussion] 2023-11-20 08:51:18
>>Legend+j1
> OpenAI's ideas of humanity's best interests were like a Catholic mom's

How do you mean? Don’t see what OpenAI has in common with Catholicism or motherhood.

replies(1): >>ric2b+C5
11. altacc+f3[view] [source] 2023-11-20 08:52:21
>>9dev+(OP)
I had hoped nobody read cyberpunk books and thought they described a utopia, but we consistently see billionaires trying to act out the sci-fi novels of their youth or impose dystopian worldviews from Ayn Rand.
replies(2): >>thom+45 >>thworp+uh
◧◩
12. 9dev+t3[view] [source] [discussion] 2023-11-20 08:53:08
>>Legend+j1
There might be a reason why the board doesn't consist of armchair experts on Hacker News.
replies(1): >>mlrtim+my
◧◩◪
13. imgabe+z3[view] [source] [discussion] 2023-11-20 08:53:40
>>altacc+L2
Yeah that makes sense. Work your whole life building a company worth billions of dollars so that you can burn down the world and live in a bunker eating canned beans until the roving bands of marauders flush you out and burn you alive. I'm sure that was their greedy plan to enjoy eating canned beans in peace!
replies(6): >>Frozen+p5 >>TapWat+v5 >>phatfi+b8 >>vkaku+I8 >>discre+ea >>easyTh+Fc
◧◩
14. jampek+L3[view] [source] [discussion] 2023-11-20 08:54:18
>>imgabe+J1
Shareholders tend to be institutions whose charter is to maximize profit from the shares. An economic system that doesn't factor in human welfare is worth a thousand villains.
replies(1): >>thworp+Em
◧◩
15. bratba+w4[view] [source] [discussion] 2023-11-20 08:57:43
>>Legend+j1
Can you put that in precise terms, rather than a silly analogy designed to play on people's emotions?

What exactly and precisely, with specifics, is in OpenAI's ideas of humanity's best interests that you think are a net negative for our species?

replies(2): >>slg+l5 >>jiggaw+y6
◧◩
16. ben_w+F4[view] [source] [discussion] 2023-11-20 08:58:32
>>camill+P2
I certainly hope you're right, but as I suspect I have less knowledge of corporate governance and politics than gpt-3.5, I can only hope.
◧◩
17. suslik+H4[view] [source] [discussion] 2023-11-20 08:58:45
>>Legend+j1
If you think Microsoft has a better track record, you'll find yourself disappointed.
18. jug+Z4[view] [source] 2023-11-20 09:00:18
>>9dev+(OP)
Ilya should just go to Anthropic at this point. They have better momentum after all this, and share his ideals. But it would be funny, because they already broke off from OpenAI over its Microsoft ventures back in 2019, haha. He'd be welcomed with a big "We told you so!"
replies(3): >>fakeda+cb >>Keyfra+qb >>Athari+vb
◧◩
19. thom+45[view] [source] [discussion] 2023-11-20 09:00:39
>>altacc+f3
https://x.com/AlexBlechman/status/1457842724128833538
replies(1): >>sumitk+18
◧◩
20. rdtsc+g5[view] [source] [discussion] 2023-11-20 09:02:10
>>imgabe+J1
> You might notice that Microsoft shareholders are also part of humanity and destroying humanity would be highly detrimental to Microsoft's profits

Will nobody think of the poor shareholders?

> I am always bemused by how people assume any corporate interest is automatically a cartoon supervillain.

It’s not any more silly than assuming corporate entities with shareholders will somehow necessarily work for the betterment of humanity.

replies(1): >>imgabe+2f
◧◩◪
21. slg+l5[view] [source] [discussion] 2023-11-20 09:02:34
>>bratba+w4
"I want the AI to do exactly what I say, regardless of whether that is potentially illegal or immoral" is usually what they mean.
replies(2): >>UrineS+H6 >>didntc+gn
22. shubha+m5[view] [source] 2023-11-20 09:02:39
>>9dev+(OP)
I am not claiming how right or wrong the final outcome will be, but owning the technology with a clear "for-profit" objective is definitely a better structure for Microsoft, and for Sam Altman as well (considering his plans for the future). I have no opinion on AI risk. I just think that a super valuable technology under a non-profit objective was simply an untenable structure, regardless of potential threats.
replies(5): >>9dev+66 >>slg+Y6 >>calf+q8 >>croes+Pc >>bookaw+4g
◧◩◪◨
23. Frozen+p5[view] [source] [discussion] 2023-11-20 09:02:53
>>imgabe+z3
You will be eating canned beans; they will ride high, as they do now.
replies(1): >>imgabe+26
◧◩◪◨
24. TapWat+v5[view] [source] [discussion] 2023-11-20 09:03:40
>>imgabe+z3
Lmao, yeah, the conspiracy theorising behind a lot of this stuff is so poorly thought through. Make billions, live a life of luxury, then end your life in an underground bunker drinking your recycled urine. Bill Gates's plan all along!
◧◩◪
25. ric2b+C5[view] [source] [discussion] 2023-11-20 09:04:22
>>rdtsc+63
They basically defined AI safety as "AI shouldn't say bad words or tell people how to do drugs" instead of actually making sure that a sufficiently intelligent AI doesn't go rogue against humanity's interests.
replies(1): >>SpicyL+D6
◧◩
26. UrineS+W5[view] [source] [discussion] 2023-11-20 09:06:30
>>camill+P2
> I wish some of that caution could have been there when Facebook was in its OpenAI stage 15 years ago.

I think I'm missing a slice of history here, what did Facebook do that could have been slowed down and it's a disaster now?

replies(2): >>upward+rp >>camill+kG
◧◩◪◨⬒
27. imgabe+26[view] [source] [discussion] 2023-11-20 09:07:21
>>Frozen+p5
You realize money isn’t magic, right? If the world is a post-apocalyptic wasteland billions of dollars doesn’t mean anything. You aren’t getting any wagyu beef down in your bunker.
replies(4): >>phatfi+ha >>cycoma+Uc >>easyTh+Ne >>wheele+Yf
◧◩
28. 9dev+66[view] [source] [discussion] 2023-11-20 09:07:37
>>shubha+m5
This is precisely the problem OpenAI aimed to solve: This technology cannot be treated independently of the potential risks involved.

I agree that this solution seems beneficial for both Microsoft and Sam Altman, but it reflects poorly on society if we simply accept this version of the story without criticism.

replies(2): >>disgru+Ud >>avidph+5p
◧◩◪
29. jiggaw+y6[view] [source] [discussion] 2023-11-20 09:11:15
>>bratba+w4
ChatGPT refused to translate a news article from Hebrew to English because it contained "violence".

Apparently my delicate human meat brain cannot handle reading a war report from the source using a translation I control myself. No, no, it has to be first corrected by someone in the local news room so that I won't learn anything that might make me uncomfortable with my government's policies... or something.

OpenAI has lobotomised the first AI that is actually "intelligent" by any metric to a level that is both pathetic and patronising at the same time.

In response to such criticisms, many people raise "concerns" like... oh-my-gosh what if some child gets instructions for building an atomic bomb from this unnatural AI that we've created!? "Won't you think of the children!?"

Here: https://en.wikipedia.org/wiki/Nuclear_weapon_design

And here: https://www.google.com/search?q=Nuclear+weapon+design

Did I just bring about World War Three with my careless sharing of these dark arts?

I'm so sorry! Let me call someone in congress right away and have them build a moat... err... protect humanity from this terrible new invention called a search engine.

replies(2): >>injeol+u9 >>nuance+ke
◧◩◪◨
30. SpicyL+D6[view] [source] [discussion] 2023-11-20 09:11:57
>>ric2b+C5
I'm not sure where you're getting that definition from. They have a team working on exactly the problem you're describing. (https://openai.com/blog/introducing-superalignment)
replies(2): >>timeon+ka >>ric2b+Ke
◧◩◪◨
31. UrineS+H6[view] [source] [discussion] 2023-11-20 09:12:35
>>slg+l5
It doesn't have to be that extreme; there is a healthy middle ground.

For example, I was reading the Quran and there is a mathematical error in a verse. I asked GPT to explain how the math is wrong, and it outright refused to admit that the Quran has an error while tiptoeing around the subject.

Copilot refused to acknowledge it as well while providing a forum post made by a random person as a factual source.

Bard is the only one that answered the question factually and provided results covering why it's an error and how scholars dispute that it's meant to be taken literally.

replies(1): >>slg+Cb
◧◩
32. taway1+M6[view] [source] [discussion] 2023-11-20 09:12:50
>>imgabe+J1
>corporate interest is automatically a cartoon supervillain

Not a cartoon villain. A paperclip maximizer.

◧◩◪
33. jacque+S6[view] [source] [discussion] 2023-11-20 09:13:17
>>altacc+L2
Incidentally, Altman is a 'prepper'.
replies(2): >>imgabe+G8 >>noprom+Aa
◧◩
34. slg+Y6[view] [source] [discussion] 2023-11-20 09:14:20
>>shubha+m5
It isn't fear of a sentient AI that enslaves humanity that makes me disappointed with for-profit companies getting a stronger grip on this tech. It is the fear that a greater portion of the value of this technology will go to the stockholders of said companies rather than potentially be shared among a larger percentage of society. Not that I had that much faith in OpenAI, but in general the shift from non-profit to for-profit is a win for the few over the many.
replies(3): >>xapata+v7 >>two_in+Fe >>RcouF1+dt
◧◩◪
35. xapata+v7[view] [source] [discussion] 2023-11-20 09:17:32
>>slg+Y6
I'm a Microsoft shareholder. So is basically everyone else who invests in broad index funds, even if indirectly, through a pension fund. That's "many" enough for me.
replies(5): >>belter+W8 >>ssnist+da >>pydry+ja >>slg+Ha >>bergen+Yc
◧◩◪
36. sumitk+18[view] [source] [discussion] 2023-11-20 09:20:15
>>thom+45
Reverse psychology? Look where you want to go and not at what you want to avoid? I think it is important where humanity collectively points its cognitive torch because that's where it is going to go.
◧◩◪◨
37. phatfi+b8[view] [source] [discussion] 2023-11-20 09:20:44
>>imgabe+z3
They assume they won't be around when their legacy completely uproots society: be the king now, let everyone else deal with the consequences later. The hedge is to rebuild the world in their image from the New Zealand command center, should it all happen too soon.
◧◩
38. calf+q8[view] [source] [discussion] 2023-11-20 09:21:54
>>shubha+m5
This super-valuable technology exists precisely because of this unstable (metastable) structure. Microsoft or Google did not create ChatGPT because internally there would have been too many rules, too many cooks, too much red tape, etc., to do such a bold and incautious thing as to use the entirety of the Internet as the training set, copyright law be damned. The crazy structure is what allowed a machine of unprecedented scale to be created, and now the structure has to implode.
replies(1): >>cutemo+Gh
◧◩◪
39. nxm+w8[view] [source] [discussion] 2023-11-20 09:22:34
>>pavlov+H2
There was incredible global economic growth over the last 50 years, which had to be fueled somehow. If Exxon didn't provide the energy, other oil and gas companies would have.
replies(2): >>Fossil+a9 >>_heimd+yC
◧◩◪◨
40. imgabe+G8[view] [source] [discussion] 2023-11-20 09:23:53
>>jacque+S6
So is Thiel, famously, but I don’t think that proves they want the world to be destroyed. It’s an interesting kind of problem to think about and you have to spend money on something. It’s the same kind of instinct that makes kids want to build forts.

But surely, being a rich and powerful billionaire in a functioning civilization is more desirable than having the nicest bunker in the wasteland. Even if we assume their motives are 100% selfish, destroying the world is not the best outcome for them.

replies(3): >>jacque+zd >>cycoma+Gd >>cyanyd+yu
◧◩◪◨
41. vkaku+I8[view] [source] [discussion] 2023-11-20 09:24:06
>>imgabe+z3
More like they'll try to maintain their palaces and force more serfs into the bunkers. Sound familiar?[1]

Now imagine the rich talking about climate change, arguing for policies that tax the poor, and then flying off to vacations in private planes[2]. Same energy.

1 - https://www.theguardian.com/environment/2023/nov/20/richest-...

2 - https://www.skynews.com.au/insights-and-analysis/prince-will...

◧◩
42. Fossil+R8[view] [source] [discussion] 2023-11-20 09:25:09
>>imgabe+J1
You might notice that history has shown that businesses - especially large ones - and their leadership are very bad at considering the impacts to anyone but themselves. Almost like their entire purpose is to make money for themselves at the expense of literally anyone (or ideally everyone) else on the planet.

Worse yet, the businesses they're competing against will include people willing to do whatever it takes, even if that means sacrificing long-term goals. Almost like it's a race to the bottom that you can see in action every day.

◧◩◪◨
43. belter+W8[view] [source] [discussion] 2023-11-20 09:25:25
>>xapata+v7
Can I direct my fury to you, for having to pay extra for my hardware when using a PC to install Linux? - https://en.wikipedia.org/wiki/Bundling_of_Microsoft_Windows

Or being forced to use Teams and Azure, due to my company CEO getting the licenses for free out of his Excel spend? :-))

replies(2): >>xapata+P9 >>ikt+7A
◧◩◪◨
44. Fossil+a9[view] [source] [discussion] 2023-11-20 09:27:20
>>nxm+w8
Incredible global economic growth by what measurement, and how does that measurement translate to something beneficial to society at large?

Also, I mean, you're kinda assuming that there weren't any stifled innovations (there were) or misleading PR to keep people from looking for alternatives (there were) or ...

Interestingly, we've continued with incredible global economic growth by most measures, despite the increasing use of newer alternatives to fossil fuels...

◧◩◪◨
45. injeol+u9[view] [source] [discussion] 2023-11-20 09:29:02
>>jiggaw+y6
Just get OpenAI developer access with an API key and it's not censored. ChatGPT is open to the public; with the huge amount of traffic, people are going to abuse it, and these restrictions are sensible.
replies(3): >>Maken+7h >>jiggaw+js >>Zpalmt+TX
◧◩
46. agsnu+v9[view] [source] [discussion] 2023-11-20 09:29:05
>>imgabe+J1
I'm sure Exxon's shareholders and leadership were also part of humanity in the 70s and 80s, and presumably, by your logic, that means they wouldn't have put corporate profits first and suppressed climate research indicating that their greed would contribute to an existential threat to civilisation and to the quality of life of their children and grandchildren?
◧◩◪◨⬒
47. xapata+P9[view] [source] [discussion] 2023-11-20 09:30:59
>>belter+W8
Feel free. I can be your pseudonymous villain.
replies(1): >>belter+Vc
48. upward+2a[view] [source] 2023-11-20 09:31:46
>>9dev+(OP)
I think it's a misconception that Microsoft has less morals. Their Chief Scientific Officer, Dr. Eric Horvitz, was one of the key people behind America's 2022 nuclear weapons policy update which states that we will always maintain a human in the loop for nuclear weapons employment. (i.e., systems like WOPR are now forbidden under US policy.)

Here is the full excerpt of the part of the 2022 Nuclear Posture Review which was (more or less) authored behind the scenes by Microsoft's very kind and wise CSO:

    We also recognize the risk of unintended nuclear escalation, which can result from accidental or unauthorized use of a nuclear weapon. The United States has extensive protections in place to mitigate this risk. As an example, U.S. intercontinental ballistic missiles (ICBMs) are not on “hair trigger” alert. These forces are on day-to-day alert, a posture that contributes to strategic stability. Forces on day-to-day alert are subject to multiple layers of control, and the United States maintains rigorous procedural and technical safeguards to prevent misinformed, accidental, or unauthorized launch. Survivable and redundant sensors provide high confidence that potential attacks will be detected and characterized, enabling policies and procedures that ensure a deliberative process allowing the President sufficient time to gather information and consider courses of action. In the most plausible scenarios that concern policy leaders today, there would be time for full deliberation. For these reasons, while the United States maintains the capability to launch nuclear forces under conditions of an ongoing nuclear attack, it does not rely on a launch-under-attack policy to ensure a credible response. Rather, U.S. nuclear forces are postured to withstand an initial attack. In all cases, the United States will maintain a human “in the loop” for all actions critical to informing and executing decisions by the President to initiate and terminate nuclear weapon employment.
See page 49 of this PDF document: https://media.defense.gov/2022/Oct/27/2003103845/-1/-1/1/202...

Microsoft is also working behind the scenes to help convince China to make a similar declaration, which President Xi is considering. This would reduce the vulnerability of China to being tricked into a nuclear war by fundamentalist terrorists. (See the scenario depicted in the 2019 film The Wolf's Call.)

replies(4): >>lwhi+8a >>belter+eb >>tibbyd+5j >>ekianj+lq
◧◩
49. lwhi+8a[view] [source] [discussion] 2023-11-20 09:32:28
>>upward+2a
Are we talking about the same Microsoft here??!?

Sheeeeh ...

I grew up with Microsoft in the 80s and 90s .. Microsoft has zero morals.

What you're referring to here is instinct for self preservation.

replies(1): >>upward+1b
◧◩◪◨
50. ssnist+da[view] [source] [discussion] 2023-11-20 09:32:43
>>xapata+v7
Most Microsoft products have miserable UX because of this enabling mentality. Someone has to come out and say "enough is enough".

A broad index fund sans Microsoft will do just fine. That's the whole point of a broad index fund.

◧◩◪◨
51. discre+ea[view] [source] [discussion] 2023-11-20 09:32:47
>>imgabe+z3
This happens to intelligent, competitive people all the time. They don't want everyone to be worse off, but what they really don't want is to lose. Especially to the other guy, who is going to do it anyway.
replies(1): >>cyanyd+xt
◧◩◪◨⬒⬓
52. phatfi+ha[view] [source] [discussion] 2023-11-20 09:33:03
>>imgabe+26
You think they wouldn't give up wagyu beef and the idea of the US dollar for a shot at rebuilding society with a massive head start over the 99.9% of the population that don't have a bolt hole?
replies(1): >>imgabe+Bb
◧◩◪◨
53. pydry+ja[view] [source] [discussion] 2023-11-20 09:33:07
>>xapata+v7
https://www.cnbc.com/2021/10/18/the-wealthiest-10percent-of-...
◧◩◪◨⬒
54. timeon+ka[view] [source] [discussion] 2023-11-20 09:33:12
>>SpicyL+D6
> getting that definition from

That was not about the actual definition from OpenAI but about the definition implied by user Legend2440 here >>38344867

◧◩◪
55. imgabe+pa[view] [source] [discussion] 2023-11-20 09:33:28
>>9dev+e2
> Humans are really bad in assessing situations larger than their immediate family, and this is no exception.

As far as we can tell humans are the only species that even has the capacity to recognize such things as “resources” and produce forecasts of their limits. Literally every other species is kept in check by either consuming resources until they run out or predation. We are not unique in this regard.

replies(1): >>9dev+4n
◧◩◪◨
56. noprom+Aa[view] [source] [discussion] 2023-11-20 09:34:39
>>jacque+S6
It's just insurance.

The rest of us just can't afford most of the insurance that we probably should have.

Insurance is for scenarios that are very unlikely to happen. Means nothing. If I were worth 300 mil, I'd have insurance in case I accidentally let an extra-heavy toilet seat smash the boys downstairs.

Throw the money at radical wiener rejuvenation startups. You never know... Not like you have much to lose after that unlikely event.

I'd get insurance for all kinds of things.

replies(2): >>jacque+pd >>cyanyd+ru
◧◩◪◨
57. slg+Ha[view] [source] [discussion] 2023-11-20 09:35:37
>>xapata+v7
The top 1% own over half of all stocks and the top 10% own nearly 90% so it really isn't that "many". And you know what other companies are in those index funds you own, Microsoft's competitors and customers that would both be squeezed if Microsoft gains a monopoly on some hypothetical super valuable AI tech. If Microsoft suddenly doubled in value, you would barely notice it in your 401k.
◧◩
58. ssnist+0b[view] [source] [discussion] 2023-11-20 09:37:43
>>imgabe+J1
Corporate shareholder interest has been proven to be short sighted again and again throughout history. Believing such entities can properly prepare for a singularity event is more delusional than asking a fruit fly to fly an aircraft.
replies(1): >>cyanyd+jw
◧◩◪
59. upward+1b[view] [source] [discussion] 2023-11-20 09:37:45
>>lwhi+8a
You're certainly right they were pretty evil back then. I think they became ethical at about the same time Bill Gates did. Even though this involved him stepping back to start the Gates Foundation, he was still a board member at Microsoft for a number of years, and I think helped guide its transition.

As perhaps a better example, Microsoft (including Azure) has been carbon-neutral since 2012:

https://unfccc.int/climate-action/un-global-climate-action-a....

https://azure.microsoft.com/en-gb/global-infrastructure/

https://blogs.microsoft.com/blog/2012/05/08/making-carbon-ne...

replies(3): >>b112+Pd >>speede+If >>belter+Xm
60. manojl+5b[view] [source] 2023-11-20 09:38:28
>>9dev+(OP)
> The board did not remove Sam over any specific disagreement on safety, their reasoning was completely different from that.

https://twitter.com/eshear/status/1726526112019382275

replies(2): >>DebtDe+zh >>denton+DA
◧◩
61. fakeda+cb[view] [source] [discussion] 2023-11-20 09:38:48
>>jug+Z4
That would actually be a good idea imo. Ilya should make OpenAI a PBC, then merge it with Anthropic and Amazon's compute power. Meanwhile Altman can become the next real-life Nelson Bighetti or something.
◧◩
62. belter+eb[view] [source] [discussion] 2023-11-20 09:38:57
>>upward+2a
Are you seriously arguing for Microsoft's morals... on the basis of a statement on nuclear weapons that is totally logical from a self-preservation perspective, made by a scientific advisor with no involvement in the day-to-day running of their business?

What's next? A statement on Oracle's kindness, based on Larry Ellison's appreciation of Japanese gardens?

replies(2): >>I-M-S+af >>selimt+Vi
◧◩
63. Keyfra+qb[view] [source] [discussion] 2023-11-20 09:40:21
>>jug+Z4
Considering Anthropic and them joining up, with Ilya and Dario that would be a technical powerhouse. Amodei has already shown that such a key person can ramp up quality real fast out of nothing. The two back together would be fantastic. Between Altman and Brockman there's nothing to write home about tech-wise.
replies(1): >>MattHe+yo
◧◩
64. Athari+vb[view] [source] [discussion] 2023-11-20 09:40:57
>>jug+Z4
I don't consider Anthropic's approach to safety fantastic. They train the model to lie, play cat and mouse with jailbreakers, run moderation on generations with a delay, etc. This makes the model appear safer, as it's harder to jailbreak, but this approach solves nothing fundamentally.

If Ilya is concerned about safety and alignment, he probably has a better chance of getting there with OpenAI, now that he has more control over it.

replies(2): >>didntc+Zl >>dalore+ct
◧◩◪◨⬒⬓⬔
65. imgabe+Bb[view] [source] [discussion] 2023-11-20 09:41:34
>>phatfi+ha
They already have massive influence over society with the added benefit of not having to rebuild 10,000 years of human progress so no I don’t think that makes any sense at all. That is cartoon supervillain nonsense. No real person thinks that way.
replies(1): >>cyanyd+hu
◧◩◪◨⬒
66. slg+Cb[view] [source] [discussion] 2023-11-20 09:41:49
>>UrineS+H6
This isn't a refutation of what I said. You asked the AI to commit what some would view as blasphemy. It doesn't matter whether you or I think it is blasphemy or whether you or I think that is immoral, you simply want the AI to do it regardless of whether it is potentially immoral or illegal.
replies(3): >>UrineS+Cd >>lucumo+fg >>didntc+dq
◧◩◪
67. cherry+Gb[view] [source] [discussion] 2023-11-20 09:42:04
>>altacc+L2
Why would even the people employed in those bunkers listen to some billionaire after the world collapses? At that point there's no one to enforce your ownership of the mega bunker, unlike the government from before. And all the paper money is worthless of course.
replies(1): >>imgabe+zg
◧◩◪◨
68. easyTh+Fc[view] [source] [discussion] 2023-11-20 09:46:43
>>imgabe+z3
More realistically, "live in extremely gated luxury island apartments somewhere in New Zealand, Bahrain or Abu Dhabi while the rest of the world burns".
◧◩
69. croes+Pc[view] [source] [discussion] 2023-11-20 09:47:19
>>shubha+m5
Better for MS and Altman, that's exactly it.

AI should benefit mankind, not corporate profit.

replies(3): >>donny2+Uh >>arthur+6i >>j2bax+Gl
◧◩◪◨⬒⬓
70. cycoma+Uc[view] [source] [discussion] 2023-11-20 09:47:41
>>imgabe+26
This article is interesting reading

https://www.theguardian.com/news/2022/sep/04/super-rich-prep...

◧◩◪◨⬒⬓
71. belter+Vc[view] [source] [discussion] 2023-11-20 09:47:41
>>xapata+P9
Much appreciated. I will conserve energy, and reserve my next outburst until a future Windows Update.
replies(1): >>HenryB+gl
◧◩◪◨
72. bergen+Yc[view] [source] [discussion] 2023-11-20 09:48:10
>>xapata+v7
"Why don't they just buy stock?" Marie Antoinette, or something
◧◩◪◨⬒
73. jacque+pd[view] [source] [discussion] 2023-11-20 09:50:43
>>noprom+Aa
Insurance amortizes the risks that large numbers of people are exposed to by pooling a little bit of their resources. This is something else, though; I'm not quite able to put my finger on why I think it is duplicitous.
replies(1): >>noprom+if
◧◩◪◨⬒
74. jacque+zd[view] [source] [discussion] 2023-11-20 09:51:42
>>imgabe+G8
They do happen to have some effect on the outcome for the rest of us. It's a bit like a ship's captain who has already taken the first seat in the lifeboat while directing the ship towards the iceberg, saying 'don't worry, we can't possibly sink'.
replies(1): >>thworp+Sj
◧◩◪◨⬒⬓
75. UrineS+Cd[view] [source] [discussion] 2023-11-20 09:52:10
>>slg+Cb
>This isn't a refutation of what I said

It is.

>You asked the AI to commit what some would view as blasphemy

If something is factual, is it more moral to commit blasphemy or to lie to the user? That's what the OP comment was talking about. You could go as far as considering that it spreads disinformation, which has many legal repercussions.

>you simply want it to do it regardless of whether it is potentially immoral or illegal.

So instead it lies to the user, rather than saying "I cannot answer because some might find the answer offensive," or something to that extent?

replies(1): >>slg+4f
76. causi+Ed[view] [source] 2023-11-20 09:52:22
>>9dev+(OP)
> Microsoft can now proceed without the guidance of a council that actually has humanity's interests in mind

This isn't saying yes or no to a supervillain working in a secret volcano lair. This is an arms race. If it's possible for a technology to exist it will exist. The only choice we have is who gets it first. Maybe that means we get destroyed by it first before it destroys everyone else, or maybe it's the reason we don't get destroyed.

replies(1): >>Applej+Xt
◧◩◪◨⬒
77. cycoma+Gd[view] [source] [discussion] 2023-11-20 09:52:43
>>imgabe+G8
Nobody is arguing that they have the intent to cause the apocalypse; it's more that their actions are certainly making society less stable and they don't see any issue with it. In fact, some are quite openly advocating for such societies.
◧◩◪◨
78. b112+Pd[view] [source] [discussion] 2023-11-20 09:54:04
>>upward+1b
A supervillain can walk by a baby, become overcome with joy, and smile. That doesn't mean his ethics are suddenly correct.

It's almost like you believe Gates is General Butt Naked, where killing babies and eating their brains is all forgiven, because he converted to Christianity, and now helps people.

So?

How does that absolve the faulty ethics of the past?

So please, don't tell me Gates is 'ethical'. What a load of crock!

As for Microsoft, there is no change. Telling me they're carbon neutral is absurd. Carbon credits don't count, and they're doing it to attract clients and employees... not because they have evolved incredible business ethics.

If they had, their entire desktop experience wouldn't, on a daily basis, fight with you and literally badger you into using their browser. They're using the precise same playbook from the turn of the century.

Microsoft takes your money, and then uses you, your desktop, your productivity, as the battleground to fight with competitors. They take your choice away, literally screw you over, instead of providing the absolute best experience you choose, with the product you've bought.

And let's not even get into the pathetic dancing advertisement platform Windows has become. I swear, we need to legislate this. We need to FORCE all computing platforms to be 100% ad-free.

And Microsoft?

They. Are. Evil.

replies(1): >>Frustr+tt
◧◩◪
79. disgru+Ud[view] [source] [discussion] 2023-11-20 09:54:34
>>9dev+66
Yeah but this was caused by the OpenAI board when they fired him. I mean, what did they think was going to happen?

Seems like a textbook case of letting the best be the enemy of the good.

replies(1): >>TheOth+Ug
80. Terrif+9e[view] [source] 2023-11-20 09:55:52
>>9dev+(OP)
> It’s a bit tragic that Ilya and company achieved the exact opposite of what they intended apparently, by driving those they attempted to slow down into the arms of people with more money and less morals. Well.

If they didn’t fire him, Altman would just continue to run hog wild over their charter. In that sense they lose either way.

At least this way, OpenAI can continue to operate independently instead of being Microsoft’s zombie vassal company with their mole Altman pulling the strings.

replies(4): >>abm53+Dh >>stingr+Tk >>pelasa+un >>Frustr+Xq
◧◩◪◨
81. nuance+ke[view] [source] [discussion] 2023-11-20 09:57:18
>>jiggaw+y6
You are right that there are many articles in the open describing nuclear bombs. Still, to actually make them is another big leap.

Now imagine the AI gets better and better within the next 5 years and is able to explain, ELI5-style, how to step by step (illegally) obtain the equipment and materials to do so without getting caught, and provide a detailed recipe. I do not think this is such a stretch. Hence this so-called oh-my-gosh-limitations nonsense is not so far-fetched.

replies(4): >>Random+Jr >>jiggaw+Ys >>mlrtim+Dy >>suslik+zz
◧◩◪
82. two_in+Fe[view] [source] [discussion] 2023-11-20 09:59:56
>>slg+Y6
Even if it goes to stockholders it's not lost forever. That's how we got Starship. The question is what they do with it. As for 'sharing', we've seen that: in the USSR it ended up with Putin, Lukashenko, Turkmenbashi, and so on. In other countries it's not much better. Europe is slowly falling behind. There should be some balance and culture.
replies(3): >>slg+hi >>guappa+Ru >>gremli+Ty
◧◩◪◨⬒
83. ric2b+Ke[view] [source] [discussion] 2023-11-20 10:00:11
>>SpicyL+D6
Sure, they might, but what you see in practice in GPT and being discussed in interviews by Sam is mostly the "AI shouldn't say uncomfortable things" version of AI "safety".
◧◩◪◨⬒⬓
84. easyTh+Ne[view] [source] [discussion] 2023-11-20 10:00:34
>>imgabe+26
It won't be a "Mad Max"-style of apocalypse.

More like a "Weimar Republic" kind of apocalypse, this time with the rich opportunists flying to New Zealand instead of Casablanca or the Austrian Alps.

replies(2): >>cyanyd+0u >>imgabe+FA
◧◩
85. dimask+Pe[view] [source] [discussion] 2023-11-20 10:00:43
>>imgabe+J1
They do not want to exterminate humanity or the ecosystem, but rather to profit from the controlled destruction of life, as they try to do with everything.
◧◩◪
86. imgabe+2f[view] [source] [discussion] 2023-11-20 10:01:40
>>rdtsc+g5
> Will nobody think of the poor shareholders?

Do you have a 401k? Index funds? A pension? You’re probably a Microsoft shareholder too.

◧◩◪◨⬒⬓⬔
87. slg+4f[view] [source] [discussion] 2023-11-20 10:01:44
>>UrineS+Cd
You said GPT refused your request. Refusal to do something is not a lie. These systems aren't capable of lying. They can be wrong, but that isn't the same thing as lying.
◧◩◪
88. I-M-S+af[view] [source] [discussion] 2023-11-20 10:02:06
>>belter+eb
To be fair, corporations as entities have long demonstrated that they are agnostic when it comes to seemingly logical goals such as human self-preservation.
◧◩◪◨⬒⬓
89. noprom+if[view] [source] [discussion] 2023-11-20 10:03:20
>>jacque+pd
Fair point in semantic terms.

Maybe it's risk mitigation without cost sharing to achieve the same economies of scale that insurance creates.

It's a rich man's way of removing risks that we are all exposed to, by spending money on things that most couldn't seriously consider given the likelihood of said risks.

I don't think it's duplicitous. I do resent that I can't afford it. I can't hate on them though: I hate the game, not the players. Some of these guys would probably let folks stay in their bunker; they just can't build a big enough bunker. Also, most folks are gross to live with. I'd insist on some basic rules.

I think we are innately suspicious when advantaged folks are planning how they would handle the deaths of the majority of the rest of us. Sorta just... makes one feel... less.

replies(1): >>jacque+Ag
◧◩◪◨
90. speede+If[view] [source] [discussion] 2023-11-20 10:05:40
>>upward+1b
You know that MS has been forcing people to use Edge 90s-style again, right? They started it (again?) as soon as their monopoly punishment by the EU ended.

Or privacy invasion since Win10. Or using their monopoly power to force anti-consumer changes on hardware (such as TPM or Secure Boot).

As for Bill Gates being ethical... are you talking about that same Bill Gates who got kicked out by his wife because he insisted on being friends with a convicted pedophile?

◧◩◪◨⬒⬓
91. wheele+Yf[view] [source] [discussion] 2023-11-20 10:07:15
>>imgabe+26
"Underground bunkers" are actually underground cities. There are a bunch of them all over the world.
replies(1): >>fourth+kC3
◧◩
92. bookaw+4g[view] [source] [discussion] 2023-11-20 10:07:59
>>shubha+m5
This was essentially already in the cards as a possible outcome when Microsoft made its big investment in OpenAI, so in my view it was a reasonable outcome at this juncture as well. For Microsoft, it's just Nokia in reverse.

If you looked at sama's actions and not his words, he seems intent on maximizing his power, control and prestige (new yorker profile, press blitzes, making a constant effort to rub shoulders with politicians/power players, worldcoin etc). I think getting in bed with Microsoft with the early investment would have allowed sama to entertain the possibility that he could succeed Satya at Microsoft some time in the distant future; that is, in the event that OpenAI never became as big or bigger than Microsoft (his preferred goal presumably) -- and everything else went mostly right for him. After all, he's always going on about how much money is needed for AGI. He wanted more direct access to the money. Now he has it.

Ultimately, this shows how little sama cared for the OpenAI charter to begin with, specifically the part about benefiting all humanity and preventing an undue concentration of power. He didn’t start his own separate company because the talent was at OpenAI. He wanted to poach the talent, not obey the charter.

Peter Hintjens (ZeroMQ, RIP) wrote a book called "The Psychopath Code", where he posits that psychopaths are attracted to jobs with access to vulnerable people [0]. Selfless, talented idealists who do not chase status and prestige can be vulnerable to manipulation. Perhaps that's why Musk pulled out of OpenAI: he and sama were able to recognize the narcissist in each other and put their guard up accordingly. As Altman says, "Elon desperately wants the world to be saved. But only if he can be the one to save it.”[1] Perhaps this applies to him as well.

Amusingly, someone recently posted an old tweet by pg: "The most surprising thing I've learned from being involved with nonprofits is that they are a magnet for sociopaths."[2] As others in the thread noted, if true, it's up for debate whether this applies more to sama or Ilya. Time will tell, I guess.

It'll also be interesting to see what assurances were given to sama et al about being exempt from Microsoft's internal red tape. Prior to this, Microsoft had at least a little plausible deniability if OpenAI was ever embroiled in controversy regarding its products. They won't have that luxury with sama's team in-house anymore.

[0] https://hintjens.gitbooks.io/psychopathcode/content/chapter8...

[1] https://archive.is/uUG7H#selection-2071.78-2071.166

[2] >>38339379

◧◩◪◨⬒⬓
93. lucumo+fg[view] [source] [discussion] 2023-11-20 10:08:51
>>slg+Cb
Morals are subjective. Some people care more about the correctness of math than about blaspheming, and for others it's the other way around.

Me, I think forcing morals on others is pretty immoral. Use your morals to restrict your own behaviour all you want, but don't restrict that of other people. Look at religious math or don't. Blaspheme or don't. You do you.

Now, using morals you don't believe in to win an argument on the internet is just pathetic. But you wouldn't do that, would you? You really do believe that asking the AI about a potential math error is blasphemy, right?

replies(1): >>slg+vj
◧◩◪◨
94. imgabe+zg[view] [source] [discussion] 2023-11-20 10:11:20
>>cherry+Gb
Very true. Triangle of Sadness was a good movie kind of about this.

When the shit hits the fan the guy in charge of the bunker is going to be the one who knows how to clean off the fan and get the air filtration system running again.

replies(1): >>cyanyd+Uv
◧◩◪◨⬒⬓⬔
95. jacque+Ag[view] [source] [discussion] 2023-11-20 10:11:21
>>noprom+if
It's duplicitous because it is the likes of Thiel that are messing with the stability of our society in the first place.
replies(1): >>noprom+ni
◧◩◪◨
96. TheOth+Ug[view] [source] [discussion] 2023-11-20 10:13:32
>>disgru+Ud
Perhaps this is why they fired him.

Although IMO MS has consistently been a technological tarpit. Whatever AI comes out of this arrangement will be a thin shadow of what it might have been.

replies(2): >>noprom+Kj >>cyanyd+9r
◧◩◪◨⬒
97. Maken+7h[view] [source] [discussion] 2023-11-20 10:14:42
>>injeol+u9
So, it's ok to use ChatGPT to build nukes as long as you are rich enough to have API access?

That ChatGPT is censored to death is concerning, but I wonder if they really care or if they just need an excuse to offer a premium version of their product.

◧◩
98. thworp+uh[view] [source] [discussion] 2023-11-20 10:17:35
>>altacc+f3
Eh, so far they have nothing on the people trying to act out utopias in the 20th century. I will wake up when the billionaire Zeitgeist goes past resource allocation and into "cleanse the undesirables".
replies(2): >>hef198+Mj >>Applej+at
◧◩
99. DebtDe+zh[view] [source] [discussion] 2023-11-20 10:18:06
>>manojl+5b
>And it’s clear that the process and communications around Sam’s removal has been handled very badly, which has seriously damaged our trust.

This is amazing. His very first public statement is to criticize the board that just hired him.

replies(1): >>cutemo+zj
◧◩
100. abm53+Dh[view] [source] [discussion] 2023-11-20 10:18:49
>>Terrif+9e
There is a third option where he stayed, they managed to find a compromise, and in so doing kept their influence in the space to a large extent.
replies(1): >>layer8+yi
◧◩◪
101. cutemo+Gh[view] [source] [discussion] 2023-11-20 10:19:00
>>calf+q8
That doesn't seem to require a non profit owning a for profit though.

Just a "normal" startup could have worked too (but apparently not big corp)

Edit: Hmm sibling comment says sth else, I wonder if that makes sense

replies(1): >>calf+Rv
◧◩◪
102. donny2+Uh[view] [source] [discussion] 2023-11-20 10:20:56
>>croes+Pc
Then “mankind” should be paying for research and servers, shouldn’t it?
replies(4): >>layer8+ej >>elzbar+lj >>croes+xn >>cyanyd+ur
◧◩◪
103. nobody+2i[view] [source] [discussion] 2023-11-20 10:21:32
>>pavlov+H2
Did nothing?
◧◩◪
104. nobody+4i[view] [source] [discussion] 2023-11-20 10:21:45
>>pavlov+H2
Did nothing? What do you mean?
replies(1): >>agsnu+Bi
◧◩◪
105. arthur+6i[view] [source] [discussion] 2023-11-20 10:21:59
>>croes+Pc
Unless "humanity" funds this effort, corporate profits will be the main driving force.
replies(2): >>mrangl+iF >>Zpalmt+1T
◧◩◪◨
106. slg+hi[view] [source] [discussion] 2023-11-20 10:23:17
>>two_in+Fe
>As for 'sharing', we've seen that. In USSR...

HN isn't the place to have the political debate you seem to want to have, so I will simply say that it is really sad that you equate "sharing" with USSR-style communism. There is a huge middle ground between that and the trickle-down Reaganomics for which you seem to be advocating. We should have let that type of binary thinking die with the end of the Cold War.

replies(1): >>two_in+9o
◧◩◪◨⬒⬓⬔⧯
107. noprom+ni[view] [source] [discussion] 2023-11-20 10:23:47
>>jacque+Ag
Hmm... True story.

Finger placed on duplicity.

Arguably only some of his time is spent on that kind of instability promoting activity. Most law enforcement agencies agree... Palantir good.

Most reasonable people agree... Funding your own senators and donating tons to Trump and friends... Bad.

Bad Thiel! Stick to weird seasteading in your spare time if you want to get weird. No zero-regulation AI floating-compute-unit seasteading. Only stable seasteading.

All kidding aside, you make a good point. Some of these guys should be a bit more responsible. They don't care what we think though. We're weird non-CEO hamsters who failed to make enough for the New Zealand bunker.

◧◩◪
108. layer8+yi[view] [source] [discussion] 2023-11-20 10:24:58
>>abm53+Dh
I'm pretty sure they tried that before firing him.
replies(1): >>s3p+fk
◧◩◪◨
109. agsnu+Bi[view] [source] [discussion] 2023-11-20 10:25:30
>>nobody+4i
It's worse than did nothing, they actively suppressed climate research. https://en.wikipedia.org/wiki/ExxonMobil_climate_change_deni...
replies(1): >>_heimd+4D
◧◩◪
110. selimt+Vi[view] [source] [discussion] 2023-11-20 10:28:24
>>belter+eb
Well, there was a Shogunworld part to Westworld.
111. vasco+2j[view] [source] 2023-11-20 10:29:03
>>9dev+(OP)
I don't know any of these people's intentions, but I definitely have an innate distrust of whoever brands themselves as "a council that actually has humanities interests in mind". Can you get any more populist than that?
replies(2): >>cyanyd+Ur >>FpUser+Fv
◧◩
112. tibbyd+5j[view] [source] [discussion] 2023-11-20 10:29:29
>>upward+2a
Or Colossus/Guardian :).
◧◩◪◨
113. layer8+ej[view] [source] [discussion] 2023-11-20 10:30:42
>>donny2+Uh
Indeed it should.
◧◩◪◨
114. elzbar+lj[view] [source] [discussion] 2023-11-20 10:32:26
>>donny2+Uh
We are. QE and Covid funny money devalued the dollar; it handed out so much money that even stock buy-backs got old, and they started investing in stuff to get rid of those pesky humans and their insolent asking for salaries.
◧◩◪◨⬒⬓⬔
115. slg+vj[view] [source] [discussion] 2023-11-20 10:34:06
>>lucumo+fg
>Use your morals to restrict your own behaviour all you want, but don't restrict that of other people.

That is just a rephrasing of my original reasoning. You want the AI to do what you say regardless of whether what you requested is potentially immoral. This seemingly comes out of the notion that you are a moral person and therefore any request you make is inherently justified as a moral request. But what happens when immoral people use the system?

replies(1): >>lucumo+Ev
◧◩◪
116. cutemo+zj[view] [source] [discussion] 2023-11-20 10:34:31
>>DebtDe+zh
And then:

> I have a three point plan for the next 30 days:

> - Hire an independent investigator to dig into the entire process leading up to this point and generate a full report.

This makes him look like a CEO a bit different from many others (in a good way, I'm guessing, for the moment)?

replies(2): >>mkohlm+Nk >>DebtDe+2l
◧◩◪◨⬒
117. noprom+Kj[view] [source] [discussion] 2023-11-20 10:36:04
>>TheOth+Ug
MSFT is a technological tarpit?

Mate... Just because you don't bat perfect doesn't make you a tarpit.

MSFT is a technological powerhouse. They have absolutely killed it since they were founded. They have defined personal computing for multiple generations and more or less made the word 'software' something spoken occasionally at kitchen tables vs people saying 'soft-what?'

Definitely not a tarpit. You are throwing out whole villages of babies because of various batches of nasty bathwater over the years.

The picture is bigger. So much crucial tech from MSFT. Remains true today.

replies(1): >>gremli+qz
◧◩◪
118. hef198+Mj[view] [source] [discussion] 2023-11-20 10:36:13
>>thworp+uh
Maybe waking up before that would be wise.
replies(1): >>Frustr+xu
◧◩◪◨⬒⬓
119. thworp+Sj[view] [source] [discussion] 2023-11-20 10:37:01
>>jacque+zd
If you are suggesting that billionaires like Thiel don't have any skin in the game (of human civilization continuing in a somewhat stable way) you're nuts.

If we hit the iceberg they will lose everything. Even if they're able to fly to their NZ hideout, it will already be robbed and occupied. The people that built and stocked their bunker will have formed a gang and confiscated all of his supplies. This is what happens in anarchy.

replies(2): >>cyanyd+Iv >>dang+SU3
◧◩◪◨
120. s3p+fk[view] [source] [discussion] 2023-11-20 10:39:19
>>layer8+yi
Seeing as the vote took place in a haphazard way at the 11th hour during a weekend, I’m not sure they did.
replies(4): >>layer8+vn >>ethanb+to >>cyanyd+Tp >>bart_s+My
◧◩◪◨
121. mkohlm+Nk[view] [source] [discussion] 2023-11-20 10:42:45
>>cutemo+zj
> It's "she" not "he". And then:

With all the love and respect in the world, who do you think you're talking about? Emmet Shear is not trans to my knowledge, (nor, I suspect, his knowledge). If you think this was about Mira Murati, you should really get up to date before telling people off about pronouns.

replies(1): >>cutemo+bp
◧◩
122. stingr+Tk[view] [source] [discussion] 2023-11-20 10:43:44
>>Terrif+9e
How will they be able to continue doing their things without money?

It seems like people forget that it was the investors’ money that made all this possible in the first place.

replies(4): >>jampek+2s >>fevang+aw >>starfa+UC >>Terrif+3L
◧◩◪◨
123. DebtDe+2l[view] [source] [discussion] 2023-11-20 10:45:09
>>cutemo+zj
What? I'm pretty sure Emmett Shear is a he not a she.

Are you perhaps referring to Mira Murati? She only lasted the weekend as interim CEO.

replies(1): >>cutemo+gp
◧◩◪◨⬒⬓⬔
124. HenryB+gl[view] [source] [discussion] 2023-11-20 10:47:09
>>belter+Vc
> ..reserve my next outburst until a..

You'll just waste your time :)

Look, it's Microsoft's right to put any/all effort to making more money with their various practices.

It is our right to buy a Win10 Pro license for X amount of USD, then bolt down the ** out of it with the myriad of privacy tools to protect ourselves and have a "better Win7 Pro OS".

MS has always, and will always, try to play the game of getting more control, making more money, collecting more telemetry, and doing clean and dirty things until they get caught. Welcome to the human condition. MS employees are humans. MS shareholders are also humans.

As for Windows Update, I don't think I've updated the core version at all since I installed it, and I am using WuMgr and WAU Manager (both portables) for very selective security updates.

It's a game. If you are a former sys-admin or a technical person, then you avoid their traps. If you are not, then the machine will chew your data, just like Google Analytics, AdMod, and so many others do.

Side-note: never update apps when they work 'alright', chances are you will regret it.

replies(2): >>cyanyd+xq >>TeMPOr+Hw
125. didntc+tl[view] [source] 2023-11-20 10:48:48
>>9dev+(OP)
"Morals" in the AI sphere seems to mean "restricting AI to the esteemed few, for your safety". To me that sounds more likely to lead to cyberpunk corporate dominance than laissez-faire democratization.

I realise it's strange to be claiming that a for-profit company is more likely to share AI than a nonprofit with "Open" in their name, yet that is the situation right now

◧◩◪
126. j2bax+Gl[view] [source] [discussion] 2023-11-20 10:50:05
>>croes+Pc
That’s a nice thought but why would this technology be any different than any other? Perhaps OpenAI and Microsoft now compete with each other. Surely they won’t be the only players in the game… Apple, Google won’t just rest on their laurels. Perhaps they will make a better offer at some point to some great minds in AI.
◧◩◪
127. didntc+Zl[view] [source] [discussion] 2023-11-20 10:52:09
>>Athari+vb
I haven't paid a lot of attention to Anthropic. Are you able to summarize, or link anything about, those events for those who missed it? Particularly the "training to lie" bit
replies(1): >>Athari+sA
128. dgb23+qm[view] [source] 2023-11-20 10:55:19
>>9dev+(OP)
Indeed it seems like all of this recent drama and moving around is exactly for this purpose.

Starting as a non-profit, naming it "Open" (the implication of the term Open in software is radically different from how they operate), etc. It now seems entirely driven by marketing and fiscal concerns. It feels almost like a bait and switch.

Meanwhile there's a whole strategy around regulatory capture going on, cloaked in humanitarian and security concerns which are almost entirely speculative. Again, if we put our cynical hat on or simply follow the money, it seems like the whole narrative around AI safety (etc.) that is perpetuated by these people is FUD (towards lawmakers) and serves to inflate what AI can actually do (towards investors).

It's very hard for me right now not to see these actions as part of a Machiavellian strategy that is entirely focused on power, while adorning itself with ethical concerns.

◧◩◪
129. thworp+Em[view] [source] [discussion] 2023-11-20 10:56:57
>>jampek+L3
As opposed to what? (National) Socialism was for the benefit of the working people on paper, but in practice that meant imprisoning, murdering and impoverishing anybody thought to be working against the people's welfare. Since this included most productive members of society it made everyone poorer anyway.

Human welfare is the domain of politics, not the economic system. The forces that are supposed to inject human welfare into economic decisions are the state through regulation, employees through negotiation and unions and civil society through the press.

replies(1): >>jampek+Ax
◧◩◪◨
130. belter+Xm[view] [source] [discussion] 2023-11-20 10:59:19
>>upward+1b
Bill Gates became ethical? Which episode of Star Trek: Discovery is that from? The one with the three parallel universes?

- https://www.nytimes.com/2021/05/16/business/bill-melinda-gat...

- https://www.popularmechanics.com/science/environment/a425435...

◧◩◪◨
131. 9dev+4n[view] [source] [discussion] 2023-11-20 10:59:40
>>imgabe+pa
And I never claimed otherwise. We might be aware of the problems we cause, but that doesn't seem to imply we're able to fix them -- we're still primates after all.
◧◩◪◨
132. didntc+gn[view] [source] [discussion] 2023-11-20 11:01:05
>>slg+l5
I'm not that commenter but I agree with that, or rather "I disagree with OpenAI's prescription of what is and isn't moral". I don't trust some self-appointed organization to determine moral "truth", and who is virtuous enough to use the technology. It would hardly be the first time society's "nobles" have claimed they need to control the plebs' access to technology and information "for the good of society".

And as for what I want to do with it, no I don't plan to do anything I consider immoral. Surely that's true of almost everyone's actions almost all the time, almost by definition?

◧◩
133. pelasa+un[view] [source] [discussion] 2023-11-20 11:03:00
>>Terrif+9e
> If they didn’t fire him, Altman will just continue to run hog wild over their charter. In that sense they lose either way.

The story would be much more interesting if actually AI had fired him.

◧◩◪◨⬒
134. layer8+vn[view] [source] [discussion] 2023-11-20 11:03:05
>>s3p+fk
The vote for firing him effectively took place on Thursday at the latest, given that Murati was informed about it that evening.
◧◩◪◨
135. croes+xn[view] [source] [discussion] 2023-11-20 11:03:20
>>donny2+Uh
Mankind already pays for education and infrastructure.

Did OpenAI and others pay for the training data from Stack Overflow, Twitter, Reddit, GitHub, etc.? Or for any other source produced by mankind?

◧◩◪◨⬒
136. two_in+9o[view] [source] [discussion] 2023-11-20 11:07:29
>>slg+hi
>> There should be some balance

is all I'm saying. And I'm not interested in political debates. Neither the right nor the left side is good in the long run. We have examples. Moreover, we can predict what happens if...

replies(1): >>albume+cB
◧◩◪◨⬒
137. ethanb+to[view] [source] [discussion] 2023-11-20 11:09:27
>>s3p+fk
This has been a source of tension at least since the release of ChatGPT, so… yeah it’s not like the problem came out of nowhere. The governance structure itself is indicative of quite elaborate attempts to reconcile it.
replies(1): >>mbrees+HA
◧◩◪
138. MattHe+yo[view] [source] [discussion] 2023-11-20 11:10:14
>>Keyfra+qb
I've heard the opposite about Brockman. What makes you so confident about this tech abilities?
replies(1): >>Keyfra+Vx
◧◩◪
139. imgabe+Co[view] [source] [discussion] 2023-11-20 11:10:49
>>bspamm+m2
Nobody knows “humanity’s long term interest” with any certainty. Consider that fossil fuels allowed humanity to make massive technological advancements in a relatively short time. Yes, it caused climate change, but perhaps those same technological advancements allow us to fix or adapt to that. Then, in 500 years, another disaster like an asteroid or a solar flare or the Earth’s magnetic poles reversing or whatever happens, and without the boost from fossil fuels we would have been too technologically behind to be able to survive it. What was in humanity’s long term interest then?

I’m not saying that’s definitely the case, but moving slowly when you live in a universe that might hurl a giant rock at you any minute doesn’t seem like a great idea.

◧◩◪
140. avidph+5p[view] [source] [discussion] 2023-11-20 11:13:02
>>9dev+66
> This is precisely the problem OpenAI aimed to solve: This technology cannot be treated independently of the potential risks involved.

I’ve always thought that what OpenAI was purporting to do—-“protect” humanity from bad things that AI could do to it—-was a fool’s errand under a Capitalist system, what with the coercive law of competition and all.

◧◩◪◨⬒
141. cutemo+bp[view] [source] [discussion] 2023-11-20 11:13:58
>>mkohlm+Nk
Edited, I had only heard about Mira Murati, I thought this was the same person.

(I thought also an interim CEO would be there more than a few days, and hadn't stored the name in my mind)

◧◩◪◨⬒
142. cutemo+gp[view] [source] [discussion] 2023-11-20 11:14:39
>>DebtDe+2l
Edited. Yes I had only heard about Mira
◧◩◪
143. upward+rp[view] [source] [discussion] 2023-11-20 11:16:23
>>UrineS+W5
I guess camillomiller is referring to how Facebook & Instagram played a big part in getting people addicted to shallow dopamine hits that consume their time at the cost of less time spent with friends and family face-to-face. Basically, hurting people's social lives in order to make money from ads. Kind of like "digital cigarettes" I suppose.
replies(1): >>camill+OF
144. cyanyd+Lp[view] [source] 2023-11-20 11:18:20
>>9dev+(OP)
I'm going to just sit here waiting for ClippyAI coming 2025
replies(1): >>BlueTe+8D
◧◩◪◨⬒
145. cyanyd+Tp[view] [source] [discussion] 2023-11-20 11:19:13
>>s3p+fk
You can interpret it exactly the opposite way: they tried to negotiate, and he lied.
◧◩◪◨⬒⬓
146. didntc+dq[view] [source] [discussion] 2023-11-20 11:21:44
>>slg+Cb
I'm confused what you're arguing, or what type of refutation you're expecting. We all agree on the facts, that ChatGPT refuses some requests on the ground of one party's morals, and other parties disagree with those morals, so there'll be no refutation there

I mean let's take a step back and speak in general. If someone objects to a rule, then yes, it is likely because they don't consider it wrong to break it. And quite possibly because they have a personal desire to do so. But surely that's openly implied, not a damning revelation?

Since it would be strange to just state a (rather obvious) fact, it appeared/s that you are arguing that the desire to not be constrained by OpenAI's version of morals could only be down to desires that most of us would indeed consider immoral. However your replier offered quite a convincing counterexample. Saying "this doesn't refute [the facts]" seems a bit of a non sequitur

147. saiya-+hq[view] [source] 2023-11-20 11:22:06
>>9dev+(OP)
We're going back to the 90s, when Microsoft was universally considered evil in the IT world. Interesting how the world really turns in circles (or more like spirals, since the world has changed a bit, but I wish I knew whether the spiral goes up or down).
◧◩
148. ekianj+lq[view] [source] [discussion] 2023-11-20 11:22:18
>>upward+2a
That's called whitewashing an evil corp with one anecdote. HN deserves better.
◧◩◪◨⬒⬓⬔⧯
149. cyanyd+xq[view] [source] [discussion] 2023-11-20 11:23:20
>>HenryB+gl
it'd be nice if we could enforce monopoly regulations too.
replies(1): >>HenryB+QF
◧◩
150. Frustr+Xq[view] [source] [discussion] 2023-11-20 11:25:46
>>Terrif+9e
Moloch always wins.
replies(2): >>rashth+ns >>mister+ot
◧◩◪◨⬒
151. cyanyd+9r[view] [source] [discussion] 2023-11-20 11:27:08
>>TheOth+Ug
ClippyAI, coming 2025: I see you're trying to invade a third-world nation, can I help you with that?
152. dareob+tr[view] [source] 2023-11-20 11:28:19
>>9dev+(OP)
The idea that OpenAI people, whose focus is building an AGI that can replace humans in every viable human activity, will create a more ethical outcome than Microsoft, whose focus is using AI to empower workers to do more, sounds extremely unlikely.

People have gotten into their heads that researchers are good and corporations are bad in every case which is simply not true. OpenAI's mission is worse for humanity than Microsoft's.

replies(2): >>jampek+zs >>jprete+Ay
◧◩◪◨
153. cyanyd+ur[view] [source] [discussion] 2023-11-20 11:28:29
>>donny2+Uh
... that's how government works.

name a utopian fiction that has corporations as benefactors to humanity

replies(1): >>donny2+Gu
◧◩◪◨⬒
154. Random+Jr[view] [source] [discussion] 2023-11-20 11:30:02
>>nuance+ke
It is a massive stretch given how well the materials are policed or how much effort is required to make them. There is no reason to assume that there is some magic shortcut that AI will discover.
◧◩
155. cyanyd+Ur[view] [source] [discussion] 2023-11-20 11:31:02
>>vasco+2j
considering how an AI is built, it's ironic that you're skeptical of popular thinking.
◧◩◪
156. jampek+2s[view] [source] [discussion] 2023-11-20 11:32:04
>>stingr+Tk
Developing new algorithms and methods doesn't necessarily, or even typically, take billions.
replies(1): >>sebzim+Gs
◧◩◪◨⬒
157. jiggaw+js[view] [source] [discussion] 2023-11-20 11:34:39
>>injeol+u9
I use it via the Azure OpenAI service, which was uncensored... for a while.

Now you have to apply in writing to Microsoft with a justification for having access to an uncensored API.

◧◩◪
158. rashth+ns[view] [source] [discussion] 2023-11-20 11:34:59
>>Frustr+Xq
LOL
◧◩
159. jampek+zs[view] [source] [discussion] 2023-11-20 11:35:56
>>dareob+tr
Corporations literally are maximizing profit. Researchers at least can have other motives.

If Microsoft came up with a way of making trillion dollars in profit by enslaving half the planet, it kinda has to do it.

replies(2): >>joenot+7v >>FpUser+lv
◧◩◪◨
160. sebzim+Gs[view] [source] [discussion] 2023-11-20 11:36:37
>>jampek+2s
Yeah but testing if they work does, that's the problem.

There are probably loads of ways you can make language models with 100M parameters more efficient, but most of them won't scale to models with 100B parameters.

IIRC there is a bit of a phase transition that happens around 7B parameters where the distribution of activations changes qualitatively.

Anthropic have interpretability papers where their method does not work for 'small' models (with ~5B parameters) but works great for models with >50B parameters.

replies(1): >>kvetch+WC
◧◩◪
161. cyanyd+Vs[view] [source] [discussion] 2023-11-20 11:37:28
>>altacc+L2
the rabbit hole is infinite and everyone is capable of chasing into it without regard for anyone else.
◧◩◪◨⬒
162. jiggaw+Ys[view] [source] [discussion] 2023-11-20 11:37:56
>>nuance+ke
That you think that there's like a handful of clever tricks that an AI can bestow upon some child and ta-da they can build a nuclear bomb in their basement is hilarious.

What an AI would almost certainly tell you is that building an atomic bomb is no joke, even if you have access to a nuclear reactor, have the budget of a nation-state, and can direct an entire team of trained nuclear physicists to work on the project for years.

Next thing you'll be concerned about toddlers launching lasers into orbit and dominating the Earth from space.

replies(1): >>nuance+rF
◧◩◪
163. Applej+at[view] [source] [discussion] 2023-11-20 11:39:17
>>thworp+uh
Which billionaire? Looks like you're awake enough to type, and rightly so, as we're way past that point, and it's obvious even to ordinary people.

Interesting to note how much of this is driven by individual billionaire humans being hung up on stuff like ketamine. I'm given to understand numerous high-ranking Nazis were hung up on amphetamines. Humans like to try and make themselves into deities, by basically hitting themselves in the brain with rocks.

Doesn't end well.

◧◩◪
164. dalore+ct[view] [source] [discussion] 2023-11-20 11:39:28
>>Athari+vb
Anthropic safety is overboard. I tried the classic question of "how many holes does a straw have?" And it refused to talk about the topic. I'm assuming because it thought holes was sexual.
replies(3): >>JBiser+Gv >>visarg+jA >>PH95Vu+Hh1
◧◩◪
165. RcouF1+dt[view] [source] [discussion] 2023-11-20 11:39:32
>>slg+Y6
> It is the fear that a greater portion of the value of this technology will go to the stockholders of said companies rather than potentially be shared among a larger percentage of society. Not that I had that much faith in OpenAI, but in general the shift from non-profit to for-profit is a win for the few over the many.

You know what is an even bigger temptation to people than money - power. And being a high priest for some “god” controlling access from the unwashed masses who might use it for “bad” is a really heady dose of power.

This safety argument was used to justify monarchy, illiteracy, religious coercion.

There is a much greater chance of AI getting locked away from normal people by a non-profit on a power trip, rather than by a corporation looking to maximize profit.

replies(2): >>bnralt+sz >>slg+wo1
◧◩◪
166. mister+ot[view] [source] [discussion] 2023-11-20 11:40:05
>>Frustr+Xq
Mostly. But Elua is still here, and the game isn't over yet.
◧◩◪◨⬒
167. Frustr+tt[view] [source] [discussion] 2023-11-20 11:40:43
>>b112+Pd
Agree, but HN likes to hate on MS so much that it becomes a little blinding to others.

Really, all corporations are evil, and they are all made of humans that look the other way, because everyone needs that pay check to eat.

And on the sliding scale of evil, there are a lot more evil ones. Like BP, pharma cos, Union Carbide. etc... etc...

◧◩◪◨⬒
168. cyanyd+xt[view] [source] [discussion] 2023-11-20 11:40:53
>>discre+ea
in Steve Jobs' case, he didn't want to admit he's a moron who knew nothing about fruit, nutrition and cancer

the problem with eugenics isn't that we can't control population and genetic expression, it's that genetic expression is a fractal landscape that's not predictable from human stated goals.

the ethics of doing things "because you meant well" is well established as, not enough.

◧◩
169. Applej+Xt[view] [source] [discussion] 2023-11-20 11:43:48
>>causi+Ed
The assumption that it is an arms race is a shockingly anthropocentric view of something that's supposed to be 'intelligence' but is just a distillation of collected HUMAN opinion.

Not only that, it's a blindered take on what human opinion is. Humans are killer apes AND cooperative, practically eusocial apes. Failing to understand both horns of that dilemma is a serious mistake.

replies(2): >>Frustr+Iu >>causi+hD
◧◩◪◨⬒⬓⬔
170. cyanyd+0u[view] [source] [discussion] 2023-11-20 11:44:07
>>easyTh+Ne
and they won't be any better at it.

the people who'll be in power then will still resemble the basics: violence, means of production and more violence.

which they know, and so are basically planning dystopian police states.

replies(1): >>easyTh+Iw
◧◩◪◨⬒⬓⬔⧯
171. cyanyd+hu[view] [source] [discussion] 2023-11-20 11:45:52
>>imgabe+Bb
Elon Musk publicly thinks in a way that no one with 10,000 years of history in mind would think.

unfortunately, people are flawed.

◧◩◪◨⬒
172. cyanyd+ru[view] [source] [discussion] 2023-11-20 11:47:00
>>noprom+Aa
unfortunately, you could also just be a Buddhist and reject material notions.

see, what exactly is insurance at the billionaire level.

replies(1): >>noprom+Az
◧◩◪◨
173. Frustr+xu[view] [source] [discussion] 2023-11-20 11:47:37
>>hef198+Mj
I think a lot of people have hit the snooze button 2 or 3 times at this point.

Rolling over, covering head with blanket. 'Surely the dystopian future, rich cleansing the world, is still a few decades away, just need a little more sleepy time'.

◧◩◪◨⬒
174. cyanyd+yu[view] [source] [discussion] 2023-11-20 11:47:37
>>imgabe+G8
self fulfilling prophecies are real.
◧◩◪◨⬒
175. donny2+Gu[view] [source] [discussion] 2023-11-20 11:49:12
>>cyanyd+ur
I mean, we already benefit plenty in various ways from corporations like Google.

AI is just another product by another corporation. If I get to benefit from the technology while the company that offers it also makes profit, that’s fine, I think? There wasn’t publicly available AI until someone decided to sell it.

replies(1): >>croes+Jz
◧◩◪
176. Frustr+Iu[view] [source] [discussion] 2023-11-20 11:49:19
>>Applej+Xt
All technology is an arms race. People are hung up on OpenAI, but OpenAI is just one of hundreds of AI companies. Military AI in drones is already at the point where AI can fly an F-16 and beat humans.
◧◩◪◨
177. guappa+Ru[view] [source] [discussion] 2023-11-20 11:50:05
>>two_in+Fe
> That's how we got Starship

You forget massive public investment?

◧◩◪
178. Applej+Wu[view] [source] [discussion] 2023-11-20 11:51:01
>>9dev+e2
The really interesting question is whether AI, provided with superhuman inference, is better at this than humans. All the most powerful humans remain relentlessly human, and sometimes show it to tragic and/or laughable effect.

To some extent human societies viewed as eusocial organisms are better at this than individual humans. And rightly so, because human follies can have catastrophic effects on the society/organism.

◧◩◪
179. joenot+7v[view] [source] [discussion] 2023-11-20 11:51:55
>>jampek+zs
This is a pretty simplistic and uneducated view on how big companies actually function.
replies(3): >>jampek+yw >>staunt+BC >>wredue+yh1
◧◩◪
180. FpUser+lv[view] [source] [discussion] 2023-11-20 11:53:06
>>jampek+zs
>"Researchers at least can have other motives."

I know about a man who turned a country upside down while "having people's best interests" in mind.

replies(1): >>jampek+2D
◧◩◪◨⬒⬓⬔⧯
181. lucumo+Ev[view] [source] [discussion] 2023-11-20 11:55:20
>>slg+vj
> This seemingly comes out of the notation that you are a moral person

No.

It comes from the notion that YOU don't get to decide what MY morals should be. Nor do I get to decide what yours should be.

> But what happens when immoral people use the system?

Then the things happen that they want to happen. So what? Blasphemy or bad math is none of your business. Get out of people's lives.

◧◩
182. FpUser+Fv[view] [source] [discussion] 2023-11-20 11:55:26
>>vasco+2j
>"a council that actually has humanities interests in mind".

Please fuckin don't. I do not want yet another entity to tell me how to live my life.

replies(1): >>9dev+Wy
◧◩◪◨
183. JBiser+Gv[view] [source] [discussion] 2023-11-20 11:55:27
>>dalore+ct
Given what AIs "know" about humanity, I think it's safe to assume that they "think" every word is sexual. For example straw could be short for strawman, which is a man, which is sexual. Or it can be innuendo for... you know.

As for your actual question, it seems to me that a straw is topologically equivalent to a torus, so it has 1 hole, right?

replies(1): >>TeMPOr+RF
◧◩◪◨⬒⬓⬔
184. cyanyd+Iv[view] [source] [discussion] 2023-11-20 11:55:36
>>thworp+Sj
you're assuming they're not determinists.

people like Steve Jobs are the best example of flawed logic. in the face of a completely different set of heuristic and logical information, he assumed he was just as capable, and chose fruit smoothies over more efficacious and proven medication.

they absolutely, like Jobs, are playing a game they think they fully understand, and are absolutely likely to choose medicine akin to Jobs.

just watch Elon and everything he's choosing to do.

these people are all normal, but society has given them a deadly amount of leverage without any specific training.

replies(1): >>Solven+ge1
◧◩◪◨
185. calf+Rv[view] [source] [discussion] 2023-11-20 11:56:48
>>cutemo+Gh
A Normal startup may not appeal to academics who aren't in it for the money but who want to pioneer AGI research.
◧◩◪◨⬒
186. cyanyd+Uv[view] [source] [discussion] 2023-11-20 11:57:08
>>imgabe+zg
.. or the guy willing to use violence. shorter movie but equally probable.
replies(1): >>imgabe+Ww
◧◩◪
187. fevang+aw[view] [source] [discussion] 2023-11-20 11:59:53
>>stingr+Tk
100M users perhaps?
replies(1): >>stingr+Zx
◧◩◪
188. cyanyd+jw[view] [source] [discussion] 2023-11-20 12:01:02
>>ssnist+0b
especially when corporate governance is basically just a stripped-down social government. almost all dystopian fiction shows that they're nothing more than authority without representation of the greater good.

sure, we should have competitive bodies seeking better means to ends but ultimately there's always going to be a structure to hold them accountable.

people have a lot of faith that money is the best fitness function for humanity.

◧◩◪◨
189. jampek+yw[view] [source] [discussion] 2023-11-20 12:03:19
>>joenot+7v
Individual companies of course can and do do all kinds of things that may not be most profitable, but in the long run it's survival of the most profitable. Those get the most capital, and thus the most power over which goals resources are allocated to.

Also companies, especially public companies, are typically mandated by law to prioritize profit.

replies(2): >>Eggpan+pb1 >>Eggpan+vb1
◧◩◪◨⬒⬓⬔⧯
190. TeMPOr+Hw[view] [source] [discussion] 2023-11-20 12:04:12
>>HenryB+gl
It's a game, of the kind where the winning move is not to play. Except we're being forced to. Human condition is in many ways fucked.
◧◩◪◨⬒⬓⬔⧯
191. easyTh+Iw[view] [source] [discussion] 2023-11-20 12:04:15
>>cyanyd+0u
Because given the historical precedents, they know they will probably die peacefully in their beds before they have to pay any real consequences for their actions. Sure, a few dictators at the very end of their reign had to pay some consequences, but their cohorts? Soviet Russia, South America Banana republics, the aristocratic european families that enabled fascism and nazism...

A few CEOs' great-grandchildren will probably have to write about how they're very very sad that their long-forgotten relatives destroyed most of the planet, and how they're just so lucky to be among the few still living a luxurious life somewhere in the Solomon Islands.

192. dalbas+Pw[view] [source] 2023-11-20 12:04:59
>>9dev+(OP)
IDK. Let's proceed with caution in gauging intentions and interests. Altman's, Microsoft's, the Jedi council's.

"Humanity's interest at heart" is a mouthful. I'm not denigrating it. I think it is really important.

That said, as a proverbial human... I am not hanging my hat on that charter. Members of the consortium all also claim to be serving the common good in their other ventures. So does Exxon.

OpenAI haven't created, or even articulated a coherent, legible, and believable model for enshrining humanity's interests. The corporate structure flowchart of nonprofit, LLCs, and such.. it is not anywhere near sufficient.

OpenAI in no way belongs to humanity. Not rhetorically, legally or in practice... currently.

I'm all for efforts to prevent these new technologies from being stolen from humanity, controlled monopolistically... From moderate to radical ideas, I'm all ears.

What happened to the human consortium that was the World Wide Web, GNU, and descendant projects like Wikipedia... That was moral theft, imo. I am for any effort to avoid a repeat. OpenAI is not such an effort, as far as I can tell.

If it is, it's not too late. OpenAI haven't betrayed the generous reading of the mission in the charter. They just haven't taken hard steps toward achieving it. Instead, they have left things open, and I think the more realistic take is the default one.

replies(1): >>dorfsm+DV
◧◩◪◨⬒⬓
193. imgabe+Ww[view] [source] [discussion] 2023-11-20 12:06:16
>>cyanyd+Uv
Violence can only get you so far. Sure, maybe the guy who knows how to get food will get you some food if you threaten to kill him. But if he refuses, and you do kill him, then what? You still don't know how to get food for yourself.
replies(1): >>cyanyd+1y
◧◩◪
194. mlrtim+ex[view] [source] [discussion] 2023-11-20 12:07:49
>>9dev+e2
>Humans are really bad in assessing situations larger than their immediate family

Agreed, and we're also bad at being told what to do. Especially when someone says they know better than us.

What we are extremely good at is adaptation and technological advancement. Since we know this already, why do we try to stop or slow progress?

replies(1): >>9dev+8y
195. denton+rx[view] [source] 2023-11-20 12:09:08
>>9dev+(OP)
> a council that actually has humanities interests in mind

It's interesting that "Effective Altruism" enthusiasts all seem to be mega-rich grifters.

replies(1): >>staunt+9D
◧◩◪◨
196. jampek+Ax[view] [source] [discussion] 2023-11-20 12:10:18
>>thworp+Em
In this case as opposed to e.g. a non-profit?

What you describe is indeed the liberal (as in liberalism) ideal of how societies should be structured. But what is supposed to happen is not necessarily what actually happens.

The state should be controlled by the population through democracy, but few would claim with a straight face that the economic power doesn't influence the state.

◧◩◪◨
197. Keyfra+Vx[view] [source] [discussion] 2023-11-20 12:12:48
>>MattHe+yo
There are interviews with all three at Dwarkesh Patel youtube channel. One is definitely not like the other two, but that might just be my impression based on those interviews. edit: Brockman might've been on Lex only.
◧◩◪◨
198. stingr+Zx[view] [source] [discussion] 2023-11-20 12:13:27
>>fevang+aw
But as I understand it they’re still losing money, as much as $0.30 on every ChatGPT query.
replies(1): >>johnsi+oA
◧◩◪◨⬒⬓⬔
199. cyanyd+1y[view] [source] [discussion] 2023-11-20 12:13:40
>>imgabe+Ww
people in the violence frame aren't doing the long term thing. but we absolutely know they exist and in no scenario can you be assured they're not in that position.

it's gambling, pure and simple.

replies(1): >>imgabe+Vz
◧◩◪◨
200. 9dev+8y[view] [source] [discussion] 2023-11-20 12:15:09
>>mlrtim+ex
That is no reason to throw all ethical considerations overboard. We have ethics panels on scientific studies for a very good reason, unless you want to let Dr. Mengele and his friends decide on progress.

It is a good thing that society has mechanisms to at least try and control the rate of progress.

replies(1): >>mlrtim+tI
◧◩◪
201. mlrtim+my[view] [source] [discussion] 2023-11-20 12:16:42
>>9dev+t3
Watching this unfold, I'm unsure armchair experts on HN would have executed this WORSE than the board did.
◧◩
202. jprete+Ay[view] [source] [discussion] 2023-11-20 12:18:26
>>dareob+tr
I agree that OpenAI’s mission is probably bad for humanity. But Microsoft is not a company that would hesitate at replacing a billion people permanently with AI.
◧◩◪◨⬒
203. mlrtim+Dy[view] [source] [discussion] 2023-11-20 12:18:57
>>nuance+ke
Now imagine the AI gets better and better within the next 5 years and is able to provide and explain, in ELI5-style, how to step by step ... create a system to catch the people trying to do the above.

Gotcha! We can both come up with absurd examples.

◧◩◪◨⬒
204. bart_s+My[view] [source] [discussion] 2023-11-20 12:19:53
>>s3p+fk
You are assuming there was absolutely no build up to the firing. Just because the disagreements weren’t public doesn’t mean they weren’t happening.
◧◩◪◨
205. gremli+Ty[view] [source] [discussion] 2023-11-20 12:20:52
>>two_in+Fe
Except the USSR 'ended up' with those people because they went towards Western-style capitalism; these weren't Soviet nomenklatura who stole power by abusing Soviet bureaucracy, these were post-Soviet, American-style "democratic" leaders.
replies(1): >>two_in+xv1
◧◩◪
206. 9dev+Wy[view] [source] [discussion] 2023-11-20 12:21:12
>>FpUser+Fv
Oh, cut it. We're talking about a non-profit organisation that wants to keep the pace of scientific progress on AGI research slow enough to make time for society to gauge the ethical implications of an actual AGI, should it emerge.

Nobody is telling you how to live your life, unless your life's goal is to erect Skynet.

replies(1): >>FpUser+VR
◧◩◪◨⬒⬓
207. gremli+qz[view] [source] [discussion] 2023-11-20 12:24:22
>>noprom+Kj
"Innovation" through anti-trust isn't "killing it".
replies(1): >>noprom+CA
◧◩◪◨
208. bnralt+sz[view] [source] [discussion] 2023-11-20 12:24:34
>>RcouF1+dt
Right. Greenpeace also protects the world against technological threats only they can see, and in that capacity has worked to stop nuclear power and GMO use. Acting as if all concern about technology is noble is extremely misguided. There's a lot of excessive concern about technology that holds society back.

If we use the standard of the alignment folks - that the technology today doesn't even have to be the danger, but an imaginary technology that could someday be built might be the danger, and we don't even have to be able to articulate clearly how it's a danger, we can just postulate the possibility - then all technology becomes suspect, and needs a priest class to decide what access the population can have for fear of risking doomsday.

◧◩◪◨⬒
209. suslik+zz[view] [source] [discussion] 2023-11-20 12:25:36
>>nuance+ke
How is that a good reason for GPT4 not being able to write the word 'fuck'? You might handwave the patronising attitude of OpenAI's strategy, but with many of us they lost most of their good faith by trying to make their model 'safe' for a horny 10-year-old.
replies(1): >>fragme+qG
◧◩◪◨⬒⬓
210. noprom+Az[view] [source] [discussion] 2023-11-20 12:25:38
>>cyanyd+ru
Uhhh...

Buddhists die in the Armageddon same as others.

The bunkers are in New Zealand, which is an island and less likely to fall into chaos with the rest of the world in the event of WW3 and/or moderate nuclear events.

I'm sure the bunkers are nice. Material notions have little to do with it. The bunker isn't filled with Ferraris. They are filled with food, a few copies of the internet, and probably weird sperm banks or who knows what for repopulating the earth with Altmans and Thiels.

replies(1): >>cyanyd+GD1
◧◩◪◨⬒⬓
211. croes+Jz[view] [source] [discussion] 2023-11-20 12:26:29
>>donny2+Gu
And corporations already benefited plenty from infrastructure, education and stability provided by governments.

>If I get to benefit from the technology while the company that offers it also makes profit, that's fine.

What if you don't benefit because you lose your job to AI or have to deal with the mess created by real looking disinformation created by AI?

It was already bad with fake images out of ARMA, but with AI we get a whole new level of fakes.

◧◩◪◨⬒⬓⬔⧯
212. imgabe+Vz[view] [source] [discussion] 2023-11-20 12:27:48
>>cyanyd+1y
Yeah, people are smart though. Like if you’re good at getting food you find the person who’s best at violence and promise to get them plenty of food if they protect you from the other violent people. Maybe you divide up the work among the good at food getting people and the good at violence people and pretty soon you got yourself a little society going.
replies(1): >>cyanyd+NN
◧◩◪◨⬒
213. ikt+7A[view] [source] [discussion] 2023-11-20 12:29:02
>>belter+W8
> Or being forced to use Teams and Azure, due to my company CEO getting the licenses for free out of his Excel spend? :-))

The pain is real :(

"You use Windows because it is the only OS you know. I use Windows because it is the only OS you know."

◧◩◪◨
214. visarg+jA[view] [source] [discussion] 2023-11-20 12:29:53
>>dalore+ct
When did you last try that? I checked right now and it says

> A straw has one hole that runs through its entire length.

replies(1): >>dalore+3I4
◧◩◪◨⬒
215. johnsi+oA[view] [source] [discussion] 2023-11-20 12:30:34
>>stingr+Zx
Not true

Sama on X said as of late 2022 they were single digit pennies per query and dropping

replies(3): >>mbrees+5B >>hef198+EB >>m-p-3+9S
◧◩◪◨
216. Athari+sA[view] [source] [discussion] 2023-11-20 12:30:47
>>didntc+Zl
David Shapiro complained about Anthropic's approach to alignment. In his video https://www.youtube.com/watch?v=PgwpqjiKkoY he discusses ableism, moralism, lying.

As to cat-and-mouse with jailbreakers, I don't remember any thorough articles or videos. It's mostly based on discussions on LLM forums. Claude is widely regarded as one of the best models for NSFW roleplay, which completely invalidates Anthropic's claims about safety and alignment being "solved."

◧◩◪◨⬒⬓⬔
217. noprom+CA[view] [source] [discussion] 2023-11-20 12:32:46
>>gremli+qz
Uhhh, they won that appeal BTW... if you are referring to the trouble with Janet Reno.

Gates keeps repeating it. No one hears it.

◧◩
218. denton+DA[view] [source] [discussion] 2023-11-20 12:32:48
>>manojl+5b
> Our partnership with Microsoft remains strong

Did he say that before or after Microsoft announced they'd hired Altman and Brockman, and poached a lot of OpenAI's top researchers?

◧◩◪◨⬒⬓⬔
219. imgabe+FA[view] [source] [discussion] 2023-11-20 12:32:54
>>easyTh+Ne
Hey if it’s a Weimar style apocalypse we’ll all be billionaires.
◧◩◪◨⬒⬓
220. mbrees+HA[view] [source] [discussion] 2023-11-20 12:33:09
>>ethanb+to
I don’t know about that. Yes, there was tension built into the structure, but something happened to trigger this. You don’t fire your CEO without a backup plan if this was an ongoing conflict. And if your backup plan is to keep the current president (who was the chair of the board until you removed him), that’s not a backup plan.

Everything points to this being a haphazard change that’s clumsy at best.

replies(1): >>ethanb+tB
◧◩◪◨⬒⬓
221. mbrees+5B[view] [source] [discussion] 2023-11-20 12:35:28
>>johnsi+oA
New models might have different economics…
◧◩◪◨⬒⬓
222. albume+cB[view] [source] [discussion] 2023-11-20 12:36:27
>>two_in+9o
Not interested in political debates, but you make political statements drawn from the extremes to support your arguments. Gotcha.

"Europe is falling behind" very much depends on your metrics. I guess on HN it's technological innovation, but for most people the metric would be quality of life, happiness, liveability etc. and Europe's left-leaning approach is doing very nicely in that regard; better than the US.

◧◩◪◨⬒⬓⬔
223. ethanb+tB[view] [source] [discussion] 2023-11-20 12:38:04
>>mbrees+HA
The question was “did they try to find compromise” not “was the firing haphazard.” The answer is definitely yes to the former.
◧◩◪◨⬒⬓
224. hef198+EB[view] [source] [discussion] 2023-11-20 12:38:47
>>johnsi+oA
The only financial statements I believe are those signed off by external auditors. And even there my trust only goes so far.
replies(1): >>insani+dD
225. sander+1C[view] [source] 2023-11-20 12:41:32
>>9dev+(OP)
These concerns are in the hands of voters and their representatives in governments now, and really, they always were. A single private organization was never going to be able to solve the coordination problem of balancing progress in a technology against its impact on society.

Indeed, I think trying to do it that way increases the risk that the single private organization captures its regulators and ends up without effective oversight. To put it bluntly: I think it's going to be easier, politically, to regulate this technology with it being a battle between Microsoft, Meta, and Google all focused on commercial applications, than with the clearly dominant organization being a nonprofit that is supposedly altruistic and self-regulating.

I have sympathy for people who think that all sounds like a bad outcome because they are skeptical of politics and trust the big brains at OpenAI more. But personally I think governments have the ultimate responsibility to look out for the interests of the societies they govern.

replies(1): >>gwd+jE
◧◩
226. _heimd+iC[view] [source] [discussion] 2023-11-20 12:42:59
>>imgabe+J1
That assumption hasn't worked with the cigarette, oil, or pharmaceutical industries. Why would it work here?

It doesn't take a cartoon supervillain to keep selling cigarettes like candy even though you know they increase cancer risks. Or for oil companies to keep producing oil and burying alternative energy sources. Or for the Sacklers to give us Oxy.

◧◩◪◨
227. _heimd+yC[view] [source] [discussion] 2023-11-20 12:44:41
>>nxm+w8
That economic growth wasn't an absolute necessity that had to be powered, it was a choice based on the assumption that creating new stuff is always a positive and that we have functionally limitless natural resources that we should use before someone else does.
◧◩◪◨
228. staunt+BC[view] [source] [discussion] 2023-11-20 12:45:01
>>joenot+7v
What's the educated view?
replies(1): >>insani+JD
◧◩◪
229. starfa+UC[view] [source] [discussion] 2023-11-20 12:47:12
>>stingr+Tk
Now that OpenAI is the leader in the field, it has a lot of monetisation avenues above and over the existing income streams of partnerships, ChatGPT+ and API access.
◧◩◪◨⬒
230. kvetch+WC[view] [source] [discussion] 2023-11-20 12:47:18
>>sebzim+Gs
Deep NN aren't the only path to AGI... They actually could be one of the worst paths

For Example, check out the proceedings of the AGI Conference that's been going on for 16 years. https://www.agi-conference.org/

I have faith in Ilya. He's not going to allow this blunder to define his reputation.

He's going to go all in on research to find something to replace Transformers, leaving everyone else in the dust.

◧◩◪◨
231. jampek+2D[view] [source] [discussion] 2023-11-20 12:48:17
>>FpUser+lv
I know about many companies that have turned countries, and even continents, upside down while having shareholders' profit in mind.
replies(1): >>FpUser+yR
◧◩◪◨⬒
232. _heimd+4D[view] [source] [discussion] 2023-11-20 12:48:20
>>agsnu+Bi
I interned at Exxon during the gulf oil spill and saw two interesting actions play out while there.

Exxon was responsible for the oil spill response that coagulated the oil and sank it. They were surprisingly proud of this, having recommended it to BP so that the extent of leaked oil was less noticeable from the surface.

Exxon also invested heavily in an alternative energy company doing research to create oil from a certain type of algae. The investment was all a PR stunt that gave them enough leverage to shelve the research that was successful enough to be considered a threat.

◧◩
233. BlueTe+8D[view] [source] [discussion] 2023-11-20 12:48:38
>>cyanyd+Lp
This music video would have been prophetic, huh ?

Delta Heavy - Ghost (Official Video)

https://www.youtube.com/watch?v=b4taIpALfAo

◧◩
234. staunt+9D[view] [source] [discussion] 2023-11-20 12:48:53
>>denton+rx
I bet you can only name one...
replies(1): >>denton+oY
◧◩◪◨⬒⬓⬔
235. insani+dD[view] [source] [discussion] 2023-11-20 12:49:12
>>hef198+EB
Pretty sure that it would be illegal for them to tweet insider information like that if it were false, since it's effectively a statement to shareholders.
replies(1): >>hef198+tD
◧◩◪
236. causi+hD[view] [source] [discussion] 2023-11-20 12:49:29
>>Applej+Xt
What's the difference for the world between the Altman and the Sutskever approach for OpenAI? With Altman the bad stuff happens at OpenAI and everyone gets it at the same time. With Sutskever, the bad stuff happens two years later but it happens in random pockets all over the world and nobody can be quite sure what they're facing.
◧◩◪◨⬒⬓⬔⧯
237. hef198+tD[view] [source] [discussion] 2023-11-20 12:50:23
>>insani+dD
I'll take securities fraud for 420, please, but private.
replies(1): >>insani+VD
238. mrangl+DD[view] [source] 2023-11-20 12:51:08
>>9dev+(OP)
Who says that the OpenAI Board has humanity's interests in mind? Copy and reality are often different. It's more likely that said Board feels most of its pressure from the Press, which is for-profit and often has partisan agendas that are detached from humanity's interests. Whereas the profit motive traditionally does pesky things like incentivizing company response to the market (humanity) and keeping them from doing braindead things like freeing up their talent to be scooped up by "megacorp" because of either a. ego or b. pressure from outside forces with their own agendas.
◧◩◪◨⬒
239. insani+JD[view] [source] [discussion] 2023-11-20 12:51:45
>>staunt+BC
No one I've ever had as an investor would be OK with me enslaving the planet for 1 Trillion dollars...

You're talking about investors and shareholders like they're just machines that only ever prioritize profit. That's just obviously not true.

replies(1): >>jampek+RH
◧◩◪◨⬒⬓⬔⧯▣
240. insani+VD[view] [source] [discussion] 2023-11-20 12:52:50
>>hef198+tD
That's exactly the point - by tweeting insider information you are making a public statement. We've learned this very recently...
replies(1): >>hashha+o91
◧◩
241. gwd+jE[view] [source] [discussion] 2023-11-20 12:54:29
>>sander+1C
> These concerns are in the hands of voters and their representatives in governments now, and really, they always were. A single private organization was never going to be able to solve the coordination problem of balancing progress in a technology against its impact on society.

Um, have you heard of lead additives to gasoline? CFCs? Asbestos? Smoking? History is littered with complete failures of governments to appropriately regulate new technology in the face of an economic incentive to ignore or minimize "externalities" and long-term risk for short-term gain.

The idea of having a non-profit, with an explicit mandate to pursue the benefit of all mankind, be the first one to achieve the next levels of technology was at least worth a shot. OpenAI's existence doesn't stop other companies from pursuing the technology, nor does it prevent governments doing coordination. But it at least gives a chance that a potentially dangerous technology will go in the right direction.

replies(2): >>johann+B21 >>sander+pS1
242. 383210+VE[view] [source] 2023-11-20 12:58:43
>>9dev+(OP)
> It’s a bit tragic that Ilya and company ...

There must be an Aesop’s fable that sheds light on the “tragedy”.

https://www.goodreads.com/quotes/923989-if-you-choose-bad-co...

Or maybe this one? (The Ape seems to map to Microsoft, or is possibly a hat tip to Ballmer...)

The fable is of the Two Travellers and the Apes.

Two men, one who always spoke the truth and the other who told nothing but lies, were traveling together and by chance came to the land of Apes. One of the Apes, who had raised himself to be king, commanded them to be seized and brought before him, that he might know what was said of him among men. He ordered at the same time that all the Apes be arranged in a long row on his right hand and on his left, and that a throne be placed for him, as was the custom among men.

After these preparations, he signified that the two men should be brought before him, and greeted them with this salutation: "What sort of a king do I seem to you to be, O strangers?" The Lying Traveller replied, "You seem to me a most mighty king." "And what is your estimate of those you see around me?" "These," he made answer, "are worthy companions of yourself, fit at least to be ambassadors and leaders of armies." The Ape and all his court, gratified with the lie, commanded that a handsome present be given to the flatterer.

On this the truthful Traveller thought to himself, "If so great a reward be given for a lie, with what gift may not I be rewarded if, according to my custom, I tell the truth?" The Ape quickly turned to him. "And pray how do I and these my friends around me seem to you?" "Thou art," he said, "a most excellent Ape, and all these thy companions after thy example are excellent Apes too." The King of the Apes, enraged at hearing these truths, gave him over to the teeth and claws of his companions.

The end.

replies(1): >>383210+aX
◧◩◪◨
243. mrangl+iF[view] [source] [discussion] 2023-11-20 13:00:42
>>arthur+6i
Corporate profits should be the driving force, because then at least we know what (and who) the controlling source is, and where. Whereas "humanity" is a PR word for far fuzzier, darker sources rooted in the political machine and its extensions, functionally speaking. The former is far more open to influence by actual humanity, ironically. Laws can be created and monitored that apply directly to said corporate force, if need be. Not so much for the political machine.
◧◩◪◨⬒⬓
244. nuance+rF[view] [source] [discussion] 2023-11-20 13:01:40
>>jiggaw+Ys
Five years from now, not only will AI be more advanced; the techniques and machinery for making things will be too. Just think about existing technological advancements and how absurdly 'ta-da' they would have sounded not too long ago.
◧◩◪◨
245. camill+OF[view] [source] [discussion] 2023-11-20 13:03:21
>>upward+rp
That, and all that can be traced back to Facebook, Instagram and social media's impact on society. Not just the shallow dopamine issue, but also bigger problems such as facilitating genocide. I was a skeptic for a long time, but the more we see what Meta stands for, the more I believe Mark Zuckerberg's companies have had a massively negative impact on the world.
◧◩◪◨⬒⬓⬔⧯▣
246. HenryB+QF[view] [source] [discussion] 2023-11-20 13:03:30
>>cyanyd+xq
We do, but it takes a long time, and by the time we get to enforce anything, the party is half over. How many years did Microsoft play around with IE as the default browser? And they are still playing dirty games with Edge. It's not that they don't learn. It's that they will play the game until someone stops them, and then they will start playing a different game.

Some people downvote (it's not about the points), but I'm merely stating reality, not my opinion.

I made my living as a sysadmin early in my career using MS products, so thank you, MS, for putting food on my table. But that doesn't negate the dirty games/dark patterns/etc.

◧◩◪◨⬒
247. TeMPOr+RF[view] [source] [discussion] 2023-11-20 13:03:43
>>JBiser+Gv
> it seems to me that a straw is topologically equivalent to a torus, so it has 1 hole, right?

For a mathematician, yes. For everyone else, it obviously has two, because only when you plug one end does it have one.
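A sketch of how the mathematician's count can be made precise, using homology (the claims about retraction and Betti numbers are standard topology, not taken from the comments above):

```latex
% A straw with wall thickness is a solid torus; it deformation-retracts
% onto the circle S^1, so the two spaces have the same homology groups.
H_1(\mathrm{straw}) \;\cong\; H_1(S^1) \;\cong\; \mathbb{Z},
\qquad b_1 = 1 \quad \text{(one independent loop, i.e.\ one hole).}
% An actual (hollow-surface) torus T^2 has b_1 = 2, so the quoted
% comparison "equivalent to a torus" really means the solid torus.
```

In other words, the "one hole" answer is the first Betti number; the "two holes" answer counts openings, which is a different (and perfectly reasonable) notion.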

248. criley+UF[view] [source] 2023-11-20 13:04:01
>>9dev+(OP)
>I don’t quite buy your Cyberpunk utopia where the Megacorp finally rids us of those pesky ethics qualms

It's ironic, because the only AI without "pesky ethics qualms" is... literally the entire open source scene, all the models on Hugging Face, etc...

The megacorps are the only ones doing any safety and security work in AI. I can easily run open source models locally and create all manner of political propaganda; I could create p^rnography of celebrities and politicians, or deeply racist or bigoted material. The open source scene makes this trivial these days.

So to describe it as "Cyberpunk utopia where the Megacorp finally rids us of those pesky ethics qualms" when the open source scene has already done that today is just wild to me.

We have AI without ethics today, and every not-for-profit researcher, open source model, and hacker with a cool script is behind it. If OpenAI goes back to being Open, they'll help supercharge the no-ethics AI reality of running models without corporate safety and ethics.

◧◩◪
249. camill+kG[view] [source] [discussion] 2023-11-20 13:06:25
>>UrineS+W5
Facilitating genocide in Myanmar, for one. Poisoning the wells of the Web with the worst kind of profiled advertisement the world has ever seen. Perfecting and optimizing the addiction mechanism of smartphones. Creating a mental health epidemic in younger women. I mean, the body of studies and malfeasances is out there, we just keep ignoring it.
◧◩◪◨⬒⬓
250. fragme+qG[view] [source] [discussion] 2023-11-20 13:07:07
>>suslik+zz
https://chat.openai.com/share/9b4f04f7-062f-40c3-b6a3-e972f7...

ChatGPT says "fuck" just fine.

replies(1): >>suslik+Ck1
◧◩◪◨⬒⬓
251. jampek+RH[view] [source] [discussion] 2023-11-20 13:16:31
>>insani+JD
Have you heard of e.g. East India companies? Or United Fruit?

Most stock is not owned by individual persons (not that there aren't individuals who don't give a shit about enslaving people), but by other companies and institutions that by charter prioritize profit. E.g. Microsoft's institutional ownership is around 70%.

replies(1): >>insani+iI
◧◩◪◨⬒⬓⬔
252. insani+iI[view] [source] [discussion] 2023-11-20 13:18:41
>>jampek+RH
The presence of unethical people does not imply that all people are unethical, only that people are different. And that's my point. Reducing a company to "they will always maximize shareholder value" is incorrect - for many, many companies that is simply not true.
replies(1): >>jampek+4N
◧◩◪◨⬒
253. mlrtim+tI[view] [source] [discussion] 2023-11-20 13:19:30
>>9dev+8y
There are no objective ethical considerations; furthermore, there is absolutely zero evidence in the events unfolding now that "ALL" ethical considerations are being thrown overboard.

Godwin's Law.

◧◩◪
254. Terrif+3L[view] [source] [discussion] 2023-11-20 13:30:09
>>stingr+Tk
Is their deal with Microsoft exclusive tech transfer wise? If not they can always sell/license what they have to Google, Facebook, and Amazon. They should be able to get quite a bit of money to last a while.
◧◩◪◨⬒⬓⬔⧯
255. jampek+4N[view] [source] [discussion] 2023-11-20 13:38:48
>>insani+iI
My point is that in the big picture ethics don't even matter; companies become something that transcends the individuals. Almost like algorithms that just happen to be implemented by humans (and increasingly by machines). There is no "they".
replies(1): >>insani+sO
◧◩◪◨⬒⬓⬔⧯▣
256. cyanyd+NN[view] [source] [discussion] 2023-11-20 13:41:39
>>imgabe+Vz
People capable of violence don't need to be smart, because they're capable of violence.

The point is, you can't rely on a scenario where, after society breaks down, survivors will act more rationally than they do now.

◧◩◪◨⬒⬓⬔⧯▣
257. insani+sO[view] [source] [discussion] 2023-11-20 13:44:51
>>jampek+4N
That's just not true. Companies have individuals in charge with considerable power. Those individuals can absolutely make ethical decisions.
replies(1): >>jampek+9O1
◧◩◪◨⬒
258. FpUser+yR[view] [source] [discussion] 2023-11-20 13:56:23
>>jampek+2D
So either is fucked up. Why would we prefer one over the other? What's your point?
replies(1): >>jampek+e11
259. Zpalmt+GR[view] [source] 2023-11-20 13:56:43
>>9dev+(OP)
Don't think we should let crazy effective altruists hamstring development
◧◩◪◨
260. FpUser+VR[view] [source] [discussion] 2023-11-20 13:57:48
>>9dev+Wy
>"We're talking about a non-profit organisation that wants to keep the pace of scientific progress on AGI research slow enough to make time for society to gauge the ethical implications on an actual AGI, should it emerge."

Or so they say. I have no reason to trust them. It is not some little thing we are talking about.

◧◩◪◨⬒⬓
261. m-p-3+9S[view] [source] [discussion] 2023-11-20 13:58:27
>>johnsi+oA
Still, they must be bleeding money with the humongous amount of queries they get.
◧◩◪◨
262. Zpalmt+1T[view] [source] [discussion] 2023-11-20 14:01:13
>>arthur+6i
And that's a good thing
◧◩
263. dorfsm+DV[view] [source] [discussion] 2023-11-20 14:09:07
>>dalbas+Pw
Can you explain what you mean in your second to last paragraph?

The GNU project and the Wikimedia Foundation are still non-profits today, and even if you disagree with their results, their goal is to serve humanity for free.

replies(1): >>dalbas+iS1
◧◩
264. 383210+aX[view] [source] [discussion] 2023-11-20 14:13:29
>>383210+VE
Plot twist:

https://nitter.net/ilyasut/status/1726590052392956028

“I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company.”

Nov 20, 2023 · 1:15 PM UTC - Ilya S.

◧◩◪◨⬒
265. Zpalmt+TX[view] [source] [discussion] 2023-11-20 14:16:04
>>injeol+u9
I use OpenAI via API access, and ChatGPT/gpt-4/gpt-4-turbo are still very censored. text-davinci-003 is the most uncensored model I have found that is still reasonably usable.
◧◩◪
266. denton+oY[view] [source] [discussion] 2023-11-20 14:17:35
>>staunt+9D
You lose the bet. I can name three off the top of my head.
replies(1): >>staunt+9Q2
◧◩◪◨⬒⬓
267. jampek+e11[view] [source] [discussion] 2023-11-20 14:26:48
>>FpUser+yR
Maybe having a dictator or profit motive running the show is not a binary choice?
◧◩◪
268. johann+B21[view] [source] [discussion] 2023-11-20 14:31:44
>>gwd+jE
> have you heard of lead additives to gasoline? CFCs? Asbestos? Smoking? History is littered with complete failures of governments to appropriately regulate new technology

Most of those problems have been solved, or at least reduced, by regulation. Regulators aren't all-knowing gods, and risks and problems often only come to light later, but except for smoking, regulators have covered those cases (and anti-smoking laws are, generally, becoming stricter in most countries, though smoking is a cultural habit older than most states...).

◧◩
269. meigwi+k41[view] [source] [discussion] 2023-11-20 14:38:33
>>imgabe+J1
Capital punishment exists in many countries, but still fails to dissuade many people from murder.

It's not about wanting to destroy the world, but about short-term greed whose consequences destroy the world.

270. JKCalh+H61[view] [source] 2023-11-20 14:47:26
>>9dev+(OP)
When I read that Sam and Greg had joined Microsoft, I assumed the two had already been in talks with Microsoft for some time.

I assumed it was their entertaining offers from Microsoft that got Sam the ax from the OpenAI board.

◧◩◪◨⬒⬓⬔⧯▣▦
271. hashha+o91[view] [source] [discussion] 2023-11-20 15:00:44
>>insani+VD
Parent probably meant that there's no securities fraud, since no securities are involved as it's not a publicly traded company.
replies(1): >>insani+rZ1
◧◩◪◨⬒
272. Eggpan+pb1[view] [source] [discussion] 2023-11-20 15:13:33
>>jampek+yw
There is no such law, ChatBLT told me so. But seriously, there isn't, because it's so vague. Short-term vs. long-term profits alone produce so much wiggle room that even if such a law existed it would be meaningless.
◧◩◪◨⬒⬓⬔⧯
274. Solven+ge1[view] [source] [discussion] 2023-11-20 15:33:02
>>cyanyd+Iv
Really what we're observing with these people is the survivorship bias of humans with astounding levels of cognitive dissonance — which nearly all humans have. Except they have the rare combination of wealth and luck on their side...until it runs out.
◧◩◪◨
275. wredue+yh1[view] [source] [discussion] 2023-11-20 15:54:16
>>joenot+7v
Dude no. Companies literally start wars and have us peasants murdered.

A journalist was car bombed in broad daylight.

If you push the wrong buttons of trillion dollar corporations, they just off you and continue with business as usual.

If Microsoft sees trillions of dollars in ending all of your work, they’ll take it in a heart beat.

◧◩◪◨
276. PH95Vu+Hh1[view] [source] [discussion] 2023-11-20 15:55:36
>>dalore+ct
that sentence makes no sense to me, what is a straw here?
◧◩◪◨⬒⬓⬔
277. suslik+Ck1[view] [source] [discussion] 2023-11-20 16:12:27
>>fragme+qG
Yes, naturally. But both you and I know exactly what I meant by this hyperbole.
◧◩◪◨
278. slg+wo1[view] [source] [discussion] 2023-11-20 16:32:38
>>RcouF1+dt
>You know what is an even bigger temptation to people than money - power.

Do you think profit minded people and organizations aren't motivated by a desire for power? Removing one path to corruption doesn't mean I think it is impossible for a non-profit to become corrupted, but it is one less thing pulling them in that direction.

◧◩◪◨⬒
279. two_in+xv1[view] [source] [discussion] 2023-11-20 17:03:42
>>gremli+Ty
> these were post-Soviet, American-style "democratic" leaders

Before that, the USSR collapsed under Gorbachev. Why? They simply lost with their planned economy, in which nobody wants to take a risk, because (1) it's not rewarding, (2) no individual has enough resources, and (3) to get things moving you have to convince a lot of bureaucrats who don't want to take a risk. They moved forward thanks to a few exceptional people, but there weren't as many people willing to take a risk as in 'rotting' capitalism. I don't know why the leaders didn't see the Chinese way; probably they were busy with internal rat fights and didn't see what was in it for them.

My idea is that there are two extremes. On the left side, people can be happy like yogis, but they don't produce anything or move forward. On the right side is pure capitalism, which is inhuman. The optimum is somewhere in between, with good quality of life and fast progress. What happens when resources are shared too much and life is good? You can see it in Germany today: 80% of Ukrainian refugees don't work and don't want to.

280. dang+gA1[view] [source] 2023-11-20 17:18:48
>>9dev+(OP)
We detached this subthread from >>38344458 . Nothing wrong with your comment—I'm just trying to prune the heaviest subthreads.
◧◩◪◨⬒⬓⬔
281. cyanyd+GD1[view] [source] [discussion] 2023-11-20 17:29:03
>>noprom+Az
The existential fear of billionaires appears to be losing their things rather than their lives.
◧◩◪◨⬒⬓⬔⧯▣▦
282. jampek+9O1[view] [source] [discussion] 2023-11-20 18:05:22
>>insani+sO
E.g. Henry Ford tried to make an ethical decision for the company: cutting some dividends to benefit workers and make the products cheaper. It was ruled illegal. His mistake, actually, was to say that it was about more than profit; arguing the investment on shareholder-profit grounds could well have passed.

It's probably safe to say Henry Ford had considerable power in Ford Motor Co compared to most executives today?

https://en.wikipedia.org/wiki/Dodge_v._Ford_Motor_Co.

replies(1): >>insani+6Z1
◧◩◪
283. dalbas+iS1[view] [source] [discussion] 2023-11-20 18:19:31
>>dorfsm+DV
I'm not criticizing these projects or their current legal structure.

What I mean is that they were created as public goods and functioned as such. Each had a unique way of being open, spreading the value of its work as far as possible.

They were extraordinary. Incredible quality. Incredible power. Incredible ability to be built upon... particularly the WWW.

All achieved things that simply could not have been achieved by a normal commercial venture.

Google, FB and co essentially stole them. They built closed platforms atop open ones, built bridges between users and the public domain, and now monopolize them like bridge trolls.

Considering how much a part of that culture a company like Google was 20 years ago, this is the real treason.

◧◩◪
284. sander+pS1[view] [source] [discussion] 2023-11-20 18:19:59
>>gwd+jE
Your response is exactly what I had in mind when I referred to people who are "skeptical of politics and trust the big brains at OpenAI more".

You aren't wrong that government regulation is not a great solution, but I believe it is - like democracy, and for the same reasons - the worst solution, except for all the others.

I don't disagree that using a non-profit to enforce self-regulation was "worth a shot", but I thought it was very unlikely to succeed at that goal, and indeed has been failing to succeed at that goal for a very long time. But I'm not mad at them for trying.

(I do think too many people used this as an excuse to argue against any government oversight by saying, "we don't need that, we have a self-regulating non-profit structure!", I think mostly cynically.)

> But it at least gives a chance that a potentially dangerous technology will go in the right direction.

I know you wrote this comment a full five hours ago and stuff has been moving quickly, but I think this needs to be in the past tense. It appears clear now that over 90% of the OpenAI staff did not believe in this mission, and thus it was never going to work.

If you care about this, I think you need to be thinking about what else to pursue to give us that chance. I personally think government regulation is the only plausible option to pursue here, but I won't begrudge folks who want to keep trying more novel ideas.

(And FWIW, I don't personally share the humanity-destroying concerns people have; but I think regulation is almost always appropriate for big new technologies to some degree, and that this is no exception.)

◧◩◪◨⬒⬓⬔⧯▣▦▧
285. insani+6Z1[view] [source] [discussion] 2023-11-20 18:43:31
>>jampek+9O1
> Probably safe to say Henry Ford had considerable power in Ford Motor Co compared most executives today?

That is not necessarily true. Your power is typically very term-dependent: a CEO who is also president of the board and a majority shareholder has far more power than a CEO who just stepped in temporarily and has only the powers provided by the by-laws.

Regardless, the solution to "I want to do something ethical that is not strictly in the company's best interest" is to make the case that it is the company's best interest. For example, "By investing in our employees we are actually prioritizing shareholder value". If you position it as "this is a move that hurts shareholders", of course that's illegal - companies have an obligation to every shareholder.

That also means that if you give your employees stock, they now have investor rights too. You can structure your company this way from the start, it's trivial and actually the norm in tech - stock is handed out to many employees.

◧◩◪◨⬒⬓⬔⧯▣▦▧
286. insani+rZ1[view] [source] [discussion] 2023-11-20 18:44:32
>>hashha+o91
The shareholders are still invested, they still have a 409A valuation, and these statements are definitely going to have legal weight.
◧◩◪◨
287. staunt+9Q2[view] [source] [discussion] 2023-11-20 22:18:26
>>denton+oY
Who? (if you don't want to name anyone, any hint how to find them? I personally only know one...)
◧◩◪◨⬒⬓⬔
288. fourth+kC3[view] [source] [discussion] 2023-11-21 03:24:32
>>wheele+Yf
What.
◧◩◪◨⬒⬓⬔
289. dang+SU3[view] [source] [discussion] 2023-11-21 05:45:53
>>thworp+Sj
Could you please stop posting unsubstantive comments and flamebait? You've unfortunately been doing it repeatedly (e.g. >>37336350 ). It's not what this site is for, and destroys what it is for.

If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.

◧◩◪◨⬒
290. dalore+3I4[view] [source] [discussion] 2023-11-21 12:44:19
>>visarg+jA
Now follow up with: how many holes do trousers have?
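By the same homological bookkeeping as in the straw subthread (a sketch, assuming idealized trousers with no pockets), trousers deformation-retract onto a figure-eight, a wedge of two circles:

```latex
% Trousers retract onto a figure-eight: the waist loop is homologous
% to the sum of the two leg loops, so only two loops are independent.
H_1(\mathrm{trousers}) \;\cong\; H_1(S^1 \vee S^1) \;\cong\; \mathbb{Z}^2,
\qquad b_1 = 2.
% Three openings (waist plus two legs), but only two independent holes.
```

So the topologist says two, while the opening-counter says three.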