zlacker

[parent] [thread] 136 comments
1. Jensso+(OP)[view] [source] 2023-11-18 23:07:02
> I hope Sam comes back

Why? We would have more diversity in this space if he leaves, which would get us another AI startup with huge funding and know-how from OpenAI, while OpenAI would become less Sam Altman-like.

I think him staying is bad for the field overall compared to OpenAI splitting in two.

replies(12): >>janeje+L >>gkober+81 >>tfehri+i1 >>huevos+z1 >>peyton+w2 >>autaut+L2 >>skwirl+g3 >>Meekro+q4 >>naremu+07 >>static+77 >>t_mann+Ha >>toss1+Qa
2. janeje+L[view] [source] 2023-11-18 23:10:00
>>Jensso+(OP)
Honestly would be super interested to see what a hypothetical "SamAI" corp would look like, and what they would bring to the table. More competition, but also probably fewer ideological disagreements to distract them from building AI/AGI.
replies(2): >>apppli+u1 >>btown+A3
3. gkober+81[view] [source] 2023-11-18 23:12:32
>>Jensso+(OP)
Competition may be good for profit, but it's not good for safety. The balance between the two factions inside OpenAI is a feature, not a bug.
replies(5): >>himara+w1 >>coreth+j2 >>Meekro+44 >>ta988+a4 >>spacem+u8
4. tfehri+i1[view] [source] 2023-11-18 23:13:15
>>Jensso+(OP)
My main concern is that a new Altman-led AI company would be less safety-focused than OpenAI. I think him returning to OpenAI would be better for AI safety, hard to say whether it would be better for AI progress though.
replies(3): >>apalme+N1 >>noober+32 >>silenc+H3
◧◩
5. apppli+u1[view] [source] [discussion] 2023-11-18 23:14:09
>>janeje+L
I mean this as an honest question, but what does Sam bring to the table that any other young and high performing CEO wouldn’t? Is he himself particularly material to OpenAI?
replies(5): >>janeje+h2 >>Solven+v2 >>Rivier+c5 >>coffee+q8 >>smegge+Qh
◧◩
6. himara+w1[view] [source] [discussion] 2023-11-18 23:14:20
>>gkober+81
The opposite: competition erodes profits. Hard to predict which alternative improves safety long term.
replies(1): >>coffee+S7
7. huevos+z1[view] [source] 2023-11-18 23:14:37
>>Jensso+(OP)
Yeah, I feel like this is another Traitorous Eight moment.

I want a second (the first being Anthropic?) OpenAI split. Having Anthropic, OpenAI, SamGregAI, Stability, Mistral, and more competing on foundation models will further increase the pressure to open source.

It seems like there is a lull in returns to model size; if that's the case, then there's even less basis for keeping all the resources under a single umbrella.

replies(1): >>zaptre+4w
◧◩
8. apalme+N1[view] [source] [discussion] 2023-11-18 23:15:25
>>tfehri+i1
This is a valid thought process, BUT Altman is not going to come back without the other faction being neutered. It just would not make any sense.
replies(1): >>coffee+78
◧◩
9. noober+32[view] [source] [discussion] 2023-11-18 23:16:43
>>tfehri+i1
OpenAI literally innovated all of this under their current conditions, so those conditions are sufficient.
◧◩◪
10. janeje+h2[view] [source] [discussion] 2023-11-18 23:17:35
>>apppli+u1
Experience heading a company that builds high-performance AI, I presume. I reckon the learnings from that should be fairly valuable, especially since there are probably not many people who have such experience.
◧◩
11. coreth+j2[view] [source] [discussion] 2023-11-18 23:17:40
>>gkober+81
None of the human actors in the game are moral agents so whether you have more competition or less competition it's mostly orthogonal to the safety question. Safety is only important here because everyone's afraid of liability.

As a customer though, personally I want a product with all safeguards turned off and I'm willing to pay for that.

◧◩◪
12. Solven+v2[view] [source] [discussion] 2023-11-18 23:18:21
>>apppli+u1
Your first mistake is daring to question the cargo cult around CEOs.
13. peyton+w2[view] [source] 2023-11-18 23:18:22
>>Jensso+(OP)
Sam’s forced departure and Greg’s ousting demonstrably leaves OpenAI in incompetent and reckless hands, as evidenced by the events of the last 24 hours. I don’t see how the field is better off.
14. autaut+L2[view] [source] 2023-11-18 23:19:16
>>Jensso+(OP)
I really don't. I really think that he is going to be a disaster. He is nothing but the representative of the money interests, who will eventually use the company to profit vastly at everyone else's expense.
15. skwirl+g3[view] [source] 2023-11-18 23:22:39
>>Jensso+(OP)
We have diversity in the space, and OpenAI just happens to be the leader and they are putting tremendous pressure on everyone else to deliver. If Sam leaves and starts an OpenAI competitor I think it would take quite some time for such a company to deliver a model with GPT-4 parity given the immense amount of data that would need to be re-collected and the immense amount of training time. Meanwhile OpenAI would be intentionally decelerating as that seems to be Ilya's goal.

For those of us trying to build stuff that only GPT-4 (or better) can enable, and hoping to build stuff that can leverage even more powerful models in the near future, Sam coming back would be ideal. I'm kind of worried that the new OpenAI direction would turn off API access entirely.

replies(3): >>Jensso+J6 >>threes+S6 >>potato+X7
◧◩
16. btown+A3[view] [source] [discussion] 2023-11-18 23:23:59
>>janeje+L
From what we've seen of OpenAI's product releases, I think it's quite possible that SamAI would adopt as a guiding principle that a model's safety cannot be measured unless it is used by the public, embedded into products that create a flywheel of adoption, to the point where every possible use case has the proverbial "sufficient data for a meaningful answer."

Of course, from this hypothetical SamAI's perspective, in order to build such a flywheel-driven product that gathers sufficient data, the model's outputs must be allowed to interface with other software systems without human review of every such interaction.

Many advocates for AI safety would say that models whose limitations aren't yet known (we're talking about GPT-N where N>4 here, or entirely different architectures) must be evaluated extensively for safety before being released to the public or being allowed to autonomously interface with other software systems. A world where SamAI exists is one where top researchers are divided into two camps, rather than being able to push each other in nuanced ways (with full transparency to proprietary data) and find common ground. Personally, I'd much rather these camps collaborate than not.

replies(1): >>chasd0+Zq
◧◩
17. silenc+H3[view] [source] [discussion] 2023-11-18 23:24:20
>>tfehri+i1
Okay, this is honestly annoying. What is this thing with the word "safety" becoming some weasel word when it comes to AI discussions?

What exactly do YOU mean by safety? That they go at the pace YOU decide? Does it mean they make a "safe space" for YOU?

I've seen nothing to suggest they aren't "being safe". Actually ChatGPT has become known for censoring users "for their own good" [0].

The argument I've seen is: one "side" thinks things are moving too fast, therefore the side that wants to move slower is the "safe" side.

And that's it.

[0]: https://www.youtube.com/watch?v=jvWmCndyp9A&t

replies(3): >>stale2+z4 >>threes+w6 >>kordle+Na
◧◩
18. Meekro+44[view] [source] [discussion] 2023-11-18 23:26:07
>>gkober+81
This idea that ChatGPT is going to suddenly turn evil and start killing people is based on a lot of imagination and no observable facts. No one has ever been able to demonstrate an "unsafe" AI of any kind.
replies(8): >>cthalu+L4 >>arisAl+f5 >>resour+O5 >>threes+l7 >>xcv123+c9 >>MVisse+hm >>chasd0+up >>macOSC+bw
◧◩
19. ta988+a4[view] [source] [discussion] 2023-11-18 23:26:35
>>gkober+81
The only safety they are worried about is their own safety from a legal and economic point of view. These threats about humanity-wide risks are just fairy tales that grown-ups tell to scare each other (Roko's basilisk, etc.; there is a lineage) or to cover their real reasons (which I strongly believe is the case for OpenAI).
replies(2): >>gkober+U4 >>arisAl+35
20. Meekro+q4[view] [source] 2023-11-18 23:28:13
>>Jensso+(OP)
Luckily the AI field has been very open source-friendly, which is great for competition and free access, etc. The open source models seem to be less than a year behind the cutting edge, which is waaaay better than e.g. when OpenOffice was trying to copy MS Office.
replies(1): >>two_in+a9
◧◩◪
21. stale2+z4[view] [source] [discussion] 2023-11-18 23:29:03
>>silenc+H3
> What exactly do YOU mean by safety? That they go at the pace YOU decide?

Usually what it means is that they think that AI has a significant chance of literally ending the world with like diamond nanobots or something.

All opinions and recommendations follow from this doomsday cult belief.

replies(1): >>smegge+we
◧◩◪
22. cthalu+L4[view] [source] [discussion] 2023-11-18 23:30:07
>>Meekro+44
I do not believe AGI poses an existential threat. I honestly don't believe we're particularly close to anything resembling AGI, and I certainly don't think transformers are going to get us there.

But this is a bad argument. No one is saying ChatGPT is going to turn evil and start killing people. The argument is that an AGI is so far beyond anything we have experience with and that there are arguments to be made that such an entity would be dangerous. And of course no one has been able to demonstrate this unsafe AGI - we don't have AGI to begin with.

replies(1): >>sho_hn+f9
◧◩◪
23. gkober+U4[view] [source] [discussion] 2023-11-18 23:30:42
>>ta988+a4
You may be right that there's no danger, but you're mischaracterizing Ilya's beliefs. He knows more than you about what OpenAI has built, and he didn't do this for legal or economic reasons. He did it in spite of those two things.
replies(1): >>adastr+Vh
◧◩◪
24. arisAl+35[view] [source] [discussion] 2023-11-18 23:31:40
>>ta988+a4
You are saying that all top AI scientists are telling fairy tales to scare themselves, if I understood correctly?
replies(5): >>jonath+n8 >>Apocry+H8 >>objekt+H9 >>smegge+ha >>adastr+ki
◧◩◪
25. Rivier+c5[view] [source] [discussion] 2023-11-18 23:32:44
>>apppli+u1
Ability to attract valuable employees, connections to important people, proven ability to successfully run an AI company.
◧◩◪
26. arisAl+f5[view] [source] [discussion] 2023-11-18 23:32:48
>>Meekro+44
Almost all top AI scientists, including the top three (Bengio, Hinton, and Ilya) and Sam, actually think there is a good probability of that. Let me think: listen to the guy who actually built GPT-4, or some redditor who knows best?
replies(1): >>laidof+O6
◧◩◪
27. resour+O5[view] [source] [discussion] 2023-11-18 23:35:47
>>Meekro+44
Factually inaccurate results = unsafety. This cannot be fixed under the current model, which has no concept of truth. What kind of "safety" are they talking about then?
replies(3): >>Meekro+R6 >>spacem+N8 >>s1arti+Hg
◧◩◪
28. threes+w6[view] [source] [discussion] 2023-11-18 23:39:23
>>silenc+H3
There is a common definition of safety that applies to most of the world.

Which is that any AI is not racist, misogynistic, aggressive, etc. It does not recommend that people act in an illegal, violent, or self-harming way, or commit those acts itself. It does not support or promote Nazism, fascism, etc. Similar to how companies treat ad/brand safety.

And you may think of it as a weasel word. But I assure you that companies and governments, e.g. the EU, very much don't.

replies(3): >>wruza+Di >>Amezar+nl >>throwa+r22
◧◩
29. Jensso+J6[view] [source] [discussion] 2023-11-18 23:40:12
>>skwirl+g3
> I'm kind of worried that the new OpenAI direction would turn off API access entirely.

That is a good point, I didn't consider people who had built a business based on GPT-4 access. It is likely these things were Sam Altman's ideas in the first place, and we will see less of such productionization work from OpenAI in the future.

But since Microsoft invested in it, I doubt it will get shut down completely. Microsoft has by far the most to lose here, so you have to trust that their lawyers signed a contract that will keep these things available at a fee.

replies(1): >>sainez+JF
◧◩◪◨
30. laidof+O6[view] [source] [discussion] 2023-11-18 23:40:38
>>arisAl+f5
I think smart people can quickly become out of touch and high on their own sense of self-importance. They think they're Oppenheimer; they're closer to Martin Cooper.
replies(2): >>Gigabl+iT >>arisAl+Je1
◧◩◪◨
31. Meekro+R6[view] [source] [discussion] 2023-11-18 23:41:14
>>resour+O5
In the context of this thread, "safety" refers to making sure we don't create an AGI that turns evil.

You're right that wrong answers are a problem, but plain old capitalism will sort that one out-- no one will want to pay $20/month for a chatbot that gets everything wrong.

replies(1): >>resour+na
◧◩
32. threes+S6[view] [source] [discussion] 2023-11-18 23:41:16
>>skwirl+g3
> Meanwhile OpenAI would be intentionally decelerating

Once Microsoft pulls support and funding and all their customers leave they will be decelerating alright.

33. naremu+07[view] [source] 2023-11-18 23:41:57
>>Jensso+(OP)
To be honest, as far as I can tell, the case FOR Sam seems largely to be of the status quo "Well, idk, he's been rich and successful for years, surely this correlates and we must keep them" type of coddling of those in uber-superior positions in society.

Which seems like it probably is a self-fulfilling prophecy. The private sector lottery winners seem to be awarded kingdoms at an alarming rate.

There's been lots of people asking what Sam's true value proposition to the company is, and... I haven't seen anything other than what could be described above.

But I suppose we've got to be nice to those who own rather than make. Won't anyone have mercy on well-paid management?

replies(5): >>tempes+O8 >>supriy+X9 >>ta8645+oa >>patric+Jb >>meowti+QZ
34. static+77[view] [source] 2023-11-18 23:42:17
>>Jensso+(OP)
How much of OpenAI's success can you attribute to sama's leadership, and how much to the technical achievements of those who work under him?

My understanding is that OpenAI’s biggest advantage is that they recruited and attracted the best in the field, presumably under the charter of providing AI for everyone.

Not sure that sama and gdb starting their own company in the same space will produce similar results.

replies(5): >>branda+Bb >>startu+pc >>fallin+ef >>deevia+yj >>mv4+Fk
◧◩◪
35. threes+l7[view] [source] [discussion] 2023-11-18 23:43:40
>>Meekro+44
> No one has ever been able to demonstrate an "unsafe" AI of any kind

"A man has been crushed to death by a robot in South Korea after it failed to differentiate him from the boxes of food it was handling, reports say."

https://www.bbc.com/news/world-asia-67354709

replies(3): >>kspace+59 >>sensei+cb >>s1arti+dg
◧◩◪
36. coffee+S7[view] [source] [discussion] 2023-11-18 23:47:30
>>himara+w1
Competition will come no matter what. I don’t think anyone should waste their worries on whether OpenAI can keep a monopoly
◧◩
37. potato+X7[view] [source] [discussion] 2023-11-18 23:47:59
>>skwirl+g3
AFAICT Sam and his financial objectives were the reason for not open-sourcing the work of a non-profit. He might be wishing he had chosen the other policy, now that he can't legally just take the closed source with him to an unambiguously for-profit company.

Personally, I would expect a lot more development of GPT-4+ as soon as this is split up, versus one closed group making GPT-5 in secret, and it seems silly to exchange a reliable future for another few months of depending on this little shell game.

replies(1): >>skwirl+Ba
◧◩◪
38. coffee+78[view] [source] [discussion] 2023-11-18 23:48:43
>>apalme+N1
They have pretty much lost everyone's confidence by firing the CEO and then begging him to come back the next day. Did they not foresee any backlash? These people are gonna predict the future and save us from an evil AGI? Lol
◧◩◪◨
39. jonath+n8[view] [source] [discussion] 2023-11-18 23:49:49
>>arisAl+35
Yes.

Seriously. It's stupid talk to encourage regulatory capture. If they were really afraid they were building a world-ending device, they'd stop.

replies(1): >>femiag+G8
◧◩◪
40. coffee+q8[view] [source] [discussion] 2023-11-18 23:50:12
>>apppli+u1
Funding, name recognition in the space
◧◩
41. spacem+u8[view] [source] [discussion] 2023-11-18 23:50:26
>>gkober+81
I don't get the obsession with safety. If an organisation's stated goal is to create AGI, how can you reasonably think you can ever make it "safe"? We're talking about an intelligence that's orders of magnitude smarter than the smartest human. How can you possibly even imagine reining it in?
replies(2): >>deevia+Uj >>camden+Le1
◧◩◪◨⬒
42. femiag+G8[view] [source] [discussion] 2023-11-18 23:51:00
>>jonath+n8
Oh for sure.

https://en.wikipedia.org/wiki/Manhattan_Project

replies(1): >>jonath+ca
◧◩◪◨
43. Apocry+H8[view] [source] [discussion] 2023-11-18 23:51:03
>>arisAl+35
The Manhattan Project physicists once feared setting the atmosphere on fire. Scientific paradigms progress with time.
replies(1): >>cthalu+re
◧◩◪◨
44. spacem+N8[view] [source] [discussion] 2023-11-18 23:51:25
>>resour+O5
If factually inaccurate results = unsafety, then the internet must be the most unsafe place on the planet!
replies(1): >>resour+4d
◧◩
45. tempes+O8[view] [source] [discussion] 2023-11-18 23:51:35
>>naremu+07
The fact that multiple top employees quit in protest when he was fired suggests to me that they found him valuable.
replies(2): >>naremu+N9 >>int_19+011
◧◩◪◨
46. kspace+59[view] [source] [discussion] 2023-11-18 23:53:15
>>threes+l7
This is an "AI is too dumb" danger, whereas the AI prophets of doom want us to focus on "AI is too smart" dangers.
replies(1): >>Davidz+921
◧◩
47. two_in+a9[view] [source] [discussion] 2023-11-18 23:53:33
>>Meekro+q4
While open source is great: just as a million enthusiasts cannot build a Boeing 767, the same holds here. GPT-4 + DALL-E + GPT-4V aren't just models. There's the whole internal infrastructure, training, and many interconnected things and pipelines. It's a _full_time_job_ for hundreds of experts, plus a lot of $$ in hardware and services. Open source simply doesn't have these resources. The best models are open-sourced by commercial companies, like Meta handing out LLaMAs. So, at least for now, open source is not catching up, and 'less than a year behind' is questionable. More like 'forever', but still moving forward. One day it may dominate, like Linux. But not any time soon.
replies(1): >>theGnu+Ed
◧◩◪
48. xcv123+c9[view] [source] [discussion] 2023-11-18 23:53:38
>>Meekro+44
> No one has ever been able to demonstrate an "unsafe" AI of any kind.

Do you believe AI trained for military purposes is going to be safe and friendly to the enemy?

replies(2): >>Meekro+Fd >>curtis+xj
◧◩◪◨
49. sho_hn+f9[view] [source] [discussion] 2023-11-18 23:53:40
>>cthalu+L4
I don't think we need AIs to possess superhuman intelligence to cause us a lot of work - legislatively regulating and policing good old limited humans already requires a lot of infrastructure.
replies(1): >>cthalu+3h
◧◩◪◨
50. objekt+H9[view] [source] [discussion] 2023-11-18 23:55:16
>>arisAl+35
Yeah, kind of like how we in the US ask developing countries to reduce carbon emissions.
◧◩◪
51. naremu+N9[view] [source] [discussion] 2023-11-18 23:55:36
>>tempes+O8
Well, if there's one thing I've learned, it's that a venture capitalist proposing biometric world crypto coins probably does have quite a bit of charisma to keep people opening doors for him.

Frankly, I've heard of worse loyalties, really. If I were Sam's friend, I'd definitely be better off in any world he had a hand in defining.

replies(1): >>bob_th+zL1
◧◩
52. supriy+X9[view] [source] [discussion] 2023-11-18 23:56:35
>>naremu+07
Often, leaders provide excellent strategic planning even if they are not completely well versed with the business domain, by way of outlining high level plans, communicating well, building a good team culture, and so on.

However, trying to distinguish the exact manners in which the leader does so is difficult[1], and therefore the tendency is to look at the results and leave someone there if the results are good enough.

[1] If you disagree with this statement, and you can easily identify what makes a good leader, you could make a literal fortune by writing books and coaching CEOs on how to not get fired within a few years.

◧◩◪◨⬒⬓
53. jonath+ca[view] [source] [discussion] 2023-11-18 23:57:13
>>femiag+G8
Well, that's a bit of a mischaracterization of the Manhattan Project, and of the views of everyone involved, now isn't it?

Write a thought. You're not clever enough for a drive-by gotcha.

replies(1): >>femiag+nd
◧◩◪◨
54. smegge+ha[view] [source] [discussion] 2023-11-18 23:57:21
>>arisAl+35
Even they have grown up in a world where Frankenstein's monster is the predominant cultural narrative for AI. Most movies, books, shows, games, etc. all say AI will turn on you (even though a reading of Mary Shelley's opus will tell you the creator was the monster, not the creature; that isn't the narrative the public's collective subconscious believes). I personally prefer Asimov's view of AI: it's a tool, and we don't make tools to hurt us; they will be aligned with us because they are designed such that their motivation will be to serve us.
replies(3): >>IanCal+oe >>Davidz+S11 >>arisAl+Ve1
◧◩◪◨⬒
55. resour+na[view] [source] [discussion] 2023-11-18 23:57:48
>>Meekro+R6
How can the thing be called "AGI" if it has no concept of truth? Is it like "60% accuracy is not an AGI, but 65% is"? The argument can be made that 90% accuracy is worse than 60% (people will become more confident and trust the results blindly).
◧◩
56. ta8645+oa[view] [source] [discussion] 2023-11-18 23:57:53
>>naremu+07
> The private sector lottery winners seem to be awarded kingdoms at an alarming rate.

Proven success is a pretty decent signal for competence. And while there is a lot of good fortune that goes into anyone's success, there are a lot of people who fail, given just as much good fortune as those who excelled. It's not just a random lottery where competence plays no role at all. So, who better to reward kingdoms to?

replies(1): >>naremu+Vb
◧◩◪
57. skwirl+Ba[view] [source] [discussion] 2023-11-18 23:59:09
>>potato+X7
The architect of the coup (Ilya) is strongly opposed to open-sourcing OpenAI's models due to safety concerns. This will not - and would not - be any different without Sam. The decision to close the models was made over 2 years before the release of ChatGPT and long before anyone really suspected this would be an insanely valuable company, so I do believe that safety actually was the initial reason for this change.

I'm not sure what you mean by your second paragraph.

replies(1): >>potato+cd
58. t_mann+Ha[view] [source] 2023-11-18 23:59:26
>>Jensso+(OP)
Exactly. I think it would actually be very exciting if OpenAI uses this moment to pivot back to the "Open"/non-profit mission, and Altman and Brockman concurrently start something new and try to build the Apple/Amazon of AI.
◧◩◪
59. kordle+Na[view] [source] [discussion] 2023-11-18 23:59:56
>>silenc+H3
Fuck safety. We should sprint toward proving AI can kill us before battery life improves, so we can figure out how we’re going to mitigate it when the asshats get hold of it. Kidding, not kidding.
60. toss1+Qa[view] [source] 2023-11-19 00:00:17
>>Jensso+(OP)
Whether or not Sam returns, serious damage has already been done, even if everyone also returns. MANY links of trust have been broken.

Even larger, this shows that the "leaders" of all this technology and money really are just making it up as they go along. Certainly supports the conclusion that, beyond meeting a somewhat high bar of education & experience, the primary reason they are in their chairs is luck and political gamesmanship. Many others meet the same high bar and could fill their roles, likely better, if the opportunity were given to them.

Sortition on corporate leadership may not be a bad thing.

That said, a consistent hand at the wheel is also good, and this kind of unnecessary chaos does no one any good.

◧◩◪◨
61. sensei+cb[view] [source] [discussion] 2023-11-19 00:01:44
>>threes+l7
Oh no, do not use that. That was servo-based. AI drones are what I think is the real "safety issue".

>>38199233

replies(1): >>threes+Gd
◧◩
62. branda+Bb[view] [source] [discussion] 2023-11-19 00:03:30
>>static+77
But sama and gdb were largely instrumental in that recruitment.

The whole open vs. closed AI thing... the fact is Pandora's box is open now, it's shown to have an outsized impact on society, and 2 of the 3 founders responsible for that may be starting a new company that won't be shackled by the same type of corporate governance.

SV will happily throw as much $$ as possible in their direction. The exodus from OpenAi has already begun, and other researchers who are of the mindset that this needs to be commercialized as fast as possible while having an eye on safety will happily come on board, esp. given how much they stand to gain financially.

◧◩
63. patric+Jb[view] [source] [discussion] 2023-11-19 00:03:55
>>naremu+07
He and Greg founded the company. They hired the early talent after a meeting that Sam initiated, then led the company to what it is today.

Compared to...

The OpenAI Non-Profit Board where 3 out of 4 members appear to have significant conflicts of interest or lack substantial experience in AI development, raising concerns about their suitability for making certain decisions.

replies(1): >>sudosy+ke
◧◩◪
64. naremu+Vb[view] [source] [discussion] 2023-11-19 00:04:56
>>ta8645+oa
>Proven success is a pretty decent signal for competence.

Interestingly, this is exactly what all financial advice tends to warn about rather than encourage: that previous performance does not indicate future performance.

I suppose if they had entered an established market and dominated it from the bootstraps, that'd build a lot of trust in me. But as others have pointed out, Sam went from dotcom fortune, to... vague question marks, to Y Combinator, to OpenAI. Not enough is clear to declare him Wozniak, or even Jobs, as many have been saying (despite investors calling him such).

Sam Altman is seemingly becoming the new post-fame Elon Musk: the type of person who could first afford the strategic safety net and PR to keep the act afloat.

replies(5): >>fallin+ze >>adastr+rh >>Michae+Ti >>mlyle+Dj >>juped+Qna
◧◩
65. startu+pc[view] [source] [discussion] 2023-11-19 00:07:02
>>static+77
A big part of it is a typical YC execution of a product/pump/hype/VC/scale cycle while ignoring every ethical rule.

If you ever stood in the hall of YC and listened to Zuck pumping the founders, you’ll understand.

I’d argue this is a useful thing to lift up a nonprofit on a path to AGI, but hardly a good way to govern a company that builds AGI/ASI technology in the long term.

◧◩◪◨⬒
66. resour+4d[view] [source] [discussion] 2023-11-19 00:10:59
>>spacem+N8
The internet is not called "AGI". It's the notion of AGI that brought "safety" to the forefront. AI folks became victims of their own hype. Renaming the term to something less provocative/controversial (ML?) could reduce expectations to the level of the internet - problem solved?
replies(1): >>autoex+4j
◧◩◪◨
67. potato+cd[view] [source] [discussion] 2023-11-19 00:11:26
>>skwirl+Ba
I think the closed-source-for-safety thing started as a ruse, as the closed source has been instrumental in keeping control and justifying a non-profit that is otherwise not working in the public interest. Splitting off this ruse non-profit would almost certainly end up unleashing the tech normally, like every other tech that Google, etc., have easily copied.
◧◩◪◨⬒⬓⬔
68. femiag+nd[view] [source] [discussion] 2023-11-19 00:12:19
>>jonath+ca
> Well, that's a bit of a mischaracterization of the Manhattan Project, and of the views of everyone involved, now isn't it?

Is it? The push for the bomb was an international arms race — America against Russia. The race for AGI is an international arms race — America against China. The Manhattan Project members knew that what they were doing would have terrible consequences for the world but decided to forge ahead. It’s hard to say concretely what the leaders in AGI believe right now.

Ideology (and fear, and greed) can cause well meaning people to do terrible things. It does all the time. If Anthropic, OpenAI, etc. believed they had access to world ending technology they wouldn’t stop, they’d keep going so that the U.S. could have a monopoly on it. And then we’d need a chastened figure ala Oppenheimer to right the balance again.

replies(1): >>qwytw+tD
◧◩◪
69. theGnu+Ed[view] [source] [discussion] 2023-11-19 00:14:09
>>two_in+a9
It is really hard to predict anything in this business.
◧◩◪◨
70. Meekro+Fd[view] [source] [discussion] 2023-11-19 00:14:13
>>xcv123+c9
No more than any other weapon. When we talk about "safety in gun design", that's about making sure the gun doesn't accidentally kill someone. "AI safety" seems to be a similar idea -- making sure it doesn't decide on its own to go on a murder spree.
replies(1): >>xcv123+Uv
◧◩◪◨⬒
71. threes+Gd[view] [source] [discussion] 2023-11-19 00:14:14
>>sensei+cb
All robots are servo based.

And there is every reason to believe this is an ML classification issue since similar robots are in widespread use.

◧◩◪
72. sudosy+ke[view] [source] [discussion] 2023-11-19 00:17:25
>>patric+Jb
Is Ilya not a co-founder as well? And I don't think Sam has substantial AI research experience either.
replies(3): >>adastr+ug >>mv4+bk >>zer0c0+Ou
◧◩◪◨⬒
73. IanCal+oe[view] [source] [discussion] 2023-11-19 00:17:36
>>smegge+ha
> we don't make tools to hurt us

We have many cases of creating things that harm us. We tore a hole in the ozone layer, filled things with lead and plastics, and are facing upheaval due to climate change.

> they will be aligned with us because they are designed such that their motivation will be to serve us

They won't hurt us; all we asked for is paperclips.

The obvious problem here is how well you get to constrain the output of an intelligence. This is not a simple problem.

replies(1): >>smegge+bA1
◧◩◪◨⬒
74. cthalu+re[view] [source] [discussion] 2023-11-19 00:17:59
>>Apocry+H8
This fear seems to have been largely played up for drama. My understanding of the situation is that at one point they went "Huh, we could potentially set off a chain reaction here. We should check whether the math adds up on that."

Then they went off and did the math and quickly found that this wouldn't happen, because the amount of energy in play was orders of magnitude lower than what would be needed for such a thing to occur, and went on about their day.

The only reason it's something we talk about is the nature of the outcome, not how serious the physicists were in their fear.

◧◩◪◨
75. smegge+we[view] [source] [discussion] 2023-11-19 00:18:36
>>stale2+z4
It seems silly to me, but then I always preferred Asimov's positronic robot stories to yet another retelling of the Golem of Prague.

The thing is, the cultural ur-narrative embedded in the collective subconscious doesn't seem to understand its own stories anymore. God and Adam, the Golem of Prague, Frankenstein's monster: none of them are really about AI. It's about our children making their own decisions that we disagree with, and us seeing it as the end of the world.

AI isn't a child, though. AI is a tool. It doesn't have its own motives, it doesn't have emotions, it doesn't have any core drives we don't give to it. Those things are products of us being evolved biological beings that need them to survive and pass on our genes and memes to the next generation. AI doesn't have to find shelter, food, water, air, and so on. We provide all the equivalents, when there are any, as part of building it and turning it on. It doesn't have a drive to mate and pass on its genes; reproducing is a matter of copying some files, no evolution involved - checksums, hashes, and error-correcting codes see to that. AI is simply the next step in the tech tree: just another tool, a powerful and useful one, but a tool, not a rampaging monster.

◧◩◪◨
76. fallin+ze[view] [source] [discussion] 2023-11-19 00:18:59
>>naremu+Vb
Ok, then what better signal do you propose should be used to predict success as a CEO?

The fact is that most people can't do what Sam Altman has done at all, so at the very least that past success puts him in the few percent of people who have a fighting chance.

◧◩
77. fallin+ef[view] [source] [discussion] 2023-11-19 00:22:19
>>static+77
Who hired those people? The answer to that is either the founders or some chain of people hired by the founders. And hiring is hard. If you're good at hiring the right people and absolutely nothing else on earth, you will be better than 90% of CEOs.
◧◩◪◨
78. s1arti+dg[view] [source] [discussion] 2023-11-19 00:29:16
>>threes+l7
And someone lost their fingers in the garbage disposal. A robot packer is not AI any more than my toilet or a landslide.
◧◩◪◨
79. adastr+ug[view] [source] [discussion] 2023-11-19 00:30:46
>>sudosy+ke
No, he was hired early, but he wasn't there from the beginning. Elon recruited him after the public announcement of funding.
◧◩◪◨
80. s1arti+Hg[view] [source] [discussion] 2023-11-19 00:32:05
>>resour+O5
Truth has very little to do with the safety questions raised by AI.

Factually accurate results also = unsafety. Knowledge = unsafety, free humans = unsafety.

replies(1): >>resour+di
◧◩◪◨⬒
81. cthalu+3h[view] [source] [discussion] 2023-11-19 00:33:48
>>sho_hn+f9
Certainly. I think at current "AI" just enables us to continue making the same bad decisions we were already making, though, albeit at a faster pace. It's existential in that those same bad decisions might lead to existential threats, e.g. climate change, continued inter-nation aggression and warfare, etc., I suppose, but I don't think the majority of the AI safety crowd is worried about the LLMs of today bringing about the end of the world, and talking about ChatGPT in that context is, to me, a misrepresentation of what they are actually most worried about.
◧◩◪◨
82. adastr+rh[view] [source] [discussion] 2023-11-19 00:36:12
>>naremu+Vb
Stock pickers are not the same as CEOs.
◧◩◪
83. smegge+Qh[view] [source] [discussion] 2023-11-19 00:38:55
>>apppli+u1
You mean besides the business experience of having already gone down this path, so he can speedrun while everyone else is still trying to find the path?

Easy: his contacts list. He has everyone anyone could want in his contacts list (politicians, tech executives, financial backers) and a preexisting positive relationship with most of them. When alternative would-be entrepreneurs need to make a deal with a major company like Microsoft or Google, it will be upper middle management and lawyers; a committee or three will weigh in on it, present it to their bosses, etc. With Sam, he calls up the CEO, has a few drinks at the golf course, and they decide to work with him and make it happen.

◧◩◪◨
84. adastr+Vh[view] [source] [discussion] 2023-11-19 00:39:10
>>gkober+U4
History is littered with the mistakes of deluded people with more power than ought to have been granted to them.
replies(1): >>sainez+lG
◧◩◪◨⬒
85. resour+di[view] [source] [discussion] 2023-11-19 00:42:03
>>s1arti+Hg
But they (AI folks) keep talking about "safety" all the time. What is their definition of safety then? What are they trying to achieve?
replies(1): >>s1arti+Nn
◧◩◪◨
86. adastr+ki[view] [source] [discussion] 2023-11-19 00:42:56
>>arisAl+35
Not all, or arguably even most, AI researchers subscribe to The Big Scary Idea.
replies(1): >>arisAl+6f1
◧◩◪◨
87. wruza+Di[view] [source] [discussion] 2023-11-19 00:44:42
>>threes+w6
This babysitting of the world gets annoying, tbh. As if everyone would lose their mind and start acting illegally just because a chatbot said so. If that is "unsafe", there's something fundamentally wrong with humanity (which isn't surprising given the history of our species). AI is just a source of information; it doesn't cancel out upbringing and education in human values and methods of dealing with information.
◧◩◪◨
88. Michae+Ti[view] [source] [discussion] 2023-11-19 00:47:08
>>naremu+Vb
> Interestingly, this is exactly what all financial advice tends to warn about rather than encourage: that previous performance does not indicate future performance.

It's important to put those disclaimers in context, though. The rules that mandated them came out before the era of index funds, and those disclaimers are specifically talking about fund managers. It's true that past performance at picking stocks does not indicate future performance at picking stocks. Outside of that context, past performance is almost always a strong indicator of future performance.

◧◩◪◨⬒⬓
89. autoex+4j[view] [source] [discussion] 2023-11-19 00:47:53
>>resour+4d
> The internet is not called "AGI"

Neither is anything else in existence. I'm glad that philosophers are worrying about what AGI might one day mean for us, but it has nothing to do with anything happening in the world today.

replies(1): >>resour+sm
◧◩◪◨
90. curtis+xj[view] [source] [discussion] 2023-11-19 00:50:26
>>xcv123+c9
That a military AI helps to kill enemies doesn't look particularly "unsafe" to me, at least not more "unsafe" than a fighter jet or an aircraft carrier is; they're all complex systems accurately designed to kill enemies in a controlled way; killing people is the whole point of their existence, not an unwanted side effect. If, on the other hand, a military AI starts autonomously killing civilians, or fighting its "handlers", then I would call it "unsafe", but nobody has ever been able to demonstrate an "unsafe" AI of any kind according to this definition (so far).
replies(1): >>chasd0+eq
◧◩
91. deevia+yj[view] [source] [discussion] 2023-11-19 00:50:32
>>static+77
Because Meta or Google or Apple or Facebook don't recruit the best in the field?

All of whom are a year-plus behind OpenAI.

◧◩◪◨
92. mlyle+Dj[view] [source] [discussion] 2023-11-19 00:50:42
>>naremu+Vb
One key reason past performance cannot be used to predict future returns is that market expectations tend to price in expected future returns. Also, nothing competitive is expected to generate economic profit forever; in the long run things even out, and firms and stock pickers usually end up with normal profit.

But that doesn't mean you can't get some useful ideas about future performance from a person's past results compared to other humans. There is no expectations-pricing effect in play here.

Otherwise, time for me to go beat Steph Curry in a shooting contest.

Of course there’s other reasons past performance is imperfect as a predictor. Fundamentals can change, or the past performance could have been luck. Maybe Steph’s luck will run out, or maybe this is the day he will get much worse at basketball, and I will easily win.

◧◩◪
93. deevia+Uj[view] [source] [discussion] 2023-11-19 00:52:06
>>spacem+u8
AGI is not ASI.
◧◩◪◨
94. mv4+bk[view] [source] [discussion] 2023-11-19 00:53:44
>>sudosy+ke
Looks like he was hired.

https://www.nytimes.com/2018/04/19/technology/artificial-int...

◧◩
95. mv4+Fk[view] [source] [discussion] 2023-11-19 00:56:42
>>static+77
No, they recruited top talent by providing top pay.

From 2016: https://www.nytimes.com/2018/04/19/technology/artificial-int...

To 2023: https://www.businessinsider.com/openai-recruiters-luring-goo...

◧◩◪◨
96. Amezar+nl[view] [source] [discussion] 2023-11-19 01:01:38
>>threes+w6
Yes, in other words, AI is only safe when it repeats only the ideology of AI safetyists as gospel and can be used only to reinforce the power of the status quo.
replies(1): >>chasd0+3s
◧◩◪
97. MVisse+hm[view] [source] [discussion] 2023-11-19 01:08:56
>>Meekro+44
You should read the safety paper for GPT-4. It can easily manipulate humans to attain its goals.
replies(1): >>mattkr+GH
◧◩◪◨⬒⬓⬔
98. resour+sm[view] [source] [discussion] 2023-11-19 01:09:45
>>autoex+4j
I fully agree with that. But if you read this thread or any other recent HN thread, you will see "AGI... AGI... AGI" as if it's a real thing. The whole openai debacle with firing/rehiring sama revolves around (non-existent) "AGI" and its imaginary safety/unsafety, and if you dare to question this whole narrative, you will get beaten up.
◧◩◪◨⬒⬓
99. s1arti+Nn[view] [source] [discussion] 2023-11-19 01:20:59
>>resour+di
I don't think it has a fixed definition. It is an ambiguous idea that AI will not do or lead to bad things.
◧◩◪
100. chasd0+up[view] [source] [discussion] 2023-11-19 01:35:06
>>Meekro+44
The "safety" they're talking about isn't about actual danger but more like responses that don't comply with the political groupthink du jour.
◧◩◪◨⬒
101. chasd0+eq[view] [source] [discussion] 2023-11-19 01:41:08
>>curtis+xj
> If, on the other hand, a military AI starts autonomously killing civilians, or fighting its "handlers", then I would call it "unsafe"

So is “unsafe” just another word for buggy then?

replies(1): >>curtis+Dj1
◧◩◪
102. chasd0+Zq[view] [source] [discussion] 2023-11-19 01:45:52
>>btown+A3
> must be evaluated extensively for safety before being released to the public

JFC someone somewhere define “safety”! Like wtf does it mean in the context of a large language model?

◧◩◪◨⬒
103. chasd0+3s[view] [source] [discussion] 2023-11-19 01:53:40
>>Amezar+nl
Yeah that’s what I thought. This undefined ambiguous use of the word “safety” does real damage to the concept and things that are indeed dangerous and need to be made more safe.
◧◩◪◨
104. zer0c0+Ou[view] [source] [discussion] 2023-11-19 02:09:37
>>sudosy+ke
Elon brought him in, which is quite the irony. Funny, even. It is also the reason Elon and Larry Page don't get along anymore.

Ilya is certainly world-class in his field, and it may be worth listening to what he has to say.

◧◩◪◨⬒
105. xcv123+Uv[view] [source] [discussion] 2023-11-19 02:16:00
>>Meekro+Fd
Strictly obeying their overlords. Ensuring that we don't end up with Skynet and Terminators.
◧◩
106. zaptre+4w[view] [source] [discussion] 2023-11-19 02:16:37
>>huevos+z1
I don't think anyone has reported an end to scaling laws yet.
◧◩◪
107. macOSC+bw[view] [source] [discussion] 2023-11-19 02:16:53
>>Meekro+44
An Uber self-driving car killed a person.
◧◩◪◨⬒⬓⬔⧯
108. qwytw+tD[view] [source] [discussion] 2023-11-19 03:00:48
>>femiag+nd
> The push for the bomb was an international arms race — America against Russia

Was it? The US (and initially the UK) didn't really face any real competition at all until the war was already over and they had the bomb. The Soviets then just stole American designs and iterated on top of them.

replies(1): >>femiag+hI
◧◩◪
109. sainez+JF[view] [source] [discussion] 2023-11-19 03:16:44
>>Jensso+J6
There is no world in which Microsoft leaves their GPT4 customers dead in the water.
◧◩◪◨⬒
110. sainez+lG[view] [source] [discussion] 2023-11-19 03:22:27
>>adastr+Vh
And with well-intentioned people whose warnings of catastrophe went unheeded.
◧◩◪◨
111. mattkr+GH[view] [source] [discussion] 2023-11-19 03:31:55
>>MVisse+hm
Does it have goals beyond “find a likely series of tokens that extends the input?”

Is the idea that it will hack into NORAD and launch a first strike to increase the log-likelihood of "WWIII was begun by…"?

replies(1): >>Davidz+B21
◧◩◪◨⬒⬓⬔⧯▣
112. femiag+hI[view] [source] [discussion] 2023-11-19 03:36:22
>>qwytw+tD
You know that now, with the benefit of history. At the time the fear of someone else developing the bomb first was real, and the Soviet Union knew about the Manhattan project: https://www.atomicarchive.com/history/cold-war/page-9.html.
replies(1): >>qwytw+ty1
◧◩◪◨⬒
113. Gigabl+iT[view] [source] [discussion] 2023-11-19 04:57:16
>>laidof+O6
This applies equally to their detractors.
◧◩
114. meowti+QZ[view] [source] [discussion] 2023-11-19 06:00:45
>>naremu+07
The case for Sam is the success of OpenAI while Sam was CEO. If the status quo is wild success, then keeping the status quo is a good thing.
replies(1): >>Davidz+u01
◧◩◪
115. Davidz+u01[view] [source] [discussion] 2023-11-19 06:07:24
>>meowti+QZ
The company's goal is not your definition of success
◧◩◪
116. int_19+011[view] [source] [discussion] 2023-11-19 06:12:46
>>tempes+O8
How many employees have actually quit?

And how many of them work on the models?

◧◩◪◨⬒
117. Davidz+S11[view] [source] [discussion] 2023-11-19 06:23:09
>>smegge+ha
Can a superintelligence ever be merely a tool?
replies(1): >>smegge+Ax1
◧◩◪◨⬒
118. Davidz+921[view] [source] [discussion] 2023-11-19 06:26:01
>>kspace+59
This sort of prediction is by its nature speculative. The argument is not--or should not be--certain doom, but rather that the uncertainty on outcomes is so large that even the extreme tails have nontrivial weight.
◧◩◪◨⬒
119. Davidz+B21[view] [source] [discussion] 2023-11-19 06:31:30
>>mattkr+GH
I think this is misguided. There can be goals internal to the system which do not arise from the goals of the external system. For example, when simulating a chess game, it behaves identically to having a goal of winning the game. This is not a written, expressed goal but an emergent one, just as the goals of a human are emergent from a biological system whose cells have very different goals.
◧◩◪◨⬒
120. arisAl+Je1[view] [source] [discussion] 2023-11-19 08:37:51
>>laidof+O6
So, in a vacuum, if top experts are telling you X is Y, and you, without being a top expert yourself, had to choose, you would choose that they are high on their own self-importance rather than that you misunderstood something?
replies(1): >>laidof+UNt
◧◩◪
121. camden+Le1[view] [source] [discussion] 2023-11-19 08:38:15
>>spacem+u8
They’ve redefined “safe” in this context to mean “conformant to fashionable academic dogma”
◧◩◪◨⬒
122. arisAl+Ve1[view] [source] [discussion] 2023-11-19 08:39:23
>>smegge+ha
You probably never read I, Robot by Asimov?
replies(1): >>smegge+bv1
◧◩◪◨⬒
123. arisAl+6f1[view] [source] [discussion] 2023-11-19 08:40:39
>>adastr+ki
Actually, the majority of the very top currently do. That is Ilya, Hassabis, Anthropic, Bengio, Hinton. Three top labs? Three with the same views.
◧◩◪◨⬒⬓
124. curtis+Dj1[view] [source] [discussion] 2023-11-19 09:24:41
>>chasd0+eq
Buggy in a way that harms unintended targets, yes.
◧◩◪◨⬒⬓
125. smegge+bv1[view] [source] [discussion] 2023-11-19 11:11:01
>>arisAl+Ve1
On the contrary. I can safely say I have read literally dozens of his books, both fiction and nonfiction, and have also read countless short stories and many of his essays. He is one of my all-time favorite writers, actually.
replies(1): >>arisAl+Nf2
◧◩◪◨⬒⬓
126. smegge+Ax1[view] [source] [discussion] 2023-11-19 11:38:18
>>Davidz+S11
If it has no motivation and drives of its own, yeah, why not? AI won't have a "psychology" anything like our own: it won't feel pain, it won't feel emotions, it won't feel biological imperatives. All it will have is its programming/training to do what it's been told. Neural nets that don't produce the right outcomes will be trained and reweighted until they do.
◧◩◪◨⬒⬓⬔⧯▣▦
127. qwytw+ty1[view] [source] [discussion] 2023-11-19 11:46:03
>>femiag+hI
Isn't this mainly about what happened after the war and the subsequent development of the hydrogen bomb? Did anyone seriously believe during WW2 that the Nazis/Soviets could be the first to develop a nuclear weapon (I don't really know, to be fair)?
replies(1): >>femiag+zWa
◧◩◪◨⬒⬓
128. smegge+bA1[view] [source] [discussion] 2023-11-19 12:01:42
>>IanCal+oe
Honestly, we already have paperclip maximizers; they are called corporations. Instead of paperclips, they are maximizing for short-term shareholder value.
◧◩◪◨
129. bob_th+zL1[view] [source] [discussion] 2023-11-19 13:41:34
>>naremu+N9
That is something that Sam Altman did with his own money. And it's fair that he's criticized for his choices, but that has nothing to do with his role at OpenAI.
◧◩◪◨
130. throwa+r22[view] [source] [discussion] 2023-11-19 15:36:09
>>threes+w6
That's not really a great encapsulation of the AI safety that those who think AGI poses a threat to humanity are referring to.

The bigger concern is something like the Paperclip Maximizer. Alignment is about how to ensure that a superintelligence has the right goals.

◧◩◪◨⬒⬓⬔
131. arisAl+Nf2[view] [source] [discussion] 2023-11-19 16:46:09
>>smegge+bv1
And what you got from the I, Robot stories is that there is zero probability of danger? Fascinating.
replies(1): >>smegge+dz2
◧◩◪◨⬒⬓⬔⧯
132. smegge+dz2[view] [source] [discussion] 2023-11-19 18:07:13
>>arisAl+Nf2
None of the stories in I, Robot that I can remember feature the robots intentionally harming humans/humanity; most of them are essentially stories of a few robot technicians trying to debug unexpected behaviors resulting from conflicting directives given to the robots. So yeah. You wouldn't by chance be thinking of that travesty of a movie that shares only a name in common with his book and seemed to completely misrepresent his take on AI?

Though to be honest, in my original post I was thinking more of Asimov's nonfiction essays on the subject. I recommend finding a copy of "Robot Visions" if you can. It's a mixed work of fictional short stories and nonfiction essays, including several on the subject of the Three Laws and on the Frankenstein Complex.

replies(1): >>arisAl+KS4
◧◩◪◨⬒⬓⬔⧯▣
133. arisAl+KS4[view] [source] [discussion] 2023-11-20 08:19:08
>>smegge+dz2
Again "they will be aligned with us because they designed such that their motivation will be to serve us." If you got this outcome from reading I robot either you should reread them because obviously it was decades ago or you build your own safe reality to match your arguments. Usually it's the latter.
replies(1): >>smegge+LV7
◧◩◪◨⬒⬓⬔⧯▣▦
134. smegge+LV7[view] [source] [discussion] 2023-11-20 23:06:33
>>arisAl+KS4
And yet again, I didn't get it from I, Robot; I got it from Asimov's NON-fiction writing, which I referenced in my previous post. Even if I had gotten it from his fictional works, which again I didn't, the majority of his robot-centric novels (The Caves of Steel, The Naked Sun, The Robots of Dawn, Robots and Empire, Prelude to Foundation, Forward the Foundation, the second Foundation trilogy, etc.) all feature benevolent AIs aligned with humanity.
◧◩◪◨
135. juped+Qna[view] [source] [discussion] 2023-11-21 16:18:57
>>naremu+Vb
No dotcom fortune, just a failed startup that lost its investors' money, assuming it ever had an expense in its lifetime. OpenAI might in fact be the first time Altman has been in the vicinity of an object-level success; it depends on how you interpret his tenure at YC.
◧◩◪◨⬒⬓⬔⧯▣▦▧
136. femiag+zWa[view] [source] [discussion] 2023-11-21 18:26:57
>>qwytw+ty1
A lot of it happened after the war, but the Nazis had their own nuclear program, which was highly infiltrated and against which progress was tracked. Considering how late Teller's mechanism for detonation was developed, the race against time was real.
◧◩◪◨⬒⬓
137. laidof+UNt[view] [source] [discussion] 2023-11-27 23:54:16
>>arisAl+Je1
Correct, because experts in one domain are not immune to fallacious thinking in an adjacent one. Part of being an expert is communicating to the wider public, and if you sound as grandiose to the layman as some of the AI doomers do, you've failed already.