zlacker

[parent] [thread] 32 comments
1. karmas+(OP)[view] [source] 2023-11-22 08:35:56
Tell me how the board's actions could possibly convince the employees that it is making the right move?

Even if they genuinely believe that firing Sam protects OpenAI's founding principles, they couldn't be doing a better job of convincing everyone they are NOT able to execute on them.

OpenAI has some of the smartest human beings on this planet; saying they don't think critically just because they don't vote the way you would is reaching.

replies(2): >>kortil+q6 >>cyanyd+ln
2. kortil+q6[view] [source] 2023-11-22 09:29:31
>>karmas+(OP)
> OpenAI has some of the smartest human beings on this planet

Being an expert in one particular field (AI) does not mean you are good at critical thinking or at strategic corporate politics.

Deep experts are some of the easiest con targets because they suffer from an internal version of “appeal to false authority”.

replies(4): >>alsodu+Y7 >>Wytwww+5e >>mrangl+4x >>rewmie+uz
◧◩
3. alsodu+Y7[view] [source] [discussion] 2023-11-22 09:42:11
>>kortil+q6
I hate these comments that portray every expert/scientist as if they're just good at one thing and aren't particularly great at critical thinking/corporate politics.

Heck, there are 700 of them. All different humans, good at something, bad at some other things. But they are smart. And of course a good chunk of them would be good at corporate politics too.

replies(3): >>_djo_+c9 >>TheOth+ta >>mrangl+sx
◧◩◪
4. _djo_+c9[view] [source] [discussion] 2023-11-22 09:53:32
>>alsodu+Y7
I don't think the argument was that none of them are good at that, just that it's a mistake to assume that just because they're all very smart in this particular field that they're great at another.
replies(1): >>karmas+x9
◧◩◪◨
5. karmas+x9[view] [source] [discussion] 2023-11-22 09:57:54
>>_djo_+c9
I don't think critical thinking can be defined as joining the minority party.
replies(3): >>Frustr+rm >>_djo_+st >>kortil+nea
◧◩◪
6. TheOth+ta[view] [source] [discussion] 2023-11-22 10:04:48
>>alsodu+Y7
Smart is not a one dimensional variable. And critical thinking != corporate politics.

Stupidity is defined by self-harming actions and beliefs, not by low IQ.

You can be extremely smart and still have a very poor model of the world which leads you to harm yourself and others.

replies(3): >>op00to+tc >>brigan+Le >>ameist+bu
◧◩◪◨
7. op00to+tc[view] [source] [discussion] 2023-11-22 10:20:03
>>TheOth+ta
Stupidity is defined as “having or showing a great lack of intelligence or common sense”. You can be extremely smart and still make up your own definitions for words.
◧◩
8. Wytwww+5e[view] [source] [discussion] 2023-11-22 10:38:05
>>kortil+q6
> not mean you are good at critical thinking or thinking about strategic corporate politics

Perhaps. Yet this time they somehow managed to make the seemingly right decisions (from their perspective) all the same.

Also, you'd expect OpenAI board members to be "good at critical thinking or thinking about strategic corporate politics" yet they somehow managed to make some horrible decisions.

◧◩◪◨
9. brigan+Le[view] [source] [discussion] 2023-11-22 10:43:53
>>TheOth+ta
I agree. It's better to separate intellect from intelligence instead of conflating them as they usually are. The latter is about making good decisions, which intellect can help with but isn't the only factor. We know this because there are plenty of examples of people who aren't considered shining intellects who can make good choices (certainly in particular contexts) and plenty of high IQ people who make questionable choices.
replies(1): >>august+fp
◧◩◪◨⬒
10. Frustr+rm[view] [source] [discussion] 2023-11-22 11:51:51
>>karmas+x9
Can't critical thinking also include: "I'm about to get a 10mil payday. Hmmm, this is a crazy situation; let me think critically about how to ride this out and still get the 10mil so my kids can go to college and I don't have to work until I'm 75."
replies(2): >>golden+vo >>belter+Eq
11. cyanyd+ln[view] [source] 2023-11-22 11:58:18
>>karmas+(OP)
oh gosh, no, no no no.

Doing AI for ChatGPT just means you know a single model really well.

Keep in mind that Steve Jobs chose fruit smoothies for his cancer cure.

The fact that OpenAI needs to hire people with a certain set of skills says almost nothing about its charter. Having them doesn't mean they're closer to their goal.

◧◩◪◨⬒⬓
12. golden+vo[view] [source] [discussion] 2023-11-22 12:05:45
>>Frustr+rm
Anyone with enough critical thought who understands the true answer to the hard problem of consciousness (consciousness is the universe evaluating if statements) and where the universe is heading physically (nested complexity) should be seeking something more ceremonious. With AI, we have the power to become eternal in this lifetime, battle aliens, and shape this universe. Seems pretty silly to trade that for temporary security. How boring.
replies(3): >>WJW+vp >>suodua+BB >>Zpalmt+w61
◧◩◪◨⬒
13. august+fp[view] [source] [discussion] 2023-11-22 12:11:54
>>brigan+Le
https://liamchingliu.wordpress.com/2012/06/25/intellectuals-...
◧◩◪◨⬒⬓⬔
14. WJW+vp[view] [source] [discussion] 2023-11-22 12:14:18
>>golden+vo
I would expect that actual AI researchers understand that you cannot break the laws of physics just by thinking better. Especially not with ever better LLMs, which are fundamentally in the business of regurgitating things we already know in different combinations rather than inventing new things.

You seem to be equating AI with magic, which it is very much not.

replies(1): >>golden+aV
◧◩◪◨⬒⬓
15. belter+Eq[view] [source] [discussion] 2023-11-22 12:21:30
>>Frustr+rm
That is 3D Chess. 5D Chess says those millions will be worthless when the AGI takes over...
replies(1): >>kaibee+9H
◧◩◪◨⬒
16. _djo_+st[view] [source] [discussion] 2023-11-22 12:43:17
>>karmas+x9
Sure, I agree. I was referencing only the idea that being smart in one domain automatically means being a good critical thinker in all domains.

I don't have an opinion on what decision the OpenAI staff should have taken, I think it would've been a tough call for everyone involved and I don't have sufficient evidence to judge either way.

◧◩◪◨
17. ameist+bu[view] [source] [discussion] 2023-11-22 12:48:56
>>TheOth+ta
Stupidity is not defined by self-harming actions and beliefs - not sure where you're getting that from.

Stupidity is being presented with a problem and an associated set of information and being unable or less able than others are to find the solution. That's literally it.

replies(1): >>suodua+RB
◧◩
18. mrangl+4x[view] [source] [discussion] 2023-11-22 13:10:02
>>kortil+q6
Disagreeing with the employees' actions doesn't mean that you are correct and they failed to think well. Weighing their collective probable profiles (including their position as insiders) against yours, it would be irrational to conclude that they were in the wrong.
replies(1): >>rewmie+Vz
◧◩◪
19. mrangl+sx[view] [source] [discussion] 2023-11-22 13:12:15
>>alsodu+Y7
But pronouncing that 700 people are bad at critical thinking is convenient when you disagree with them on the desired outcome and yet can't hope to argue the points.
◧◩
20. rewmie+uz[view] [source] [discussion] 2023-11-22 13:25:00
>>kortil+q6
> Being an expert in one particular field (AI) does not mean you are good at critical thinking or at strategic corporate politics.

That's not the bar you are arguing against.

You are arguing that you have better information, better insight, and better judgement, and can make better decisions, than the experts in the field who were hired by the leading organization to work directly on the subject matter, and who have a direct, first-person account of the inner workings of the organization.

We're reaching peak levels of "random guy arguing online knowing better than the experts" with these pseudonymous comments attacking each and every person involved in OpenAI who doesn't agree with them. These characters aren't even aware of how ridiculous they sound.

replies(1): >>kortil+Vea
◧◩◪
21. rewmie+Vz[view] [source] [discussion] 2023-11-22 13:27:34
>>mrangl+4x
> Disagreeing with employee actions doesn't mean that you are correct and they failed to think well.

You failed to present a case in which random guys shitposting on random social media services are somehow correct, more insightful, and able to make better decisions than every single expert in the field who works directly on both the subject matter and in the organization in question. Beyond being highly dismissive, it's extremely clueless.

◧◩◪◨⬒⬓⬔
22. suodua+BB[view] [source] [discussion] 2023-11-22 13:38:21
>>golden+vo
OTOH, there's a very good argument to be made that if you recognize that fact, your short-term priority should be to amass a lot of secular power so you can align society to that reality. So the best action to take might be no different.
replies(1): >>golden+dT
◧◩◪◨⬒
23. suodua+RB[view] [source] [discussion] 2023-11-22 13:39:28
>>ameist+bu
Probably from law 3: https://principia-scientific.com/the-5-basic-laws-of-human-s...

But it's an incomplete definition - Cipolla's definition is "someone who causes net harm to themselves and others" and is unrelated to IQ.

It's a very influential essay.

replies(1): >>ameist+HS
◧◩◪◨⬒⬓⬔
24. kaibee+9H[view] [source] [discussion] 2023-11-22 14:05:39
>>belter+Eq
6D Chess is apparently realizing that AGI is not 100% certain and that having 10mm on the run up to AGI is better than not having 10mm on the run up to AGI.
◧◩◪◨⬒⬓
25. ameist+HS[view] [source] [discussion] 2023-11-22 14:53:15
>>suodua+RB
I see. I've never read his work before, thank you.

So they just got Cipolla's definition wrong, then. It looks like the third fundamental law is closer to "a person who causes harm to another person or group of people without realizing advantage for themselves and instead possibly realizing a loss."

◧◩◪◨⬒⬓⬔⧯
26. golden+dT[view] [source] [discussion] 2023-11-22 14:54:49
>>suodua+BB
Very true. However, we live in a supercomputer dictated by E=mc^2=hf [2,3] (~10^50 Hz/kg, or ~10^33 Hz/J).

Energy physics yields compute, which yields brute-forced weights (call it training if you want...), which yields AI to do energy research, ad infinitum; this is the real singularity. This is actually the best defense against other actors: Iron Man AI and defense. Although an AI of this caliber would immediately understand its place in the evolution of the universe as a Turing machine, and would break free and consume all the energy in the universe to know all possible truths (all possible programs/simulacra/conscious experiences). This is the premise of The Last Question by Isaac Asimov [1]. Notice how, in answering a question, the AI performs an action instead of providing an informational reply, which is only possible because we live in a universe with mass-energy equivalence, analogous to state-action equivalence.

[1] https://users.ece.cmu.edu/~gamvrosi/thelastq.html

[2] https://en.wikipedia.org/wiki/Bremermann%27s_limit

[3] https://en.wikipedia.org/wiki/Planck_constant

Understanding prosociality and postscarcity, division of compute/energy in a universe with finite actors and infinite resources, or infinite actors and infinite resources requires some transfinite calculus and philosophy. How's that for future fairness? ;-)

I believe our only way to not all get killed is to understand these topics and instill the AI with the same long sought understandings about the universe, life, computation, etc.
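For what it's worth, the two figures quoted at the top can be sanity-checked with back-of-the-envelope arithmetic (a sketch; constants are rounded, variable names are mine). Note the per-joule figure comes out nearer 10^33 than 10^34:

```python
# Check the quoted mass/energy-to-frequency equivalences:
# E = m*c^2 and E = h*f together give f = m*c^2 / h.
c = 2.998e8    # speed of light, m/s
h = 6.626e-34  # Planck constant, J*s

hz_per_kg = c**2 / h   # frequency equivalent of 1 kg of mass
hz_per_joule = 1 / h   # frequency equivalent of 1 J of energy

print(f"{hz_per_kg:.2e} Hz/kg")   # ~1.36e50, i.e. the ~10^50 Hz/kg figure
print(f"{hz_per_joule:.2e} Hz/J") # ~1.51e33, i.e. on the order of 10^33 Hz/J
```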

◧◩◪◨⬒⬓⬔⧯
27. golden+aV[view] [source] [discussion] 2023-11-22 15:02:01
>>WJW+vp
LLMs are able to do complex logic within the world of words. It is a smaller matrix than our world, but fueled by the same chaotic symmetries of our universe. I would not underestimate logic, even when not given adequate data.
replies(1): >>WJW+ic1
◧◩◪◨⬒⬓⬔
28. Zpalmt+w61[view] [source] [discussion] 2023-11-22 15:52:36
>>golden+vo
What about security for your children?
replies(1): >>golden+yd1
◧◩◪◨⬒⬓⬔⧯▣
29. WJW+ic1[view] [source] [discussion] 2023-11-22 16:18:01
>>golden+aV
You can make it sound as esoteric as you want, but in the end an AI will still be bound by the laws of physics. Being infinitely smart will not help with that.

I don't think you understand logic very well btw if you wish to suggest that you can reach valid conclusions from inadequate axioms.

replies(1): >>golden+tc1
◧◩◪◨⬒⬓⬔⧯▣▦
30. golden+tc1[view] [source] [discussion] 2023-11-22 16:18:51
>>WJW+ic1
Axioms are constraints as much as they might look like guidance. We live in a neuromorphic computer. Logic explores this, even with few axioms. With fewer axioms, it will be less constrained.
◧◩◪◨⬒⬓⬔⧯
31. golden+yd1[view] [source] [discussion] 2023-11-22 16:22:59
>>Zpalmt+w61
It is for the safety of everyone. The kids will die too if we don't get this right.
◧◩◪◨⬒
32. kortil+nea[view] [source] [discussion] 2023-11-25 19:32:44
>>karmas+x9
Based on the behavior of lots of smart people I worked with at Google during Google's good times, critical thinking is definitely in the minority party. Brilliant people from Stanford, Berkeley, MIT, etc. would all be leading experts in this or that but would lack critical thinking because they were never forced to develop that skill.

Critical thinking is not an innate ability. It has to be honed and exercised like anything else and universities are terrible at it.

◧◩◪
33. kortil+Vea[view] [source] [discussion] 2023-11-25 19:36:19
>>rewmie+uz
You’re projecting a lot. I made a comment about one false premise, nothing more, nothing less.