zlacker

[return to "We have reached an agreement in principle for Sam to return to OpenAI as CEO"]
1. Satam+0a[view] [source] 2023-11-22 07:05:40
>>staran+(OP)
Disappointing outcome. The process has conclusively confirmed that OpenAI is in fact not open and that it is effectively controlled by Microsoft. Furthermore, the overwhelming groupthink shows there's clearly little critical thinking amongst OpenAI's employees either.

It might not seem like it right now, but I think the real disruption is just about to begin. OpenAI doesn't have it in its DNA to win; they're too short-sighted and reactive. Big tech will have incredible distribution power, but a real disruptor must be brewing somewhere unnoticed, for now.

◧◩
2. polite+Yj[view] [source] 2023-11-22 08:19:38
>>Satam+0a
> there's clearly little critical thinking amongst OpenAI's employees either.

That they reached a different conclusion than the outcome you wished for does not indicate a lack of critical thinking skills. They have a different set of information than you do, and reached a different conclusion.

◧◩◪
3. dimask+vk[view] [source] 2023-11-22 08:24:11
>>polite+Yj
It is not about a different set of information, but about different stakes/interests. They are acting first and foremost as investors rather than as employees on this.
◧◩◪◨
4. karmas+1m[view] [source] 2023-11-22 08:35:56
>>dimask+vk
Tell me how the board's actions could convince the employees they are making the right move?

Even if they genuinely believed firing Sam was necessary to preserve OpenAI's founding principles, they couldn't have done a better job of convincing everyone that they are NOT able to execute on it.

OpenAI has some of the smartest human beings on this planet; saying they don't think critically just because they didn't vote the way you agree with is reaching.

◧◩◪◨⬒
5. kortil+rs[view] [source] 2023-11-22 09:29:31
>>karmas+1m
> OpenAI has some of the smartest human beings on this planet

Being an expert in one particular field (AI) does not mean you are good at critical thinking or at strategic corporate politics.

Deep experts are some of the easiest con targets because they suffer from an internal version of "appealing to false authority".

◧◩◪◨⬒⬓
6. alsodu+Zt[view] [source] 2023-11-22 09:42:11
>>kortil+rs
I hate these comments that portray every expert/scientist as if they're only good at one thing and aren't particularly great at critical thinking or corporate politics.

Heck, there are 700 of them. All different humans, good at some things, bad at others. But they are smart. And of course a good chunk of them would be good at corporate politics too.

◧◩◪◨⬒⬓⬔
7. _djo_+dv[view] [source] 2023-11-22 09:53:32
>>alsodu+Zt
I don't think the argument was that none of them are good at that, just that it's a mistake to assume that because they're all very smart in this particular field, they're great at another.
◧◩◪◨⬒⬓⬔⧯
8. karmas+yv[view] [source] 2023-11-22 09:57:54
>>_djo_+dv
I don't think critical thinking can be defined as joining the minority party.
◧◩◪◨⬒⬓⬔⧯▣
9. Frustr+sI[view] [source] 2023-11-22 11:51:51
>>karmas+yv
Can't critical thinking also include: "I'm about to get a 10mil payday, hmm, this is a crazy situation, let me think critically about how to ride this out and still get the 10mil so my kids can go to college and I don't have to work until I'm 75"?
◧◩◪◨⬒⬓⬔⧯▣▦
10. golden+wK[view] [source] 2023-11-22 12:05:45
>>Frustr+sI
Anyone with enough critical thought who understands the true answer to the hard problem of consciousness (consciousness is the universe evaluating if statements) and where the universe is heading physically (nested complexity) should be seeking something more ceremonious. With AI, we have the power to become eternal in this lifetime, battle aliens, and shape this universe. Seems pretty silly to trade that for temporary security. How boring.
◧◩◪◨⬒⬓⬔⧯▣▦▧
11. WJW+wL[view] [source] 2023-11-22 12:14:18
>>golden+wK
I would expect that actual AI researchers understand that you cannot break the laws of physics just by thinking better. Especially not with ever better LLMs, which are fundamentally in the business of regurgitating things we already know in different combinations rather than inventing new things.

You seem to be equating AI with magic, which it is very much not.

◧◩◪◨⬒⬓⬔⧯▣▦▧▨
12. golden+bh1[view] [source] 2023-11-22 15:02:01
>>WJW+wL
LLMs are able to do complex logic within the world of words. It is a smaller matrix than our world, but fueled by the same chaotic symmetries of our universe. I would not underestimate logic, even when not given adequate data.
◧◩◪◨⬒⬓⬔⧯▣▦▧▨◲
13. WJW+jy1[view] [source] 2023-11-22 16:18:01
>>golden+bh1
You can make it sound as esoteric as you want, but in the end an AI will still be bound by the laws of physics. Being infinitely smart will not help with that.

I don't think you understand logic very well btw if you wish to suggest that you can reach valid conclusions from inadequate axioms.

◧◩◪◨⬒⬓⬔⧯▣▦▧▨◲◳
14. golden+uy1[view] [source] 2023-11-22 16:18:51
>>WJW+jy1
Axioms are constraints as much as they might look like guidance. We live in a neuromorphic computer. Logic explores this, even with few axioms. With fewer axioms, it will be less constrained.