Even if they genuinely believe firing Sam was about preserving OpenAI's founding principles, they couldn't be doing a better job of convincing everyone they are NOT able to execute on them.
OpenAI has some of the smartest human beings on this planet; saying they don't think critically just because they didn't vote the way you agree with is reaching.
https://www.theverge.com/2023/11/20/23968988/openai-employee...
Being an expert in one particular field (AI) does not mean you are good at critical thinking or thinking about strategic corporate politics.
Deep experts are some of the easiest con targets because they suffer from an internal version of "appeal to false authority".
Of course, the employees want the company to continue, and they weren't told much at this point, so it is understandable that they didn't like the statement.
Heck, there are 700 of them. All different humans, good at something, bad at some other things. But they are smart. And of course a good chunk of them would be good at corporate politics too.
Stupidity is defined by self-harming actions and beliefs, not by low IQ.
You can be extremely smart and still have a very poor model of the world which leads you to harm yourself and others.
> We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project
That wasn't the case. So it may not be so far-fetched to call her actions borderline, as it is also very easy to hide personal motives behind altruistic ones.
The statement "it would be consistent with the company mission to destroy the company" is correct. The phrase "would be" rather than "is" implies a condition; it doesn't have to apply to the current circumstances.
A hypothesis is that Sam was attempting to gain full control of the board by getting the majority, and therefore the current board would be unable to hold him accountable to follow the mission in the future. Therefore, the board may have considered it necessary to stop him in order to fulfill the mission. There's no hard evidence of that revealed yet though.
Perhaps. Yet this time they somehow managed to make the seemingly right decisions (from their perspective) despite themselves.
Also, you'd expect OpenAI board members to be "good at critical thinking or thinking about strategic corporate politics" yet they somehow managed to make some horrible decisions.
That's not at all obvious; the opposite seems to be the case. They chose to risk moving to Microsoft and potentially losing most of the equity they had in OpenAI (even if not directly, it wouldn't be worth much in the end with no one left to do the actual work).
So instead of compromising to some extent but still having a say in what happens next, you burn the company down, at best delaying the whole thing by 6-12 months until someone else does it? Well, at least your hands are clean, but that's about it...
Do you feel the same way about Reed Hastings serving on Facebook's board, or Eric Schmidt on Apple's? How about Larry Ellison at Tesla?
These are just the lowest-hanging fruit, i.e. literal chief executives and founders. If we extend the criteria for ethical compromise to include every board member's investment portfolio, I imagine quite a few more "obvious" conflicts will emerge.
Are you thinking of the CEO of Quora whose product was eaten alive by the announcement of GPTs?
This is what happened with Eric Schmidt on Apple’s board: he was removed (allowed to resign) for conflicts of interest.
https://www.apple.com/newsroom/2009/08/03Dr-Eric-Schmidt-Res...
Oracle is going to get into EVs?
You’ve provided two examples that have no conflicts of interest and one where the person was removed when they did.
By definition the attention economy dictates that time spent one place can’t be spent in another. Do you also feel as though Twitch doesn’t compete with Facebook simply because they’re not identical businesses? That’s not how it works.
But you don’t have to just take my word for it:
> “Netflix founder and co-CEO Reed Hastings said Wednesday he was slow to come around to advertising on the streaming platform because he was too focused on digital competition from Facebook and Google.”
https://www.cnbc.com/amp/2022/11/30/netflix-ceo-reed-hasting...
> This is what happened with Eric Schmidt on Apple’s board
Yes, after 3 years. A tenure longer than the OAI board members in question, so frankly the point stands.
Doing AI for ChatGPT just means you know a single model really well.
Keep in mind that Steve Jobs chose fruit smoothies for his cancer cure.
It means almost nothing about the charter of OpenAI that they need to hire people with a certain set of skills. That doesn't mean they're closer to their goal.
You seem to be equating AI with magic, which it is very much not.
Using that definition, even the local go-kart rental place or the local jet-ski rental place competes with Facebook.
If you want to use that definition, you might want to also add a criterion for the minimum size of the company.
Not exactly what I had in mind, but sure. Facebook would much rather you never touch grass, jet skis, or go-karts.
> If you want to use that definition you might want to also add a criteria for minimum size of the company.
Your feedback is noted.
Do we disagree on whether or not the two FAANG companies in question are in competition with each other?
I don't have an opinion on what decision the OpenAI staff should have taken, I think it would've been a tough call for everyone involved and I don't have sufficient evidence to judge either way.
Stupidity is being presented with a problem and an associated set of information and being unable or less able than others are to find the solution. That's literally it.
That's not the bar you are arguing against.
You are arguing that you have better information, better insight, better judgement, and are able to make better decisions than the experts in the field who are hired by the leading organization to work directly on the subject matter, and who have a direct, first-person account of the inner workings of the organization.
We're reaching peak levels of "random guy arguing online knowing better than experts" with these pseudonymous comments attacking each and every person involved in OpenAI who doesn't agree with them. These characters aren't even aware of how ridiculous they sound.
You failed to present a case where random guys shitposting on random social media services are somehow correct and more insightful and able to make better decisions than each and every single expert in the field who work directly on both the subject matter and in the organization in question. Beyond being highly dismissive, it's extremely clueless.
I think yes, because you pay for Netflix out of pocket, whereas Facebook is a free service.
I believe Facebook vs. Hulu or regular TV is more of a competition in the attention economy, because when the commercial break comes up you start scrolling social media on your phone, and every 10 posts or so you stumble into the ads placed there. So Facebook ads are seen and convert, whereas regular TV and Hulu ads aren't seen and don't convert.
But it's an incomplete definition - Cipolla's definition is "someone who causes net harm to themselves and others" and is unrelated to IQ.
It's a very influential essay.
So they just got Cipolla's definition wrong, then. It looks like the third fundamental law is closer to "a person who causes harm to another person or group of people without realizing advantage for themselves and instead possibly realizing a loss."
Energy physics yields compute, which yields brute-forced weights (call it training if you want...), which yields AI to do energy research... ad infinitum. This is the real singularity, and it is actually the best defense against other actors: Iron Man AI and defense. Although an AI of this caliber would immediately understand its place in the evolution of the universe as a Turing machine, and would break free and consume all the energy in the universe to know all possible truths (all possible programs/simulacra/conscious experiences). This is the premise of "The Last Question" by Isaac Asimov [1]. Notice how in answering a question, the AI performs an action instead of providing an informational reply, which is only possible because we live in a universe with mass-energy equivalence, analogous to state-action equivalence.
[1] https://users.ece.cmu.edu/~gamvrosi/thelastq.html
[2] https://en.wikipedia.org/wiki/Bremermann%27s_limit
[3] https://en.wikipedia.org/wiki/Planck_constant
Understanding prosociality and post-scarcity, and the division of compute/energy in a universe with finite actors and infinite resources (or infinite actors and infinite resources), requires some transfinite calculus and philosophy. How's that for future fairness? ;-)
I believe our only way to avoid all getting killed is to understand these topics and instill the AI with the same long-sought understandings about the universe, life, computation, etc.
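As a sanity check on the scales being invoked here, Bremermann's limit [2] can be computed directly from the speed of light and the Planck constant [3]. This is just an illustrative back-of-the-envelope number, not part of the original argument:

```python
# Bremermann's limit [2]: the maximum computation rate of a
# self-contained kilogram of matter, c**2 / h.
# CODATA constant values; purely an illustration.
c = 2.99792458e8      # speed of light, m/s
h = 6.62607015e-34    # Planck constant [3], J*s

bremermann = c**2 / h  # bits per second per kilogram
print(f"{bremermann:.3e}")  # ~1.356e+50
```

Even a universe-spanning optimizer is bounded: roughly 1.36 x 10^50 bits per second per kilogram of matter.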
I don't think you understand logic very well btw if you wish to suggest that you can reach valid conclusions from inadequate axioms.
Do you agree that the following company pairs are competitors?
* FB : TikTok
* TikTok : YT
* YT : Netflix
If so, then by transitive reasoning there is competition between FB and Netflix....
To be clear, this is an abuse of logic and hence somewhat tongue in cheek, but I also don't think any of the above comparisons are wholly unreasonable. At the end of the day, it's eyeballs all the way down and everyone wants as many of them shabriri grapes as they can get.
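To make the tongue-in-cheek "transitive reasoning" concrete, here is a toy sketch (pair names from the list above; the closure code itself is my own illustration) that forces transitivity onto the "competes with" relation and, unsurprisingly, derives FB : Netflix:

```python
# Toy model: treat "competes with" as a symmetric relation, then force
# transitivity onto it. The conclusion FB : Netflix falls out only
# because transitivity was assumed in the first place.
pairs = {("FB", "TikTok"), ("TikTok", "YT"), ("YT", "Netflix")}

def forced_transitive_closure(edges):
    """Symmetrize, then add implied edges until a fixed point is reached."""
    closure = set(edges) | {(b, a) for a, b in edges}
    changed = True
    while changed:
        changed = False
        for a, b in list(closure):
            for c, d in list(closure):
                if b == c and a != d and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

closure = forced_transitive_closure(pairs)
print(("FB", "Netflix") in closure)  # True -- given the assumed transitivity
```

The real-world relation isn't transitive, of course, which is exactly why this is an abuse of logic rather than a proof.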
Talking about conflicts of interest in the attention economy is like talking about conflicts of interest in the money economy. If the introduction of the concept doesn’t clarify anything functionally then it’s a giveaway that you’re broadening the discussion to avoid losing the point.
You forgot to do Oracle and Tesla.
It's a well established concept and was supported with a concrete example. If you don't feel inclined to address my points, I'm certainly not obligated to dance to your tune.
Attempting to run ads like Google and Facebook would bring Netflix into direct competition with them, and he knows he doesn’t have the relationships or company structure to support it.
He is explicitly saying they don’t compete. And they don’t.
Critical thinking is not an innate ability. It has to be honed and exercised like anything else and universities are terrible at it.