Sam Altman spoke at an APEC panel on behalf of OpenAI literally yesterday: https://twitter.com/LondonBreed/status/1725318771454456208
"OpenAI is governed by the board of the OpenAI Nonprofit, comprised of OpenAI Global, LLC employees Greg Brockman (Chairman & President), Ilya Sutskever (Chief Scientist), and Sam Altman (CEO), and non-employees Adam D’Angelo, Tasha McCauley, Helen Toner." [1]
https://www.themarysue.com/annie-altmans-abuse-allegations-a...
After four and a half intense and wonderful years as CEO of Groupon, I've decided that I'd like to spend more time with my family. Just kidding – I was fired today. If you're wondering why ... you haven't been paying attention.
https://www.theguardian.com/technology/blog/2013/mar/01/grou...
[0]: https://x.com/phuckfilosophy/status/1710371830043939122
It's a corollary to my theory that anybody who maintains close ties with their family and lives with them is a wholesome person.
But... what are the responsibilities of the board that may be hindered? I studied https://openai.com/our-structure
One tantalising statement in there is that an AGI-level system would not be bound by the licensing agreements (ostensibly to Microsoft) that a sub-AGI system is.
This phase shift puts pressure on management not to declare that an AGI-level threshold has been reached. But have they?
Of course, it could be an ordinary everyday scandal but given how well they are doing, I'd imagine censure/sanctions would be how that is handled.
I so hate to do this, but for those who are comfortable viewing HN in an incognito window, it will be much faster that way. (Edit: this comment originally said to log out, but an incognito window is better because then you don't have to log back in again. Original comment: logging in and out: HN gets a lot faster if you log out, and it will reduce the load on the server if you do. Make sure you can log back in later! or if you run into trouble, email hn@ycombinator.com and I'll help)
I've also turned pagination down to a smaller size, so if you want to read the entire thread, you'll need to click "More" at the bottom, or like this:
https://news.ycombinator.com/item?id=38309611&p=2
https://news.ycombinator.com/item?id=38309611&p=3
https://news.ycombinator.com/item?id=38309611&p=4
https://news.ycombinator.com/item?id=38309611&p=5
Sorry! Performance improvements are inching closer...
https://twitter.com/phuckfilosophy/status/163570439893983232...
https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman...
On one hand, OpenAI is completely (financially) premised on the belief that AGI will change everything, 100x return, etc. but then why did they give up so much control/equity to Microsoft for their money?
Sam finally admitted recently that for OpenAI to achieve AGI they "need another breakthrough," so my guess is that this lie is what cost him his sandcastle. I know as a researcher that OpenAI, and Sam specifically, were lying about AGI.
Screenshot of Sam's quote RE needing another breakthrough for AGI: https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_pr... source: https://garymarcus.substack.com/p/has-sam-altman-gone-full-g...
Flagged HN thread: >>37785072
When I googled his name I saw the same cached text show up.
EDIT: As a few have pointed out, this looks like text from a tweet he quoted, and it's incorrectly showing as the description under his google search result.
This from 2021? >>37785072
Bad if true, but highly unlikely that it is.
And possibly related: the pause of ChatGPT Plus sign-ups due to capacity problems (which is all Azure, afaik).
It doesn't look like he had a hint about this:
> I am super excited. I can't imagine anything more exciting to work on.
https://manifold.markets/Ernie/what-will-sam-altman-be-doing...
And this tag contains all the markets about him https://manifold.markets/browse?topic=sam-altman
Will he end up at Grok? Why was he fired? etc.
Some years go by, and AGI progresses to assault man
Atop a pile of paper clips he screams "It's not my fault, man!"
But Eliezer's long since dead, and cannot hear Sam Altman.
--
Scott Alexander
[0] https://www.youtube.com/live/U9mJuUkhUzk?si=dyXBxi9nz6MocLKO
Just something he retweeted long ago
I don't know about Skynet, since that happened 26 years ago [1], but I imagine the NSA, the military, and other government agencies approached the company.
[1] https://en.wikipedia.org/wiki/Terminator_2:_Judgment_Day
Edit: I didn't even know he molested his sister when I wrote my post: https://twitter.com/phuckfilosophy/status/163570439893983232...
He confirmed it verbally as well in his May 2023 hearing in Congress https://twitter.com/thesamparr/status/1658554712151433219?la...
Your use of "crazy abuse allegations" is strange to me as well. I hardly see any of her allegations as being "crazy".
Here's a collection of things she's said about the abuse.
https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman...
It was just posted but was filmed on November 1st.
As far as whether this might be the cause, one possible scenario: the board hired a law firm to investigate, Sam made statements that were contradicted by credible evidence, and that was the fireable event. Brockman could have helped cover this up. Again, not saying that this is what happened but it's plausible.
BTW Rubin's $90M payout a) caused a shitstorm at Google b) was determined in part by David Drummond, later fired in part due to sexual misconduct. I would not use this as a representative example, especially since Google now has a policy against such payouts: https://www.cbsnews.com/news/andy-rubin-google-settlement-se...
> Many critics have called Worldcoin's business—of scanning eyeballs in exchange for crypto—dystopian and some have compared it to bribery.
https://time.com/6300522/worldcoin-sam-altman/
> market makers control 95% of the total circulating supply at launch, leading to an initial market imbalance.
https://beincrypto.com/worldcoin-wld-privacy-risk/
> Worldcoin’s use of biometric data, which is unusual in crypto, raises the stakes for regulators. Multiple agencies expressed safety concerns amid reports of the sale of Worldcoin digital identities, known as World IDs, on virtual black markets, the ability to create and profit off of fake IDs, as well as the theft of credentials for operators who sign up new users.
https://www.bloomberg.com/news/newsletters/2023-08-23/worldc...
Lots more signups recently + OpenAI losing $X for each user = accelerating losses the board wasn't aware of?
Given the sudden shift in billing terms that is quite possible.
https://twitter.com/phuckfilosophy/status/163570439893983232...
Dude, where have you been for the past decade?
> Andy Rubin got a $90M severance payout from Google after running a sex-slave dungeon on his personal time.
And the colossal blowback that caused means it ain't ever happening again. Just two months ago a tech CEO was forced to resign immediately for egregious conduct, losing $100+ million in the process: https://nypost.com/2023/09/20/cs-disco-ceo-kiwi-camara-loses...
Nov 6 - OpenAI DevDay, with new features like build-your-own ChatGPTs and more
Nov 9 - Microsoft cuts employees off from ChatGPT due to "security concerns" [0]
Nov 9 - OpenAI experiences severe downtime the company attributes to a "DDoS" (not the correct term for 'excess usage') [3]
Nov 15 - OpenAI announces no new ChatGPT Plus upgrades [1] but still allows regular signups (and still does)
Nov 17 - OpenAI fires Altman
Put the threads together - one theory: the new release had a serious security issue, leaked a bunch of data, and it wasn't disclosed, but Microsoft knew about it.
This wouldn't be the first time - in March there was an incident where users were seeing the private chats of other users [2]
Further extending theory - prioritizing getting to market overrode security/privacy testing, and this most recent release caused something much, much larger.
Further: CTO Mira / others internally concerned about launch etc. but overruled by CEO. Kicks issue up to board, hence their trust in her taking over as interim CEO.
edit: added note on DDoS (thanks kristjansson below) - and despite the downtime it was only upgrades to ChatGPT Plus with the new features that were disabled. Note on why CTO would take over.
[0] https://www.cnbc.com/2023/11/09/microsoft-restricts-employee...
[1] https://twitter.com/sama/status/1724626002595471740
[2] https://www.theverge.com/2023/3/21/23649806/chatgpt-chat-his...
[3] https://techcrunch.com/2023/11/09/openai-blames-ddos-attack-...
Third, the board remains majority independent. Independent directors do not hold equity in OpenAI. Even OpenAI’s CEO, Sam Altman, does not hold equity directly. His only interest is indirectly through a Y Combinator investment fund that made a small investment in OpenAI before he was full-time.
I sincerely hope this is about the man and not the AI.
> Even OpenAI’s CEO, Sam Altman, does not hold equity directly. His only interest is indirectly through a Y Combinator investment fund that made a small investment in OpenAI before he was full-time.
That word “directly” seems to be relevant here.
https://x.com/ericschmidt/status/1725625144519909648?s=20
Sam Altman is a hero of mine. He built a company from nothing to $90 Billion in value, and changed our collective world forever. I can't wait to see what he does next. I, and billions of people, will benefit from his future work- it's going to be simply incredible. Thank you @sama for all you have done for all of us.
Making such a statement before knowing what happened (or, maybe he does know what happened) makes this seem like it might not be as bad as we think?
> OpenAI was founded as a non-profit in 2015 with the core mission of ensuring that artificial general intelligence benefits all of humanity. In 2019, OpenAI restructured to ensure that the company could raise capital in pursuit of this mission, while preserving the nonprofit's mission, governance, and oversight. The majority of the board is independent, and the independent directors do not hold equity in OpenAI. While the company has experienced dramatic growth, it remains the fundamental governance responsibility of the board to advance OpenAI’s mission and preserve the principles of its Charter.
This prompted me to actually read up on the charter: https://openai.com/charter
At least one of them must jointly make this decision with the three outside board members. I’d say it’s more likely to be business related. (In addition, the CTO is appointed as the interim CEO.) (Edit: But obviously we currently don’t really know. I think the whistleblower theory below is possible too.)
The announcement: https://openai.com/blog/openai-announces-leadership-transiti...
“OpenAI’s board of directors consists of OpenAI chief scientist Ilya Sutskever, independent directors Quora CEO Adam D’Angelo, technology entrepreneur Tasha McCauley, and Georgetown Center for Security and Emerging Technology’s Helen Toner. …..
As a part of this transition, Greg Brockman will be stepping down as chairman of the board and will remain in his role at the company, reporting to the CEO.“
Previous members: https://openai.com/our-structure
“Our board: OpenAI is governed by the board of the OpenAI Nonprofit, comprised of OpenAI Global, LLC employees Greg Brockman (Chairman & President), Ilya Sutskever (Chief Scientist), and Sam Altman (CEO), and non-employees Adam D’Angelo, Tasha McCauley, Helen Toner.”
"Second, because the board is still the board of a Nonprofit, each director must perform their fiduciary duties in furtherance of its mission—safe AGI that is broadly beneficial. While the for-profit subsidiary is permitted to make and distribute profit, it is subject to this mission. The Nonprofit’s principal beneficiary is humanity, not OpenAI investors."
So, if I were to speculate, it was because they were at odds over profit/non-profit nature of the future of OpenAI.
Sexual abuse by Sam when she was four years old and he was 13.
Develops PCOS (which has seen some association with child abuse) and childhood OCD and depression. Thrown out. Begins working as a sex worker for survival. It's a really grim story.
"Like 4 times now in the history of OpenAI, the most recent time was just in the last couple of weeks, I've gotten to be in the room when we sort of like, pushed the veil of ignorance back and the frontier of discovery forward. And getting to do that is like the professional honor of a lifetime".
https://www.youtube.com/watch?v=ZFFvqRemDv8#t=13m22s
This is going to sound terrible, but I really hope this is a financial or ethical scandal about Sam Altman personally and he did something terribly wrong, because the alternative is that this is about how close we are to true AGI.
Superhuman intelligence could be a wonderful thing if done right, but the world is not ready for a fast takeoff, and it seems the governance structure of OpenAI certainly wouldn't be ready for it either.
[0]: https://uk.pcmag.com/ai/149685/discord-is-shutting-down-its-...
He's also said very recently that to get to AGI "we need another breakthrough" (source https://garymarcus.substack.com/p/has-sam-altman-gone-full-g... )
To predicate a company as massive as OpenAI on a premise that you know to be untrue seems like a big enough lie.
https://twitter.com/phuckfilosophy/status/163570439893983232...
EDIT:
The episode is here: https://www.youtube.com/watch?v=4spNsmlxWVQ
"somebody has to own the residual value of the company, sam controls the non profit, and so the non profit after all equity gets paid out at lower valuations, owns the whole company. Sam altman controls all of open ai if its a trillion dollar valuation. Which if true would be a huge scandal"
That you did not know that does not give me confidence in the rest of your argument. Please do your research. There's a LOT of hype to see beyond.
Altman has been at OpenAI since the beginning, and since the beginning OpenAI has been heavily premised on AGI/superintelligence.
Wow, that university rings some bells https://en.wikipedia.org/wiki/Singularity_Group#Controversie...
"An investigative report from Bloomberg Businessweek found many issues with the organization, including an alleged sexual harassment of a student by a teacher, theft and aiding of theft by an executive, and allegations of gender and disability discrimination.[12] Several early members of Singularity University were convicted of crimes, including Bruce Klein, who was convicted in 2012 of running a credit fraud operation in Alabama, and Naveen Jain, who was convicted of insider trading in 2003.[12]
In February 2021, during the COVID-19 pandemic, MIT Technology Review reported that a group owned by Singularity, called Abundance 360, had held a "mostly maskless" event in Santa Monica ... The event, led by Singularity co-founder Peter Diamandis, charged up to $30,000 for tickets."
"Sam Altman was actually typing out all the chatgpt responses himself and the board just found out"
No lol: https://www.foxnews.com/media/elon-musk-hints-at-lawsuit-aga...
I wouldn't be surprised if Sam's leadership direction is related to the ousting.
More importantly, OpenAI's claim (whether you believe it or not) has always been that their structure is optimised towards building AGI, and that everything else including the for-profit part is just a means to that end: https://openai.com/our-structure and https://openai.com/blog/openai-lp
Either the board doesn't actually share that goal, or what you are saying shouldn't matter to them. Sam isn't an engineer, it's not his job to make the breakthrough, only to keep the lights on until they do if you take their mission literally.
Unless you're arguing that Sam claimed they were closer to AGI to the board than they really are (rather than hiding anything from them) in order to use the not-for-profit part of the structure in a way the board disagreed with, or some other financial shenanigans?
As I said, I hope you're right, because the alternative is a lot scarier.
https://finance.yahoo.com/news/softbank-takes-14b-hit-wework...
Adam is good at making people rich, but those people are not his investors.
https://twitter.com/phuckfilosophy/status/163570439893983232...
I can do anything I want with her - Silicon Valley S5: https://www.youtube.com/watch?v=29MPk85tMhc
>That guy definitely fucks that robot, right?
That "handsy greasy little weirdo" Silicon Valley character Ariel and his robot Fiona were obviously based on Ben Goertzel and Sophia, not Sam Altman, though.
https://en.wikipedia.org/wiki/Ben_Goertzel
https://www.reddit.com/r/SiliconValleyHBO/comments/8edbk9/th...
>The character of Ariel in the current episode instantly reminded me of Ben Goertzel, whom i stumbled upon couple of years ago, but did not really paid close attention to his progress. One search later:
VIDEO Interview: SingularityNET's Dr Ben Goertzel, robot Sophia and open source AI:
https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman...
> Only a minority of board members are allowed to hold financial stakes in the partnership at one time. Furthermore, only board members without such stakes can vote on decisions where the interests of limited partners and OpenAI Nonprofit’s mission may conflict—including any decisions about making payouts to investors and employees.
So given the latest statement from the board emphasizing their mission, it could be that Brockman and Sutskever were not able to participate in the board decision to fire Altman, making it a 3-to-2 or 4-to-1 vote against Altman.
https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman...
Turn that on its head: was he standing in the way of a commercial sale or agreement with Microsoft?
He may not be the villain.
But who knows; it feels like an episode of Silicon Valley!
I don't really see anything[1] that suggests that this sentence is true. Now, I'm not saying that he hasn't been successful, but there's "successful" and then there's your hyperbole.
I think your assumption is misinformed. I asked ChatGPT the same question, and it looked up the news online and delivered a sparser, but accurate reply.
The GPT-4 knowledge cutoff was recently updated to April 2023, btw.
https://chat.openai.com/share/66e87457-834f-422f-9b16-40902b...
https://en.wikipedia.org/wiki/ChaCha_(search_engine)
Seriously though... I just remembered this was a thing and now I'm having crazy nostalgia.
* normally we wouldn't do that, but in threads that have a YC connection we moderate less, not more - see https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
> There’s been a vibe change at openai and we risk losing some key ride or die openai employees.
Actually I normally would have detached it from the parent, especially because it's part of a top-heavy subthread, but I specifically didn't do that in this case because of the principle described here: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu....
EDIT: A somewhat more detailed view of the structure, based on OpenAI’s own description, is at >>38312577
The entire final storyline is about an AI trying to take over -- if you haven't watched it, you should! But many of my friends who live and work in Silicon Valley can't stand watching it, because it strikes too close to home, not because it isn't funny.
I think it's much more likely that Elon Musk fucked a robot, after having mistaken it for a human being in a robot suit.
Scroll down on the page, OpenAI is listed as a model provider, with logo and everything.
Or do you mean some kind of more 'direct' deal with military?
It was abundantly obvious how he was using weasel language like "I'm very 'nervous' and a 'little bit scared' about what we've created [at OpenAI]" and other such BS. We know he was after "moat" and "regulatory capture", and we know where that all leads — a net [long-term] loss for society.
[1] >>35960125
With this apparent rush, I'd hazard the guess that the situation just happened to unfold on a Friday and wasn't planned as such.
https://learn.microsoft.com/en-us/legal/cognitive-services/o...
Folks like Schmidt, Levchin, Chesky, and Conrad have Twitter posts up that weirdly read like obituaries.
[1] https://www.irs.gov/charities-non-profits/publications-for-e...
> I feel compelled as someone close to the situation to share additional context about Sam and company.
> Engineers raised concerns about rushing tech to market without adequate safety reviews in the race to capitalize on ChatGPT hype. But Sam charged ahead. That's just who he is. Wouldn't listen to us.
> His focus increasingly seemed to be fame and fortune, not upholding our principles as a responsible nonprofit. He made unilateral business decisions aimed at profits that diverged from our mission.
> When he proposed the GPT store and revenue sharing, it crossed a line. This signaled our core values were at risk, so the board made the tough decision to remove him as CEO.
> Greg also faced some accountability and stepped down from his role. He enabled much of Sam's troubling direction.
> Now our former CTO, Mira Murati, is stepping in as CEO. There is hope we can return to our engineering-driven mission of developing AI safely to benefit the world, and not shareholders.
[0] https://www.reddit.com/r/OpenAI/comments/17xoact/sam_altman_...
[1] take it with a grain of salt
According to their website, it's four entities:
1. OpenAI Global LLC (the for-profit firm that does most of the actual work), which Microsoft and #2 co-own.
2. A holding company, which #3 controls and #4 and other investors own.
3. OpenAI GP LLC, a management entity that #4 owns and which controls #3.
4. The OpenAI Nonprofit.
(There's a blog entry about OpenAI LP, a for-profit limited partnership, being founded in 2019, and I've seen information about them from earlier in 2023, but they aren't listed in the current structure. That might be the holding company, with the other investors as limited partners; it's odd, if so, that it's not named on the structure diagram and description.)
https://www.folklore.org/StoryView.py?project=Macintosh&stor...
https://x.com/openai/status/1725611900262588813
How crazy is that?!
(Edit 2 minutes after) .. and /there/ Greg quit!!
I kid you not, sitting in a fancy seat, Altman is talking about "Platonic ideals". See the penultimate question on whether AI should be prescriptive or descriptive about human rights (around 1h 35sec mark). I'll let you decide what to make of it.
Greg resigned. Things are happening fr
Microsoft had inside information about their security, which is why they restricted access. Meanwhile, every other enterprise and gov organisation using ChatGPT is exposed.
https://www.reuters.com/article/us-microsoft-settlement/micr...
This is hardly unexpected for profound allegations without strong supporting evidence, and yes, I'm well aware that presentation of any evidence would be difficult to validate on HN, such that a third-party assessment (as in a court of law, for example) would typically be required.
I'm not claiming that HN has a stellar record of dealing with unpleasant news or inconvenient facts. But that any such bias originates from YC rather than reader responses and general algorithmic treatments (e.g., "flamewar detector") is itself strongly unsupported, and your characterisation above really is beyond the pale.
I'm not sure. I agree with your point re: wording, but the situation with his sister never really got resolved, so I can't help but wonder if it's related. https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman...
@dang, why have you been saying you're working on performance improvements re: pagination for three years[0]? Are there any prior architectural decisions holding you back? The "Click more" on very popular topics has turned into a bit of a meme.
[0]: https://hn.algolia.com/?dateRange=all&page=2&prefix=true&que...
Sam Altman in particular has precedent, with Worldcoin, that should make you wary of defending him on that particular point.
https://www.buzzfeednews.com/article/richardnieva/worldcoin-...
https://www.technologyreview.com/2022/04/06/1048981/worldcoi...
She also says that there will be many more top employees leaving.
Speculations about these source materials can be traced back as far as 2020: https://twitter.com/theshawwn/status/1320282152689336320
I don't think this issue would've flown under the radar for so long, especially with the implication that Ilya sided with the rest of the board to vote against Sam and Greg.
https://www.theverge.com/2018/3/6/17086276/google-ai-militar...
But that $1 salary thing got quoted into a meme, and people didn't understand the true implication.
The idea is that employee and CEO incentives should be aligned -- they are part of a team. If Jobs actually had NO equity like Altman claims, then that wouldn't be the case! Which is why it's important for everyone to be clear about their stake.
It's definitely possible for CEOs to steal from employees. There are actually corporate raiders, and Jobs wasn't one of them.
(Of course he's no saint, and did a bunch of other sketchy things, like collusion to hold down employee salaries, and financial fraud:
https://www.cnet.com/culture/how-jobs-dodged-the-stock-optio...
The SEC's complaint focuses on the backdating of two large option grants, one of 4.8 million shares for Apple's executive team and the other of 7.5 million shares for Steve Jobs.)
I have no idea what happened in Altman's case. Now I think there may not be any smoking gun, but just an accumulation of all these "curious" and opaque decisions and outcomes. Basically a continuation of all the stuff that led a whole bunch of people to leave a few years ago.
Worldcoin. Which is, to put it mildly, not positive.
https://www.technologyreview.com/2022/04/06/1048981/worldcoi...
https://www.buzzfeednews.com/article/richardnieva/worldcoin-...
https://nymag.com/intelligencer/article/sam-altman-artificia...
https://www.independent.co.uk/tech/chatgpt-ai-agi-sam-altman...
I don't really get "meme" culture but is that really how someone who believed their company is going to create AGI soon would behave? Turning the possibility of the success of their mission into a punchline?
According to Wikipedia it's "the East Slavic form of the male Hebrew name Eliyahu (Eliahu), meaning 'My God is Yahu/Jah.'"
[1] https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman...
Isn't this already generally known to be true (and ironically involving Mechanical Turk-like services)?
Not sure if these are all the same sources I read a while ago, but e.g.:
https://www.theverge.com/features/23764584/ai-artificial-int...
https://www.marketplace.org/shows/marketplace-tech/human-lab...
https://www.technologyreview.com/2022/04/20/1050392/ai-indus...
https://time.com/6247678/openai-chatgpt-kenya-workers/
https://www.vice.com/en/article/wxnaqz/ai-isnt-artificial-or...
https://www.noemamag.com/the-exploited-labor-behind-artifici...
https://www.npr.org/2023/07/06/1186243643/the-human-labor-po...
I just tried "Write a summary of the content, followed by a list in bullet format of the most interesting points. Bold the bullet points, followed by a 100-character summary of each." Here's the output: https://s.drod.io/DOuPLxwP
Also interesting is "List the top 10 theories of why Sam Altman was fired by the OpenAI board in table format, with the theory title in the first column and a 100 word summary in the second column." Here's that output: https://s.drod.io/v1unG2vG
Helps to turn markdown mode on to see the list & table.
Hope that helps!
"In a post to X Friday evening, Mr. Brockman said that he and Mr. Altman had no warning of the board’s decision. “Sam and I are shocked and saddened by what the board did today,” he wrote. “We too are still trying to figure out exactly what happened.”
Mr. Altman was asked to join a video meeting with the board at noon on Friday and was immediately fired, according to Mr. Brockman. Mr. Brockman said that even though he was the chairman of the board, he was not part of this board meeting.
He said that the board informed him of Mr. Altman’s ouster minutes later. Around the same time, the board published a blog post."
[1] https://www.nytimes.com/2023/11/17/technology/openai-sam-alt...
> Serious revenue streams like having Google for a patron yes? I feel like the context is important here because […]
For that specific example, Mozilla did also go with Yahoo for as-good revenue for a couple of years IIRC, and they are also able to (at least try to) branch out with their VPN, Pocket, etc. The Google situation is more a product of simply existing as an Internet-dependent company in the modern age, combined with some bad business decisions by the Mozilla Corporation, that would have been the case regardless of their ownership structure.
> Which is great and possible in theory, but […] is ultimately only sustainable because of a patron who doesn't share in exemplifying that same idealism.
The for-profit-owned-by-nonprofit model works, but as with most things it tends to work better if you're in a market that isn't dominated by a small handful of monopolies which actively punish prosocial behaviour:
https://en.wikipedia.org/wiki/Stichting_IKEA_Foundation
https://foundation.mozilla.org/en/what-we-fund/
> people are trying to defend OpenAI's structure as somehow well considered and definitely not naively idealistic.
Ultimately I'm not sure what the point you're trying to argue is.
The structure's obviously not perfect, but the most probable alternatives are to either (1) have a single for-profit that just straight-up doesn't care about anything other than greed, or (2) have a single non-profit that has to rely entirely on donations without any serious commercial power, both of which would obviously be worse scenarios.
They're still beholden to market forces like everybody else, but a couple hundred million dollars in charity every year, plus a couple billion-dollar companies that at least try to do the right thing within the limits of their power, is obviously still better than not.
[1] - https://sfstandard.com/2023/11/17/openai-sam-altman-firing-b...
https://www.forbes.com/sites/davidjeans/2023/10/23/eric-schm...
>What happened at OpenAI today is a Board coup that we have not seen the likes of since 1985 when the then-Apple board pushed out Steve Jobs. It is shocking; it is irresponsible; and it does not do right by Sam & Greg or all the builders in OpenAI.
https://www.japantimes.co.jp/business/2023/11/08/companies/s...
- "anyone with the link"
- "only my organization" (i.e., people who have registered w/ the same biz email domain)
- "just me"
You can see those SmartChat™ dynamic container tags because I have at least one piece of "anyone with the link" content in each of them.
Our goal is to de-silo content as much as possible -- i.e., as much as the person who's uploading the content wants it to be open vs. closed.
More at https://www.web.storytell.ai/support/smartchat-tm/how-to-man...
For example:
- We have a Chrome extension at https://go.Storytell.ai/chrome that I used to ingest all the HN comments; you can run that on any HN page to summarize all the comments in real time. (Here's an Adobe PMM talking about how he uses it: https://www.tiktok.com/@storytell.ai/video/72996137210752566... )
- We've also built OpenAI's Assistant API into Storytell to process both structured data like CSVs along-side unstructured data like PDFs: https://www.web.storytell.ai/support/engineering-demos-updat...
Apparently Microsoft was also blindsided by this.
https://www.axios.com/2023/11/17/microsoft-openai-sam-altman...
You can also get to the "ground truth" data by clicking on the [x] reference foot notes which will open up a 3rd panel with the Story Tiles that we pull from our vector DB to construct the LLM response.
Here's an example of how it works -- I asked for a summary of what happened in the voice of Dr. Seuss: https://s.drod.io/9ZuL6Xx8
One of the board members who fired him co-signed these AI principles (https://futureoflife.org/open-letter/ai-principles/), which are very much in line with safeguarding general intelligence.
Another of them wrote this article (https://www.foreignaffairs.com/china/illusion-chinas-ai-prow...) in June of this year that opens by quoting Sam Altman saying US regulation will "slow down American industry in such a way that China or somebody else makes faster progress” and basically debunks that stance...and quite well, I might add.
Here's the top-most featured snippet when I google if programming languages had honest slogans: https://medium.com/nerd-for-tech/if-your-favourite-programmi...
Half of the above post is plagiarised from my 2020 post: https://betterprogramming.pub/if-programming-languages-had-h...
Real chance of an exodus, which will be an utter shame.
Listening to it again now, it feels like he might have known what was going on:
https://youtu.be/JN3KPFbWCy8?si=WnCdW45ccDOb3jgb&t=5100
Edit: Especially this part: "It was created as a non-profit open source and now it is closed-source for maximum profit... Which I think is not good karma..."
1. (6015) Stephen Hawking dying
2. (5771) Apple's letter related to the San Bernardino case
3. (4629) Sam Altman getting fired from OpenAI (this thread)
4. (4338) Apple's page about Steve Jobs dying
5. (4310) Bram Moolenaar dying
Ilya clearly has a different approach from Sam's.
Most likely their share is this high to guarantee that no other company will compete for the shares or IP. The OpenAI non-profit also excluded anything that would be considered "AGI" from the deal with Microsoft.
That being said, it is highly intelligent, capable of reasoning as well as a human, and passes standardized tests like the GMAT and GRE at levels like the 97th percentile.
Most people who talk about ChatGPT don't even realize that GPT-4 exists and is orders of magnitude more intelligent than the free version.
https://techcrunch.com/2023/02/21/the-non-profits-accelerati...
IMO the main reason it's distinguishable is because it keeps explicitly telling you it's an AI.
The first thing I saw this morning was this video [1] shared on Reddit, and then I said, "Wow! This is really scary to just think about. Nice try anyway." Then I started my computer and, of course, checked HN, and was blown away by this +4k thread, and it turned out the video I watched was not made for fun but was a real scenario!
I know this feels hard. You spend years building such a successful company with an extremely exceptional product and, without a hint or warning, you find yourself fired!
This tragedy reminds me of Steve Jobs and Jack Dorsey, who were kicked out of the companies they founded, but both were able to found another company and were extremely successful. Will Sam be able to do it? I don't know, but the future will reply with a detailed answer for sure.
______________________
1. https://twitter.com/edmondyang/status/1725645504527163836
I follow Roger Penrose's thinking here. [1]
> Everything I'd heard about those 3 [Elon Musk, sama and gdb] was that they were brilliant operators and that they did amazing work. But it felt likely to be a huge culture shock on all sides.
> But the company absolutely blossomed nonetheless.
> With the release of Codex, however, we had the first culture clash that was beyond saving: those who really believed in the safety mission were horrified that OAI was releasing a powerful LLM that they weren't 100% sure was safe. The company split, and Anthropic was born.
> My guess is that watching the keynote would have made the mismatch between OpenAI's mission and the reality of its current focus impossible to ignore. I'm sure I wasn't the only one that cringed during it.
> I think the mismatch between mission and reality was impossible to fix.
jph goes on in detail in this Twitter thread: https://twitter.com/jeremyphoward/status/1725714720400068752
Also, the paradox in the reactions to Sam Altman's firing is striking:
while there's surprise over it, the conversation here focuses mostly on its operational impact, overlooking the human aspect.
This oversight itself seems to answer why it happened – if the human element is undervalued and operations are paramount, then this approach not only explains the firing but also suggests that it shouldn't be surprising.
Another important question not discussed here: who sits on the board of OpenAI exactly and in full?
Another important aspect: The Orwellian euphemism used in the official announcement^0: “Leadership transition”. Hahaha :) Yes, I heard they recently had some "leadership transitions" in Myanmar, Niger and Gabon, too. OpenAI announces “leadership transition” is November 2023’s “Syria just had free and fair elections”
0: https://openai.com/blog/openai-announces-leadership-transiti...
[0] https://en.wikipedia.org/wiki/IBook#iBook_G3_(%22Clamshell%2... [1] https://www.youtube.com/watch?v=1MR4R5LdrJw
- Sam Altman _briefly_ went on record saying that OpenAI was extremely GPU constrained. The article was quickly redacted.
- Most recent round literally was scraping the bottom of the barrel of the cap table: https://www.theinformation.com/articles/thrive-capital-to-le...
- Plus signups paused.
If OpenAI needs gpu to succeed, and can't raise any more capital to pay for it without dilution/going past MSFT's 49% share of the for-profit entity, then the corporate structure is hampering the company's success.
Sam & team needed more GPU and failed to get it at OpenAI. I don't think it's any more complex than that.
https://en.wikipedia.org/wiki/OpenAI#2019:_Transition_from_n...
https://www.reddit.com/r/ClaudeAI/comments/166nudo/claudes_c...
Q: Can you decide on a satisfying programming project using noisemaps?
A: I apologise, but I don't feel comfortable generating or discussing specific programming ideas without a more detailed context. Perhaps we could have a thoughtful discussion about how technology can be used responsibly to benefit society?
It's astonishing that a breakthrough as important as LLMs is being constantly blown up by woke activist employees who think that word generators can actually have or create "safety" problems. Part of why OpenAI has been doing so well is because they did a better job of controlling the SF lunatic tendencies than Google, Meta and other companies. Presumably that will now go down the toilet.
The board is for the non-profit that ultimately owns and totally controls the for-profit company.
Everyone who works for or invests in the for-profit company has to sign an operating agreement that states the for-profit actually does not have any responsibility to generate profit, and that its primary duty is to fulfill the charter and mission of the non-profit.
Is it fundamental? I don't think so. GPT was trained largely on random internet crap. One of the popular datasets is literally called The Pile.
If you just use The Pile as a training dataset, the AI will learn very little reasoning, but it will learn to make some plausible shit up, because that's the training objective. Literally. It's trained to guess the Pile.
Is that the only way to train an AI? No. E.g., check the "Textbooks Are All You Need" paper: https://arxiv.org/abs/2306.11644 A small model trained on a high-quality dataset can beat much bigger models at code generation.
So why are you so eager to use a low-quality AI trained on crap? Can't you wait a few years until they develop better products?
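"Trained to guess the Pile" is, for what it's worth, an accurate gloss of the standard objective: next-token prediction rewards plausible continuations of the training text, whatever its quality. A minimal PyTorch-style sketch (the model here is a placeholder for any autoregressive LM):

  import torch.nn.functional as F

  def next_token_loss(model, tokens):
      # tokens: (batch, seq_len) token ids sampled straight from the corpus
      inputs, targets = tokens[:, :-1], tokens[:, 1:]  # shift by one position
      logits = model(inputs)                           # (batch, seq_len-1, vocab)
      # Cross-entropy against the corpus itself: the model is rewarded for
      # guessing whatever the training text says next, plausible or not.
      return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                             targets.reshape(-1))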
2018 NYT article: https://www.nytimes.com/2018/04/19/technology/artificial-int...
OpenAI recently updated their “company structure” page to include a note saying the Microsoft deal only applies to pre-AGI tech, and the board determines when they’ve reached AGI.
There's far more to the world than China, on top of that, and importantly, developments happen both inside and outside the scope of regulatory oversight (usually only heavily commercialized products face scrutiny), and China itself will eventually catch up to the average - progress is rarely a non-stop hockey stick; it plateaus. LLMs might already be hitting a wall (https://twitter.com/HamelHusain/status/1725655686913392933)
The Chinese are experts at copying and stealing Western tech. They don't have to be on the frontier to catch up to a crippled US and then continue development at a faster pace, and as we've seen repeatedly in history, regulations stick around for decades after their utility has long passed. They are not levers that go up and down; they go in one direction, and maybe after many, many years of damage they might be adjusted, but usually after 10 starts/stops and half-baked non-solutions papered on as real solutions - if at all.
  (def allstories ()
    "All visible loaded stories"
    (keep cansee (vals items*)))

  (def mostvoted (n (o stories (allstories)))
    "N most upvoted stories"
    (bestn n (compare > len:!votes) stories))

  (def votetimes (s)
    "The timestamp of each vote, in ascending order"
    (sort < (map car s!votes)))

  ; save vote timestamps for top 10 most upvoted stories
  ; each line contains the story id followed by a list of timestamps
  (w/outfile o "storyvotes.txt"
    (w/stdout o
      (each s (mostvoted 10)
        (apply prs s!id (votetimes s))
        (prn))))

  ; paste storyvotes.txt to https://gist.github.com/ and post the url here
Note that this prints the timestamps of all votes, whereas each story's score is vote count minus sockpuppet votes. If you don't want to reveal the timestamps of every vote, you could randomly drop K timestamps for each story, where K is the vote count minus the score. (E.g. >>3078128 has 4338 points, and you'll only reveal 4338 timestamps.) Since there are thousands of votes, this won't skew the scatterplot much.
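For anyone who wants to apply that last privacy tweak to the pasted output, here's a sketch of the dropping step in Python (rather than Arc):

  import random

  def redact(timestamps, score):
      # Keep exactly `score` of the vote timestamps, i.e. drop
      # K = len(timestamps) - score at random, so sockpuppet votes
      # can't be singled out from the published list.
      return sorted(random.sample(timestamps, score))

  # e.g. for a story with a score of 4338 but a few more recorded votes:
  # kept = redact(all_timestamps, 4338)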
Many of the people working for mass media are their own worst enemy when it comes to the profession's reputation. And then they complain that there's too much distrust in the general public.
Anyway, the short version regarding that project is that they use biometric data, encrypt it, and put a "hash"* of it on their blockchain. That's been controversial from the start for obvious reasons, although most of the mainstream criticism is misguided and comes from people who don't understand the tech.
*They call it a hash but I think it's technically not.
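That asterisk has a real technical basis: a true cryptographic hash changes completely on nearly identical inputs (the avalanche effect), so two scans of the same iris would never produce matching hashes, and biometric matching needs a similarity-preserving code instead. A quick illustration with Python's hashlib (the scan values are just stand-ins):

  import hashlib

  scan_a = b"iris-scan-sample-0001"  # two nearly identical inputs
  scan_b = b"iris-scan-sample-0002"

  # Tiny difference in, completely unrelated digests out:
  print(hashlib.sha256(scan_a).hexdigest())
  print(hashlib.sha256(scan_b).hexdigest())
  # A system that must match "close enough" iris scans can't store a plain
  # hash, which is why calling Worldcoin's iris code a "hash" is loose.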
That said, my comment was looking mainly at the results of Marxist ideology in practice, where millions of lives were lost in an attempt to create an idealized world. Here is a good paper on Stalin's utopian ideal [2].
[1] https://www.jstor.org/stable/10.7312/chro17958.7?searchText=...
GitHub Copilot is an auto-completer, and that's, perhaps, a proper use of this technology. At this stage, make auto-completion better. That's nice.
Why is it necessary to release "GPTs"? This is a rush to deliver half-baked tech, just for the sake of hype. Sam was fired for a good reason.
Example: Somebody markets a GPT called "Grimoire", a "100x Engineer". I gave it a task to make a simple game, and it just gave a skeleton of code instead of an actual implementation: https://twitter.com/killerstorm/status/1723848549647925441
Nobody needs this shit. In fact, AI progress can happen faster if people do real research instead of prompting GPTs.
Even using an anonymous account on HN, I'd never express such certainty unaccompanied by any details or explanation for it.
The people on the following list are much wealthier than that VC guy:
https://en.wikipedia.org/wiki/List_of_Tiger_Cubs_(finance)
You can find them on Twitter promoting unsourced COVID vaccine death tolls, claims of "obvious" election fraud in every primary and general election Trump ran in, and I've even seen them tweet each other about Obama's birth certificate being fake as late as 2017. Almost all of them promote the idea that the COVID vaccine is poison and almost all of them promote the idea that Trump hasn't received fair credit for discovering that same vaccine. They're successful because they jerked off the right guy the right way and landed jobs at Tiger.
https://twitter.com/satyanadella/status/1726509045803336122
"to lead a new advanced AI research team"
I would assume that Microsoft negotiated significant rights with regards to R&D and any IP.
If it had been, we wouldn't now be facing an extinction event.