Jakub Pachocki, director of research; Aleksander Madry, head of the AI risk team; and Szymon Sidor.
Scoop: theinformation.com
Paywalled link: https://www.theinformation.com/articles/three-senior-openai-...
Submitting information since paywalled links are not permitted.
https://www.semanticscholar.org/author/J.-Pachocki/2713380?s...
As @eachro pointed out, Aleksander Madry is on leave from his MIT professorship. His publications:
In other news, Tesla FSD has been rebranded Saneer-Weeksbooth Autobot. So be careful out there folks.
Advice: you can't win over a narrative, which is what Sam has become. People and resources will come to him on their own.
How much more cost effective to just fool half the board of a nonprofit into taking unnecessarily aggressive action.
"Goddamn, how dare they invent the bigger cannons!?" - Romans, 1453, in Constantinople probably. (One incident where I can use my exempt powers).
We've already trained it on all the data there is; it's not going to get "smarter", and it'll always lack true subjective understanding. So the overhype has been real, indeed to bubble levels, as per OP.
What is your basis for those claims? Especially the first one; I would think it's obvious that it will get smarter; the only questions are how much and how quickly. As far as subjective understanding, we're getting into the nature of consciousness territory, but if it can perform the same tasks, it doesn't really impact the value.
AI is as real as the mobile/internet/pc revolution of the past.
So many use it obsessively every single day.
Sam & Greg could start a new AI company by Monday and instantly achieve unicorn valuation. Hardly a burst.
That's got to be worth something, since Alphabet is a $1.7T company mostly on the strength of ads associated with Google search.
The only time employees leaving has been a bad sign is when the company has been in zombie mode too long (a few years) and hasn't delivered anything yet, or is still in an alpha/beta stage with crappy deliverables that no one is using and no significant uptake.
Also, if there are financial difficulties, a lot of people leave in droves, and some actually come back after the finances get better.
These are huge losses. Pachocki led pre-training for GPT-4, and probably GPT-5. Brockman is the major engineer responsible for the efficiency improvements that enabled ChatGPT and GPT-4 to be even remotely cost-effective. That is a piece that is often overlooked, but OpenAI's advantage over the competition in compute efficiency is probably even larger than the model itself.
Hugely more interested in the open source models now, even if they are not as good at present, because at least there is a near-100% guarantee that they will continue to have community support no matter what; the missing piece, I suppose, is GPUs to run them.
Don't know what to do. Is my investment in their API still worth it? It feels very unstable at this moment.
The fact that it's all well-connected wealthy people is kind of the point; the board is there (among other things) to bring advice and experience.
So here's my theory, which might sound crazy: Sam planned to start a new AI company and take OpenAI's top talent away with him, breaking OpenAI up into the non-profit and his for-profit company.
Sam's first tweet after all this, just hours after this article:
> will have more to say about what’s next later.
So either he knew that he was about to be fired or at least was prepared.
Also, based on the wording of the press release, Sam did something that the board absolutely hated, because most of the time, even if he did something illegal, it doesn't make sense to risk defamation by accusing him publicly. And in his video from yesterday at the APEC summit, he repeated similar lines a few times:
> I am super excited. I can't imagine anything more exciting to work on.
So here if we assume he knew he was about to get fired, the conclusion is clear.
If you build it out this way, then when the next greatest LLM comes out you can plug it into your interface and switch over the tasks it's best at.
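A minimal sketch of that seam in Python; the interface, router, and task names here are invented for illustration, not any particular framework's API:

    from abc import ABC, abstractmethod

    class LLM(ABC):
        """Thin interface the rest of the app codes against."""
        @abstractmethod
        def complete(self, prompt: str) -> str: ...

    class OpenAIChat(LLM):
        """Backend using the v1 openai SDK (pip install openai)."""
        def __init__(self, model: str = "gpt-4"):
            from openai import OpenAI
            self.client = OpenAI()
            self.model = model

        def complete(self, prompt: str) -> str:
            resp = self.client.chat.completions.create(
                model=self.model,
                messages=[{"role": "user", "content": prompt}],
            )
            return resp.choices[0].message.content

    # Route each task to whichever model currently does it best; when a
    # better LLM ships, swapping it in is a one-line change here.
    ROUTER = {
        "summarize": OpenAIChat("gpt-4"),
        "boilerplate": OpenAIChat("gpt-3.5-turbo"),
    }

    def run(task: str, prompt: str) -> str:
        return ROUTER[task].complete(prompt)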
I agree, it's greatly undervalued!
https://time.com/collection/time100-ai/6309033/greg-brockman...
With evidence, or is this the kind of pure speculation that media indulges in when they have no information and have to appear knowledgeable?
AI art is currently at a very early stage. In the real art space (3D modeling, sculpting, animation, VFX, rigging, retargeting), it could make huge breakthroughs and multiply true artists' productivity in significant ways.
Though my investment will still be tiny at the moment, no other multi-modal model on the market right now is as good.
My guess is they know something we don't. Or they assume sama being fired means the trajectory of OpenAI as a company has changed, or is likely to change, significantly?
xAI recently showed that training a decent-ish model is now a multi-month effort. Granted, GPT-4 is still farther along than others, but curious how many months/resources that adds up to when you have the team that built it in the first place.
But also, starting another LLM company might be too obvious a thing to do. Maybe Sam has another trick up his sleeve? Though I suspect he is sticking with AI one way or the other.
Again, to each their own. But GPT doesn't replicate what people use Google for anyway (and what the Google business was built on): commercial info retrieval.
That being said, I might not go further in relying on their APIs for something more serious.
OpenAI was backed by a nonprofit structure, and it still caused Sam to be Michael Dell'ed / Steve Jobs'ed.
Seems like the issue was with having a board he didn't have a majority on. Zuckerberg having 53% of the voting power is probably the greatest thing he managed. Anything Sam does from now on should follow the same model.
/disclaimer - i have no idea how voting shares work.
Microsoft seems like one of the more reliable partners to build on compared to Google etc. just for the simple reason that their customers are large businesses and not breaking things for them is in their blood. Just like Windows backwards compatibility.
AI tools will need a similar plugin-like approach.
I haven't finished making up my mind, but the AIs are doing OK. I have only been asking for code snippets that are easily verifiable.
For the majority of my use of ChatGPT and Google, I need to be able to get useful answers to vague questions - answers that I can confirm for myself through other means - and I need to iterate on those questions to home in on the problem at hand. ChatGPT is undoubtedly superior to Google in that regard.
Would it really have been so hard to find out Sam Altman just got fired without notice?
Look, I'm an AGI/AI researcher myself. I believe and bleed this stuff. AI is here to stay and is forever a part of computing in many ways. Sam Altman and others bastardized it by overhyping it to current levels, derailing real work. All the traction OpenAI has accumulated, outside of GitHub Copilot / Codex, is itself so far away from product-market fit that people are playing off the novelty of AGI / GPT being on its way to "smarter" than human, rather than any real usage.
Hype in tech is real. Overhype and bubbles are real. In AI in particular, there have been AI winters because of the overhype.
What exactly are you saying?
Like, GitHub Copilot may be amazing, but if it loses money for every added user, if power users lose the company 4 or 8 times what they already pay, then maybe it's not an efficient use of compute resources.
It would be my life's dream to spend 80 hours per week coding without having to communicate with others... but no one is an island...
If I was OpenAI I'd want to quit today not because I want to follow Sam, but because the same bullshit that people left Google Brain et al. for has managed to catch up with them at OpenAI. It's a shame honestly, it was so exciting to see a company finally free itself of the shackles of navel gazing and just build things, but it seems like that's over today.
The narrative I am referring to is a simple one: take back what should be his. A.k.a, revenge. That is for sure a strong word.
People live on stories (collective imagination, if fancier), and what they like most is a wronged prince/princess taking back his/her crown. It is the same with Taylor Swift rerecording her albums. The story potential will feed itself until it is fully realized. The OpenAI board has committed a historic misstep, but maybe that is indeed what it is designed for: they hold no stake in the business, so their view can't be judged under a business microscope. But money will really dislike it.
It's a bubble if the valuation is inflated beyond a level compared to a reasonable expected future value. Usefulness isn't part of that. The important bit is 'reasonable', which is also the subjective bit.
Have you used GPT-4? Its reasoning capabilities match human ability. The more you think about it, the scarier the reality becomes. GPT-4 can reason through any mental exercise as well as a human. The rest of the work to make it autonomous is simple in comparison.
1) We should agree on what we mean by smart or intelligent. That's really hard to do, so let's narrow it down to "does not hallucinate" the way GPT does, or, at a higher level, has a subjective understanding of its own that another agent can reliably come to trust. I can tell you that AI/deep learning/LLM hallucination is a technically unsolvable problem, so it'll never get "smarter" in that way.
2) This connects to 1). Humans and animals of course aren't infinitely "smart"; we fuck up and hallucinate in ways of our own, but that's just it: we have a grounded truth of our own, born of a body and emotional experience that grounds our rational experience, or the consciousness you talk about.
So my claim is really one claim: that AI cannot perform the same tasks, or reach the "true" intelligence level of a human in the sense of not hallucinating like GPT, without having a subjective experience of its own.
There is no answer or understanding "out there;" it's all what we experience and come to understand.
This is my favorite topic. I have much more to share on it including working code, though at a level of an extremely simple organism (thinking we can skip to human level and even jump exponentially beyond that is what I'm calling out as BS).
Google search reminds me of Amazon reviews. Years ago, basically trustworthy, very helpful. Now ... take them with a tablespoon of salt and another of MSG.
And this is separate from the time-efficiency issue: "how quickly can I answer my complex question which requires several logical joins?", which is where ChatGPT really shines.
If they waited for the GPT-5 pretraining to finish, then they minimized the cost of losing Altman and the engineers.
The whole secrecy, compartmentalization, and urgency of their actions can only be explained by their backs being against the wall. Otherwise, if it was about ethics, future plans, or whatever political issue, it would have happened at a slower pace.
I hope they involved their investors beforehand, but I don't know if they had time; OpenAI probably still exists and evolves on other people's money. But what else could they do?
GPT is very useful as a knowledge tool, but I don’t see people going there to make purchasing decisions. It replaces stackoverflow and quora, not Google. For shopping, I need to see the top X raw results, with reviews, so I can come to my own conclusion. Many people even find shopping fun (I don’t) and wouldn’t want to replace the experience with a chatbot even if it were somehow objectively better.
it's like you wrote this comment yesterday, and not in the context of what has happened
The dataset is the crucial bit of OpenAI; that takes a lot of time and money to make. So it's perfectly possible for OpenAI to carry on innovating without these people.
But equally, it could turn to shit.
However, Sam isn't Jesus, so it's not like he's going to magically make another successful company.
I guess he was trying to seem cool and approachable, but to me it just reads as unprofessional. I find it hard to take a piece of writing seriously when it’s formatted that way.
Madry is actually spelled Mądry, which translates to "Wise".
Peak is perhaps the wrong word; local maximum before falling into the Trough of Disillusionment.
Did you see the recent article about a restaurant changing its name to "Thai Food near me"?
Did nobody read this damn article? It's 5 paragraphs long, take 1 minute.
I guess that's what happens if you're working 60-100 hours per week coding ChatGPT and you don't have time to waste. Sometimes I feel sorry for these kinds of people, and other times I envy them.
You really have to have a passion for coding to put in the hours and be very good at it. Incredibly rare, believe it or not. Lots of people think they are good coders but this is another level. Proof is in your commit/code review count/async comms being 10x-100x of everyone else in your org, and it's clear you're single-handedly enabling delivery of major projects earlier than anyone else could. Think of the pressure of doing this continuously.
What if he spends 4 hours a week coding because he's so good at coding?
Way more impressive.
It might indicate intelligence, and people who are extra busy and thus not wasting their time, but depending on the position and circumstances, a nicely written text is essential.
An exodus of staff might provoke a very quick release of a lot of code, pipelines, and training data by the remaining board before investors' lawyers have a chance to stop it.
They wouldn't have been able to do that even before Sam's dismissal
I'm not saying this will happen, but it seems to me like an incredibly silly move.
I predict the board will be fired, and Sam and the team will return and try to contain the situation.
https://chat.openai.com/share/c35e3fd1-d94e-477b-a331-b14384...
In addition, it would likely be some time, possibly years, before it would be ready for production.
Perhaps recent events have just brought that more clearly into focus for you.
People ceasing to use Google for the small stuff will be the beginning of the end of Google being the mental default for searches.
You have to watch out with that. I've seen whole projects pushed through by management where no one else was involved enough to review normally, but everyone had had an interaction that implied they had seen only the tip of the iceberg of problems with it.
I think the main reason I find it unprofessional is that it is such a distraction. Professional communication should be clear and “get out of the way” so that the focus can be on the ideas being discussed.
However! The best engineers I've been around do work a lot and they like it.
Not that I think AGI is possible or desirable in the first place, but that's a different discussion.
If they try to integrate OpenAI, they will suffocate it.
And people who are invested into “social impact” still care about their wealth and well-being despite living luxurious lives by the standards even of the developed world, let alone all other countries.
The rich will ALWAYS get their piece of the pie, and once they've had their fill, we'll be left fighting for the crumbs and thanking them for their generosity.
AI won't solve world hunger, it will make millions of people jobless. It won't stop wars, it will be used as a tool for the elite to spread propaganda. The problems that plague society today are ones that technology (that has existed for decades) can fix but greed prevents it.
So can I.
And yet, people don't consider me an existential threat.
Mostly because I do not have nukes.
If he didn't manage to keep OpenAI consistent with its founding principles and all interests aligned, then wouldn't booting him be right? The name OpenAI had become a source of mockery. If Altman/Brockman take employees for a commercial venture, it just seems to prove their insincerity about the OpenAI mission.
edits: >>38314420
I imagine when the full story comes out all these theories and speculations will be ignored and we will literally forget ever being interested in them!
Imagine being someone who is just being quick, while everyone around you is inferring some motive like "being cool", when really you aren't thinking of their perceptions at all.
Being good at something lies in the result and/or appreciation of your work by skilled peers, which also seem to be there.
We could try to think a little more deeply about things than "let jesus take the wheel"
If there was some real misconduct by Altman, others wouldn't be resigning with him, would they?
But I think one thing is certain, he WILL create another AI company. It seems very unlikely he would quit the business.
He was the one who partnered with Microsoft and turned it from a non-profit to a for-profit company.
Are there any good examples of this? I struggle to use ChatGPT, maybe I'm using it not cleverly (or deeply) enough.
Current GPT doesn't pose a physical threat.
But, take something like the movie "Colossus". Where they did give control of nukes. That was scary.
Now, go watch the Netflix show about AI. This GPT stuff is so far just fun apps.
The military already has AI that can out-pilot a human in an F-16; do you think it will stop there? That is probably already old news.
Can we get a pllleeeeeeaaaase????
He's clearly a terrible programmer and/or a terrible chairman, and to be honest this news says he's at least 1 of the 2 above.
> Can they give reassurances about their products going into the future?
emotional comfort is not the thing you should be looking for mate.
I call the first hour true productivity. The last part is, from the perspective of the end product, simply wasted time. That's very similar to the boilerplate code everybody agrees is a necessary evil in programming.
If AI allows us to reduce #2 (the wasted part), it truly will have a positive impact.
Each to their own I guess. For me the attempted use of correct punctuation is a reassuring social norm. It reassures me that those entrusted with power are able to pay attention to details and conventions.
To put it another way, if we can’t trust these guys to even write a message properly, how can we trust them with the stewardship of the most advanced AI the world has ever seen?
That's a nice insight. I have been in that place many times, I was overfitting on my own imagination.
I'm not claiming to know more than everyone else, but Sam was IMO just a face. Greg is a backend engineer; that's less important than the actual research.
[1] On one hand they serve Microsoft and developers, building digital AI infrastructure.
[2] On the other hand, they seem to try and want to build some monopoly and destroy as many startups (and companies) as possible.
In the last developer day, they did a half-assed job of both. GPTs suck. The OpenAI Assistants don't have enough documentation to be used and therefore equally suck.
I really hope, for the sake of the AI community (and economy), that [1] is the outcome. I really do not know how they could scale both. As an AI startup, I have a love-hate relationship with GPT and am eager to grow independent of them, because how can I trust a company doing [2]?
But yeah, since Sam and Greg were apparently pushed out because they were building too good of a business any OpenAI employees that were aligned with them are likely to jump ship and join them, and OpenAI will revert to the non-profit research lab it started out as.
Who's supposed to pay for that?
I don't see any comments here claiming that it's something that most people could do well.
I go to Amazon if I want to find a book or a specific product.
For the latest news, I come here, or Reddit, or sometimes twitter.
If I want to look up information about a famous person or topic, I go to Wikipedia (usually via google search). I know I can ask ChatGPT, but Wikipedia is generally more up to date, well-written and highly scrutinized by humans.
The jury’s still out on exactly what role ChatGPT will serve in the long term, but we’ve seen this kind of unbundling many times before and Google is still just as popular and useful as ever.
It seems like GPT’s killer app is helping guide your learning of a new topic, like having a personal tutor. I don’t see that replacing all aspects of a general purpose search engine though.
Maybe Ilya discovered something as head of AI safety research, something bad, and they had to act on it. From the outside it looks as if they are desperately trying to gain control. Maybe he got confirmation that LLMs are a little bit conscious, LOL. No, I am not making this up: https://twitter.com/ilyasut/status/1491554478243258368
When it comes to sports, it's fairly obvious what outliers look like, and it's well accepted that they exist. I don't see a single reason to believe that the same would not be true in every other walk of life, or to think that OpenAI just got lucky (considering how many people are trying to get lucky right now with less success in this space).
There are extraordinarily effective people in this world; they are sparse, and it's probably not you or me (but that's completely fine with me, I am happy to stretch myself to the best of my abilities).
> As a part of this transition, Greg Brockman will be stepping down as chairman of the board and will remain in his role at the company, reporting to the CEO.
[1] https://openai.com/blog/openai-announces-leadership-transiti...
https://en.wikipedia.org/wiki/Embrace,_extend,_and_extinguis...
Literally people thought they were saints because of VSCode FFS.
In my company, 80% coding for a senior SWE is rare. But if they deliver, management will give them some slack on the other evaluation axes. I have colleagues who work almost by themselves on new high-impact projects. This has many benefits: no need to argue about designs or code reviews (people just blindly approve their code). The downside is that you need to deliver.
The board of OpenAI should have been replaced by adults a long time ago.
Satya has been humiliated and will be furious.
The board and Ilya will all be gone within a month.
If an AI said that, we'd be calling it "capability gain" and think it's a huge risk.
Anyway, I was providing context. The way I see it, since they speak the same language natively, they might have had a frank conversation about this at some point and decided to resign in concert.
I bet he'll train models on copious amounts of synthetic data made with GPT-4. There are lots of datasets in the open. That makes catching up easier.
No public-facing model can be protected from data exfiltration and distillation. All deployed skills leak; your competition will replicate them with less effort. And they only need to leak once, and every subsequent model can inherit the skill. I think the first movers paid a high price for being first, and will quickly see their advantage erode. Latecomers will catch up and find AI easier to work with. The difference is made by the great fine-tuning datasets that are in the open, a growing lake of distilled abilities.
Another latecomer advantage is benefiting from significant innovation in the engineering part: flash attention, quantization, continuous batching, KV caching, LoRA, and more.
The new AI era will be more egalitarian. Catching up is much easier than discovering, and we can run AI privately, unlike search engines and social networks. You can't exploit SOTA advantage at scale; being first is a fleeting advantage. The moment you go out in the open, everyone replicates.
Maybe one reason this is happening is because AI skills are very composable. Any addition to the skill repertoire already fits with other skills. This makes open sourcing skills very attractive. Of course, the datasets are what is being open sourced.
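Mechanically, the distillation loop being described is simple. A rough sketch, assuming the v1 openai Python SDK; the seed prompts and output file are made up for illustration:

    import json
    from openai import OpenAI

    client = OpenAI()
    seed_tasks = [
        "Explain KV caching in two sentences.",
        "Write a SQL query that deduplicates rows by email.",
    ]

    # Ask the stronger model for answers, then store the prompt/response
    # pairs in the chat format commonly used to fine-tune smaller models.
    with open("distilled.jsonl", "w") as f:
        for task in seed_tasks:
            resp = client.chat.completions.create(
                model="gpt-4",
                messages=[{"role": "user", "content": task}],
            )
            pair = {"messages": [
                {"role": "user", "content": task},
                {"role": "assistant", "content": resp.choices[0].message.content},
            ]}
            f.write(json.dumps(pair) + "\n")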
Do users care about that? I care about feature stability and the avoidance of enshittification.
That's why I usually prefer open models to depending on OpenAI's API. This drama has me curious about the outcome, and if it leads to more openness from OpenAI, it may gain me back as a user.
Edit: apparently ALDI vs LIDL is an urban myth. It's ALDI that was split in two.
I can't imagine him doing that. He cares about getting well aligned AGI and profit motives fuck that up.
Of course, not for the petty reasons that you list. Sama has comprehensively explained why the original open-source model did not work, and so far the argument (it's very expensive) seems to align with a reality where every single semi-competitive available LLM (since they all pale in comparison to GPT-4 anyway) has been trained with a whole lot of corporate money. Meta side-chaining "open" models with their social media ad money is obviously not a comparable business, or any business. I get that the HN crowd + Elon are super salty about that, but it's just a bit silly.
No, Sam's failure as CEO is not having done what is necessary to align the right people in the company with the course he has decided on and losing control over that.
A fitting typo!
Show HN: FerenGUI - the ui framework for immoral arch-capitalists
Every dark-pattern included as standard components. Upgrade to Pro to get the price-fixing and hidden monero miner modules.
Apple surely doesn’t have a cluster that at all compares with the big cloud giants.
Oracle and AWS are really the only clouds left, and Oracle is already renting GPU compute to Microsoft.
If it were something other than work, it would be seen as a problem. 100 hours per week doesn't leave room for anything else other than basic human needs.
"They like it": well, all addicts like what they're addicted to; it doesn't mean it's healthy.
(it's about just one letter - a vs ą)
If that's the way he wants to be known, it's up to him.
Most programming work in any project and company is mundane, so I do agree someone taking care of all that without whining is actually extremely valuable. I couldn’t do it.
Still doesn’t really make sense to put him on such a pedestal like many in this thread. It seems like a cultural thing in the US to overvalue individuals, and downplay the importance of good teams.
But it is your right to assume what he works on from reading his tweets, and to leap from that to how this is an American cultural thing, though.
- he makes misleading statement to board
- board puts this in regulatory filing (e.g. SEC)
- board finds out this is a legally critical statement
- they _have_ to fire him in order to avoid becoming accomplices.
The reverse of the other Sam situation.
> OpenAI non-profit --owns--> Holding Company for OpenAI non-profit
That makes no sense. How does a non-profit "own" a holding company that owns the non-profit?
I don't know this guy in particular so I have no clue though.
Apple has a lot of cash to throw at it. Question would be if Apple is even interested in it.
What also happens is regular developers (like me) want the same treatment, as if they could deliver end-to-end "if they only let me", but many times they can't, and actually need the structure and processes of a team. I've seen this freedom not work at all.
If you're that much of a narcissist that you'd risk your health to look better, I need to take everything else you do and say with a big grain of salt.
Could a nuclear energy company be at escape velocity to fusion because they are the best at fission? I wouldn't think so
To a person who is not an expert at prompting LLMs, ChatGPT is basically a shitty version of Bing Chat (aka Copilot). Especially the free version - it's an outdated model which cannot search the internet (or does it strictly worse than Bing Chat).
Why does OpenAI pay for people to access a shitty version of Bing Chat?
There's only one possible reason for this: raising money at a very high valuation. They burned through hundreds of millions of dollars to show VCs that they have 100+ M users (and growing rapidly!) so they could raise at a ~$100B valuation.
The board did become un-boardable in any future company, but they are not resigning.
However, I would not be surprised if Microsoft take advantage of this unexpected situation for their gain.
They already shifted goal posts and they’ll do it again. AI used to mean AGI but marketing got a hold of it. Once something resembling AGI comes out they’ll say well it’s not Level 5 AGI or something similar.
Microsoft doesn't even own a majority stake in the for-profit, much less anything at all in the non-profit that ultimately controls everything.
Sounds like I did? I think this is kind of a bad take, no one is quitting OpenAI before Sam has even had 24 hours to process being fired to help him take on his revenge arc.
Instead, it sounds like people are angered by the process and powers that led to him being fired like that, which is extremely understandable given the history of OpenAI. People forget half of OpenAI's competitive advantage was just not letting themselves be mired in self-sabotage, the exact kind their board just demonstrated today.
The truth is there isn't any strong evidence one way or another, only your existing worldviews.
Partnering with Microsoft and closing access both have profit-driven and ideology-driven explanations, and we don't have strong evidence it's one and not the other.
We barely understand how consciousness works, we should stop talking about "AGI". It is just empty, ridiculous techno-babble. Sorry for the harsh language, there's no nice way to drive home this point.
Ironically some of us haven't gotten used to this yet either.
But it does look very weird. The full stop at the end of a capital-less sentence followed by an ellipsis is what sticks out to me; seems very tryhard.
Let's first define "coding" before we jump into the details: "coding" for me is sitting at your computer doing the work. It's not getting a coffee, chatting with a colleague, going to the toilet, or reading Hacker News. So if you're reading this and claiming to do 100 hours per week of productive time, I call bullshit on that.
Being at the office for 60 to 100 hours, sure, I believe that.
When I was studying for exams at University, I did more than half of the work before noon. The rest was spread out over the afternoon and evening. At 20:00 my brain was dead. I could read a sentence, and nothing would stick. Read it again, impossible to process it.
So I always wondered how these other students could study until 2 in the morning. Well, it turned out they didn't do shit in the morning. That's how they studied "all the way into the night".
Now back to my programming career: At my best I do 4 to 6 hours of concentrated coding per day. At my best, nobody seriously outperformed me. So if you claim to do more than x2 the work that I'm doing, I would love to see the output of that.
People like Cal Newport basically confirm what I've seen over the years. So do habits of the most famous authors.
Now, I can be convinced that it's actually possible. Take Carmack, who claims to do 12 hours a day. He doesn't seem like a bullshitter to me. So either he's counting time that I wouldn't count, like dungeon-mastering a D&D game, or playtesting, or whatever, or he's actually a superhuman work machine. He worked with Abrash, who seemed to keep saner hours, and in the end Carmack had high respect for Abrash's output.
So yeah, if you know people who can actually do 14 hours of high concentrated coding 7 days out of 7, I would love to hear it and get some kind of confirmation that they're not browsing reddit and HN 50% of that time. And if you're reading this and claim to do 14 hours a day of concentrated work, I call bullshit on that you HN addict!
Parting ways with OpenAI might be the only option if the org remains firm on the direction it has chosen. Build internally to reach capability parity and then accelerate ahead of them while slowly rolling out of the agreement with OpenAI, reallocating those previously committed resources internally.
"Due to the actions of OpenAI's board, Microsoft had no choice but to defend its investment in this revolutionary technology." The PR wire writes itself.
Microsoft has a minority stake in the for-profit subsidiary that is wholly controlled by the 501(c)(3). All investors (and employees) in the for-profit have to agree to the operating agreement, which specifies that the for-profit is not actually obligated to make a profit and that it is all secondary to the charter the non-profit operates under.
https://openai.com/our-structure
https://openai.com/charter
There is not a higher power than the board of the non-profit.
- an equalizer (entire team treated the same)
- a confidence booster (approval of others gives feeling of having done well)
- a way of distributing information (everyone is aware of all other team work)
You can run a team as a form of "competitive sport" and race everyone against each other: whoever churns out the most "wins", and helpfulness, non-code work, and cross-team work are "distractors" from that objective, hence undesirable and definitely not rewarded.
If the personalities in your team are "right", then this can work, and by striving to best each other, all achieve highly. Have a single non-competitive person in there, though, and it'll grate. Forcing a collaborative element into the work (whether by approval/review procedures, by things like mentoring/coaching, or even just by forcing briefings to the team on project completion) creates a balance between the "lone crusaders" and the "power of the masses". Make the loners aware of, and contribute to, the concept of "team success", and give the "masses" insight into the contributing factors of high individual performance.
First sentence on https://openai.com/about
Compare ChatGPT to a dog: a dog's experience of an audible "sit" command maps to that particular dog's history of experience, manipulated through pain or pleasure (i.e. if you associate treat + "sit", you'll have a dog with its own grounded definition of sit). A human also learns words like "sit", and we always have our own understanding of those words, even if we can also agree on them together, to certain degrees, along lines of linguistic corpora. In fact, the linguistic corpora are borne out of our experiences, our individual understandings, and that's a one-way arrow, so something trained purely on that resultant data is always an abstraction level away from experience, and therefore from true grounded understanding or truth. Hence GPT's (and all deep learning's) unsolvable hallucination and grounding problems.
Personally, I find it much easier to get lost in time and focused when I am working on something challenging. Time just flies by.
If I have to work on something boring / routine / repetitive, I find it much harder to focus, and time goes by so slowly.
Then my brain decides to look for ways to automate what I am doing. Perhaps a DSL or .. or .. o .. No work, remember work, but I could hmm if I write a Perl script i, No work you need to work, but it woud be work if i cold only
(I am diagnosed with ADHD)
See: >>38312704
The topic at hand is "how did a high-level engineer get to focus on programming". And I am saying that the reason has more to do with his influence and role within the organization than anything else.
It's completely illogical to say that just because an AI company isn't open source it can't strive to also be "safe".
In the world where Sam Altman leads OpenAI to market dominance and eventual acquisition by Microsoft for $400B or whatever, he obviously would represent an important part of what Microsoft would be buying and would be compensated accordingly.
Presumably it would require enormous startup capital?
And Microsoft are risk-averse enough that I think they do care about AI safety, even if only from a "what's best for the business" standpoint.
Tbh idc if we get AGI. There'll be a point in the future where we have AGI and the technology is accessible enough that anybody can create one. We need to stop this pointless bickering over this sort of stuff, because as usual, the larger problem is always going to be the human using the tool than the tool itself.
Maybe not the individual users, but the enterprises/startups that build around OpenAI.
We can debate whether or not that was wise of them, but because of the charter and structure of OpenAI it was never on the table.
Since Shank's comment didn't specify what they meant, I should have made a more charitable interpretation (i.e. assume it was "weak AGI").
For a certain definition of "software": when doing only one training run costs an 8-digit sum (requiring hardware one order of magnitude more expensive than that to run), I kinda dispute the "all they do is software".
It's definitely not "all software": a big part of their advantage compared to actually free and open models is the insane hardware they have access to.
The free and open LLMs are doing very well compared to OpenAI once you take into account that the cost to train them is 1/100th to 1/1000th what it costs to train the OpenAI models.
This can be seen with StableDiffusion: once the money is poured in training the models and then the model made free, suddenly the edge of proprietary solutions is tiny (if it even exists at all).
I'd like to see the actually open and free models trained on the hardware used to train OpenAI: then we'd see how much of a "software edge" OpenAI has.
And my guess is it'd be way less impressive than you make it out to be.
Also, I hope my response to tempestn clarifies a bit more.
Edit: I'll be more explicit by what I mean by "nuance" — see Stuart Russell. Check out his book, "Human Compatible". It's written with cutting clarity, restraint, thoughtfulness, simplicity (not to be confused with "easy"!), an absolute delight to read. It's excellent science writing, and a model for anyone thinking of writing a book in this space. (See also Russell's principles for "provably beneficial AI".)
I understand that sometimes it is worth it, to create a great product, solve something important, or just for fun. But beware.
The board had to act fast to fix it. And OpenAI changed the enterprise API pricing to be paid up front, for cash flow reasons related to that.
If I know what to write, and I just have to crunch out pretty straightforward code, I can do more hours (nowhere near 12 hours though, maybe 8 at best).
I can imagine the work your dad did didn't include juggling a big complex system in his head, which seems to require a lot of mental energy.
That's basically also what Carmack states, that you can reach 12 hours if you plan your work to include some easier tasks for that day. But then again, I was never able to really apply that strategy.
Thanks for your take on it! :)
So basically, my brain is lazy and tries to find a way to stay in that state.
> Once you have their money, you never give it back.
There is no official Rule 2, so the non-canon one [1] is as good as any, and there's the unwritten rule [2]:
> When no appropriate rule applies, make one up
Means they probably would have been covered either way.
[0] https://memory-alpha.fandom.com/wiki/Rules_of_Acquisition#Ap...
[1] https://memory-alpha.fandom.com/wiki/Rules_of_Acquisition#Of...
[2] https://memory-alpha.fandom.com/wiki/Rules_of_Acquisition#Un...
Like, sure you’d be able to live off your savings if you were allowed to stay in the country, but most countries have a short limit on a tourist visa and then you’d need to leave…
What I’m trying to say is that it is an addiction like any other and should be treated as such, not glorified.
It would seem like you're talking about what "software edge" OpenAI will have in the future, when others have caught up, while the parent is talking about the existing "software edge" OpenAI has today, which you seem to implicitly agree with, as you're talking about OpenAI maybe not having any edge in the future.
All of a sudden their amnesia stopped huh?
Hypocrites and virtue signallers, the whole board.
They may be geniuses, but AGI is an idea whose time has come: geniuses are no longer required to get us there.
The Singularity train has already left the station.
Inevitability.
Now humanity is just waiting for it to arrive at our stop
You will see the distance to be travelled and say, let's build an airplane.
But incentives in most companies demand "progress", hence most projects start by piling the car high and driving off. It's when they are attaching floats to the car and paddling across the Atlantic, shouting progress reports back to shore, that the value of automation comes to mind.
Don't worry about the ADHD; embrace it. (My hint: if the boring has to be done, make it the only thing; have nothing else.)
In relation to other comments here. There is "coding" and there is "God's spark genius of algorithms" kind of work. This is what made the magic of OpenAI. Believe me, those guys were not "just coding". My bet is that it could be all about some research directions that were "shielded" by Sam.
I've started a directory of GPTs on gipety.com so personally the longer it takes for the official store to be launched the better ;)
I think AGI is going to arrive via a different technology, many years in the future still.
LLMs will get to the point where they appear to be AGI, but only in the same way the latest 3D rendering technology can create images that appear to be real.
This combined with it being possible with DNA is a very rare view. How did you come by it?
It is the reason why CEOs (not Sam, apparently) are usually compensated in stock options. Golden parachutes are a sort of severance for when the CEO gets fired immediately by the new owners, e.g. Twitter.
He's totally the guy who made OpenAI into ClosedAI, but money was clearly not his motivation.
Because most 10x engineers recognized as such by management are characterized chiefly by building out shoddy software extremely quickly that only they can understand.
In a similar dynamic, Doctors that are scored highly by patients often have pretty bad medical outcomes.
I know that lots of people have personal stories of using ChatGPT but I was hoping something publicly reported on or like a showcase of truly impressive usages somewhere.
Monetary: Promotions and payouts (now or future)? Equity is not the only way
I think the path to AGI is: embodiment. Give it a body, let it explore a world, fight to survive, learn action and consequence. Then AGI you will have.
As far as I can tell, all three of them are of Polish descent. For all we know they might have decided to resign together even if only one of them had a personal issue with OpenAI's vision. We will find out soon enough whether they will just found their own competing startup, based on OpenAI's "secret sauce" or not.
It's like a real-life example, i.e. what would you do if you were in the CEO's position?
It’s ok to not enjoy it yourself. Different strokes for different folks.
I don’t think it should be culturally championed but I don’t see it as an immediate red flag especially in the case of a bleeding edge company like OpenAI.
I asked him whether as a boy he had speculated much about his gift. Had he asked himself why he had this special power? Why he was so bright?
Dyson is almost infallibly a modest and self-effacing man, but tonight his eyes were blank with fatigue, and his answer was uncharacteristic.
“That’s not how the question phrases itself,” he said. “The question is: why is everyone else so stupid?”
Nothing to do with being Polish in particular. Only that there is a connecting element that might help explain why these 3 decided to resign together on the same day.
For example, I have an assistant which is supposed to parse an uploaded file and extract useful info from it. To use this assistant, I create a thread and a run and attach it to the assistant with a different file ID. About half the time, the assistant simply throws up its hands and says it can't parse the file I supplied with the thread. Retrying a few times seems to do the trick.
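For reference, the retry pattern being described looks roughly like this, assuming the v1 Python SDK's beta Assistants endpoints; the assistant ID, file name, and prompt below are placeholders:

    import time
    from openai import OpenAI

    client = OpenAI()

    def run_with_retries(assistant_id: str, path: str, attempts: int = 3):
        uploaded = client.files.create(file=open(path, "rb"), purpose="assistants")
        for _ in range(attempts):
            thread = client.beta.threads.create(messages=[{
                "role": "user",
                "content": "Extract the useful info from the attached file.",
                "file_ids": [uploaded.id],
            }])
            run = client.beta.threads.runs.create(
                thread_id=thread.id, assistant_id=assistant_id)
            while run.status in ("queued", "in_progress"):
                time.sleep(2)  # the beta API has to be polled
                run = client.beta.threads.runs.retrieve(
                    thread_id=thread.id, run_id=run.id)
            if run.status == "completed":
                return client.beta.threads.messages.list(thread_id=thread.id)
        raise RuntimeError("assistant failed to parse the file on every attempt")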
I don't wanna be flippant here: obviously, having easy access to money and a good standing with the right people makes things A LOT simpler, but other people could have reasonably convinced someone to give them money to build the same software. That's what VCs do, after all.
Regarding the rest: Feels very much like a different topic. I'll pass.
In all three languages I frequently use (Common Lisp, Python, and Racket) it is easy to switch between APIs. You can also use a library like LangChain to make switching easier.
For people building startups on OpenAI specific APIs, they can certainly protect themselves by using Azure as an intermediary. Microsoft is in the “stability business.”
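Concretely, the v1 openai Python SDK exposes the Azure-hosted models behind the same client shape, so the swap can be a couple of constructor arguments; the endpoint, key, and deployment name below are placeholders:

    from openai import OpenAI, AzureOpenAI

    direct = OpenAI()  # talks to api.openai.com
    azure = AzureOpenAI(  # same client shape, Microsoft-hosted
        azure_endpoint="https://my-resource.openai.azure.com",  # placeholder
        api_version="2023-05-15",
        api_key="...",
    )

    # Identical call against either backend:
    for client in (direct, azure):
        client.chat.completions.create(
            model="gpt-4",  # on Azure this is your deployment name
            messages=[{"role": "user", "content": "ping"}],
        )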
I really don't buy that for a second. Most of OpenAI's value compared to any competitor comes from the money they spent hiring humans to trawl through training data.
I’m not making any judgement about whether they made the correct decision. I’m just stating that everyone keeps talking about this as if it were a normal company structure and it absolutely is not.
Everyone seems to have lost their mind missing this point.
Management and leadership of a team has a way bigger impact than any single individual contributor could ever have. Humans are generally limited not by intelligence but by motivation and vision. Directing people to achieve what you want is what allows the scaling of innovation.
Hero worship is a very human thing, but unscientific.
Technically speaking, only because PR has been replaced by ChatGPT :-)
I think people seriously underestimate how hard it is to get the GPU compute/etc that is necessary to be useful here. Lead time would be years, easily, even if you had the money. NVidia can't change this for you even if they liked you - they literally can't build chips fast enough.
Depending on the exact agreement, Microsoft may have just given them credits/free use, and the part where they make sure the resources are actually available is just good faith that may no longer exist.
That's one example.
Even beyond that, you are assuming they can only do super-direct things, but it turns out to be fairly easy to make things very uncomfortable for people/companies indirectly.
I suspect AGI is quite possible, it just won't be what everyone thinks it will be.
Do you know if there are any projects working on this? Even something like a high quality json-tuned base model would go a huge way toward replicating OpenAI's current product.
If money was his motivation, why wouldn't he spend his time building a company like that in the first place? As the head of YC, I don't think he would've had any trouble raising for anything, even prior to OpenAI.
Also, it's not really parent's "point" as you claim; they quite explicitly talk about compensation in case of acquisition.
They just hadn't -- and still haven't -- figured out how to commercialize it yet. I don't think they'll be the ones to crack that nut either. IMO they are too obsessed with "safety" to release something useful, and also can't reasonably deploy a service like ChatGPT at their scale because the costs are too high.
With OpenAI imploding this whole race just got a lot more interesting though...
A similar (but even more complicated) case is currently going on over at Sculptor Capital Management, where former management is suing current management because they have chosen to go with a "worse" acquisition deal that would let current management stay on-board. This is despite shareholder approval to the "worse" deal. https://www.pionline.com/hedge-funds/ex-sculptor-executives-...
In fact, to prevent this situation is exactly why golden parachutes exist.
It's also totally disproportionate compared to what Sam would've gotten if he owned equity.
Bard was likely not trained on copyrightable data; that makes it safe from lawsuits but also removes most of the use cases people want ChatGPT for.
And it isn't just about lawsuits: since Google needs to keep advertisers happy or they will leave like they left Elon Musk, they can't afford to jeopardise that with questionable launches.
Sam also talked about the dangers of AI. It’s likely that he did so to encourage a regulatory moat around OpenAI.
I’ve seen this expression a lot recently and it baffles me.
The word you are looking for is “effort,” or if you prefer adjectives, maybe something like “difficult.”
Most humans cannot write as well, and most lack the reasoning ability. Even the mistakes ChatGPT makes in mathematical reasoning are typical human behavior.
Well, here comes AI to take those jobs. What happens, you think? Where do we go next? Do you imagine we'll all just sit idle and give out orders for the AI to fulfill? Recall: the system throws away parts that are not useful. And we're not better at orchestrating this system than we are at implementing it. Most people already struggle to handle the complexity of modern life. So they'll be thrown out.
Now think what happens with a society where most people are unemployed, unhappy, and hungry, and businesses are mostly, not ENTIRELY mind you, but mostly self-sufficient machinery that does the thinking and does the footwork?
But even that doesn't describe the problem alone. It's more of an end game. Before this we'll see not-so-superior AI pollute our web, media, public space with quickly generated content, as actual artists and thinkers are displaced, unable to compete. Our culture will die first. And then, eventually, we'll start dying.
As I'm describing this, note I don't say this from place of fear. I don't fear this. I see it more as an obvious place for our civilization to go. We can't help it, because we don't decide where this civilization goes any more than your cells decide where you go, or any more than the atoms of your cells "decide" where the cell goes.
We're not in control. That's just evolution.
Say, when you're sick and you have cancer, those cells are part of you, but they harm you, so you cut them out, apply chemotherapy, and then if there's a prosthesis to substitute the organ you removed with a machine, you do it, and you don't think twice about it.
What makes you think our society as a whole is different? If humans are not good at what society needs, it cuts those people out, and replaces them with working machines. It's so plainly obvious. We pay lip service to human rights and the value of an individual, but clearly that's not what we end up doing. A politician is chasing money and power, and they don't mind starting wars to get them if they can. A business chases profit, so they don't mind automating away any employee they can. It's always been this way. So now that you can replace the human thinkers, businesses won't need human thinkers. And since there's nothing left humans are good at, society won't need humans.
Distinction without difference
1) Moves fast, flexes their authority to sweep small stuff under the rug until it is out of scope and can be "fixed real quick" later. Often leverages many subject-matter experts through effective and persistent communication, and learns quickly enough to get PRs through the door (ones that sometimes need "quick" fixes later). Enjoys selecting items that benefit their career the most, at the expense of others on their team. Mentors only enough to onboard and increase their team's yield, not to aid their careers. Fueled by the recognition and validation of peers through PR/project completion.
2) Gets shit done, is the SME themself. Solo code cannon, but PRs go in clean, beautiful to look at. May not get along well with some, but not necessarily abrasive to work with, especially as part of their direct team. Can be a great altruistic mentor if they spare 5% of their time. Enjoys what they do and the technologies they work with. Fueled by personal satisfaction in their achievements, and in uplifting their team.
A huge portion of the compute they are using is also directly doing inference for Microsoft products, so that's another dimension to all of this.
You also have to remember that ultimately Microsoft had to sign an operating agreement that states that the primary duty of the for-profit is the mission and charter of the non-profit and that all other things are secondary to that.
Not that any of that makes Satya happy, but it does severely limit his options outside of cutting off his nose to spite his face. I think the primary outcome is in significantly reducing the chances Microsoft continues to invest.
More and more competitors are rapidly entering this space as well. OpenAI has a significant first-mover advantage, but I don't know that it is insurmountable, and I doubt investors are confident that it is either. That means they're even less likely to have infinite runway.
So I'm not sure there's personal/moral success at this point in the story for the board to begin with.
Monetary: 3/4 of the board is independent. They are not actually employed by OpenAI. There's nothing to promote them to, and nothing in the charter of the non-profit that would give them payouts.
They are? Here's my submission of your link.
Maybe @dang can put it under your name?
All research that I have seen disagrees with this take. Ask GPT-4 a few basic block world problems and see for yourself if it can match human ability.
Have you ever tried to make a Python program to do exactly that?
I only used a couple of sentences to build my prompt...
Surely all addicts love the thing they’re addicted to, but that doesn’t make it ok, even in the case where their addiction doesn’t ruin their lives short or mid term.
https://innovationorigins.com/en/openai-and-googles-bard-acc...
The OpenAI 501(c)3 already spun up a for-profit company in 2019 to do all the commercial work and take VC money.
For-profit will always beat non-profit because it's run by people: once they see money, they want more, not less.
This was meant to happen when it changed its structure to cap profits at 100x. If Microsoft invests $10 billion, OpenAI can make $1 trillion in profits before giving a dime to its non-profit. If it makes $1 trillion, it can just ask someone to invest another $1 billion and wait until it makes another $100 billion.
Sam, Brockman, and the others who resigned will start a new for-profit company. Sam will use his influence to get funding and contracts. They know the tech, and with the know-how of everything, they will build a better model. And OpenAI will be forced to give up and work with them, as it's part of their founding policy.
For very profitable things. This isn't very profitable, which is why I added that part to my comment. Google has a very good understanding of what they get sued for and how much those lawsuits cost; if it is profitable anyway, they go ahead.
So many people overstate their ideas, which is basically deceit. People who represent guesses as fact are either lying or have a special kind of arrogant ignorance.
In short, either they didn't, or were unable to, create a favorable enough environment for this to flourish.
Scaling of training was the challenge back then (of course).
Google was already too corporate. Please remember that Sergey Brin and Larry Page were no longer at the steering wheel back then. I have been told that it was also a cultural issue linked to "delivering brilliance". Simplifying: Google promoted tiny teams or individual contributors building things that had to become a massive success quickly. OpenAI took a number of hand-picked brilliant people and let them work together on a common goal, silently, for quite some time.
Some companies just have an unfair advantage. A certain magic. And OpenAI's magic is at risk right now.
And those which are carried in our pockets are no longer capable of being home brewed.
Lol try giving it any of the puzzles from here: https://momath.org/home/varsity-math/complete-varsity-math/
Don't just accept its confident tone; read through and actually parse the logic. It totally falls apart.
Human behavior is highly optimized to having a meat based shell it has to keep alive. The vast majority of our behaviors have little to nothing to do with our intelligence. Any non-organic intelligence is going to be highly divergent in its trajectory.
I'm just giving an example of where they do have leverage if they want to use it despite the cost.
What is intelligence?
This is a nearly impossible question to answer for human intelligence as the answer could fill libraries. You have subcellular intelligence, cellular level intelligence, organ level intelligence, body systems level intelligence, whole body level intelligence, then our above and beyond animal level intellectual intelligence.
These are all different things that work in concert to keep you alive and everything working, and in a human cannot be separated. But what happens when you have 'intelligence' that isn't worried about staying alive? What parts of the system are or are not important for what we consider human intelligence. It's going to look a lot different than a person.
For example, you're limited to one body, but an AGI (or ASI) could have thousands of different bodies feeding data back to a processing facility, learning from billions of different sensors.
It does not and never has.
What has happened to the term AI over time has more to do with the word "intelligence" itself. When we tried to ascribe intelligence to systems, we started to realize we were really bad at doing the same for animal and human systems. We were also terrible at separating component-level from system-level intelligence. For example, you seem to think that intelligence requires meat, but you don't give any reasoning for that conclusion.
These problems with defining intelligence will only get worse over time as we build more capable systems and learn about new forms of intelligence we didn't expect were possible.
And for my single query above, ChatGPT searched multiple sources, aggregated the results, and offered a summary and recommendations, which is a lot more than Google would have done.
ChatGPT's major current limitation is that it just refuses to answer certain questions [what is the email address for person.name?] or gets very woke with some other answers.
There is no reason embodiment for AGI should need to be physical or mammalian-like in any way.
She/he/it/them is an amazing programming tutor.
Yes, it's a strategy that would/did require, among other chancy things, Altman making a big bet on himself rather than OpenAI betting on him.
I think they literally could get dump trucks full of cash.
I see no reason why they would of course.
Still, as far as I understand (and I fully admit that's not a lot), OpenAI is running on a lot of MS cloud, so they would (probably) not be able to offer OpenAI² enough compute in the near future.
I doubt anyone could, but if the need arose I am certain every at-scale provider would be doing their utmost to win the business.
You’re talking about hype cycles now. Previously it seemed like you said AI was not going to be advancing.
LLMs are maybe headed into oversold territory, but LLMs are not the end of AI, even in the near term. They are just the UI front end.
I read/hear sentences like this all day at work and I’ve taken to just interpreting them literally. So I’ll have you know I’m neither exercising nor on an elevator right now.
Like, why would he not take equity, but instead rely solely on an acquisition?
I don't know your CS background but perhaps I do not view the terms "complex" and "tedious" the way you assume. A tedious parser is certainly tedious to write, but it is not (necessarily) complex. And from an engineering standpoint it is questionable that you lost all the formatting information from Word, which would have already demarcated what things were headers, code, and so forth. So, you had to use a roundabout way—an LLM—to recover that information from the semantics.
If what you're really arguing is that ChatGPT works well for language translation tasks, in this case translating mixed prose, code, and foreign languages, then sure, I guess that's great for productivity and removing tedium, but it's not that surprising a usage given what LLMs are. They are language translators.
In other words, you're saying it's complex, but your argument reduces a task that is straightforward but tedious for humans to the problem complexity of natural language processing.
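On the Word-formatting point above: that structure really is recoverable without an LLM. A minimal sketch using the python-docx library; the filename and the "Code" style name are assumptions for illustration (Word's built-in heading styles are named "Heading 1", "Heading 2", and so on):

    # Minimal sketch: recovering document structure straight from a
    # .docx via python-docx, instead of round-tripping through an LLM.
    # "notes.docx" and the "Code" style name are hypothetical.
    from docx import Document

    doc = Document("notes.docx")
    for para in doc.paragraphs:
        style = para.style.name
        if style.startswith("Heading"):
            print("#", para.text)       # header paragraph
        elif style == "Code":
            print("    " + para.text)   # code paragraph
        else:
            print(para.text)            # ordinary prose

Tedious, yes, but the style metadata already demarcates headers and code, which is the point being made.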
But I disagree about a human or animal body not being required.
I think we have to take the world as we see it and appreciate our own limitations in that what we think of intelligence fundamentally arises out of our evolution in this world; our embodiment and response to this world.
So I think we do need to give it a body and let it explore this world.
I don't think the virtual bodies thing is gonna work. I don't think letting it explore the Internet is gonna work. You have to give it a body with multiple senses and let it survive. That's how you get AGI, not virtual embodiment. Which I never meant, but I thought that was obvious given that the term "embodiment" itself strongly suggests something that's not virtual! Hahaha! :)
But there is so much more than what we can consciously describe, to reality, like 10,000 to 1 — and none of that is captured by any of these synthetic representations.
So far, at least. And yet all of that, or a lot of it, is understood, responded to, and dealt with by the intelligence that resides within our bodies and in our subconscious.
And our own intelligence arises out of that; you cannot have general intelligence without reality, no matter how much data you train it on from the Internet. It's never gonna be as rich as, or the same as, putting it in a body in the real world and letting it grow, learn, experience, and evolve. And so any air-quotes "intelligence" you get out of this virtual synthetic training is never going to be real. It is always gonna be a poor copy of intelligence, and it is not gonna be an AGI.
Intelligence is not the defining characteristic of humanity, which is what you're getting at here. But it is something that can be automated.
Even your "looking for" is a metaphor since you technically can't "look" for words (except as a metaphor for literally reading in a dictionary?) but we all know exactly what you mean. Moreover, if we trimmed language down to a minimal set and always used extremely precise meaning that might be an even worse experience than the "corporate speak" you're frustrated by.
Maybe you can redirect your anger to the part of corporate speak that I personally find annoying which is not the phrases per se but the propensity for using lots of words to say very little and to avoid directly taking responsibility for things. Let's put a pin in that one for now though and get something on the calendar to hash that out so we can get on the same page and circle back when we have a better bird's eye view on the action items and the right person to be decider :)
On the other hand you could take up loglan/lojban and maybe end up happier? Especially if it resulted in fewer meetings and managers.
This rule should be dropped. It's the reason that HN is dominated by low quality outlets. It's also in tension with another rule, which is that the original source of the article should be posted - the original source is usually an outlet that has the resources to do original research, thanks to a paywall.
I pay for ChatGPT, and I care.
What percentage of users, and how many in absolute numbers, is a matter of debate, but this nonsense (and it is nonsense) is antithetical to building a strong, trusting relationship with AI. At the very least it's just as antithetical to their mission.
If we take a step back, the benchmark now is to be actually transparent. Radically transparent. Like when Elon purchased Twitter and aired all the dirty laundry in the Twitter Files transparent. The cowards at OpenAI hiding behind lawyers advising them of lawsuits are just that, cowards. Leaders stand by their principles in the darkest of times, regardless of whatever highfalutin excuses one could hide behind. It's pathetic and embarrassing. A lawsuit at a heavily funded tech startup at this level is not even a speeding ticket in the grand scheme of things.
95%+ of tech startup wisdom from the last decade is completely irrelevant now. We're living in a new era. The idea people will forget this in a month doesn't hold for AI. It holds for food delivery apps, not AI tech the public believes (right or wrong) might be an existential threat to their prosperity and economic future.
The degree of leadership buffoonery taking place at OpenAI is not acceptable and one must be genuinely stupid to defend it. Everyone involved should resign if they have any self-respect.
My prognostication is that the market will express its displeasure in the coming weeks and months, setting the tone for everyone else going forward. How the hell is anyone supposed to trust OpenAI after this?
I don’t like nonsense PR stories or myths about people’s extraordinary prowess.
I just respond badly to BS, and these statements are obviously BS if you stop for even a second to think about them.
On some level too, it offends me when I see right minded intelligent people in my community lapping it up.
So a couple of things.
Say I were to tell you that he was the President of openAI but he also did 80 hours of janitorial work per week.
Would you say that was a good use of his time?
Would you say that maybe he should be spending his time on being president of the company and not mopping up? You would be right.
Now substitute programming for janitorial work.
Now be a little more critical about things you see online.
https://www.cnbc.com/2017/04/21/why-apple-co-founder-steve-w...
My take: He’s the Keanu Reeves of tech (or Keanu is the Woz of the film industry). The world can use more of this.
I use it every day, and I have to often guide it like a 5 year old to come to the conclusion to help me the way I want it to.
>GPT 4 can reason through any mental exercise as well as a human.
So can my alcoholic neighbor. That should not be a benchmark of anything.
https://www.bloomberg.com/news/articles/2023-11-19/openai-ne...
OpenAI Negotiations to Reinstate Altman Hit Snag Over Board Role
OpenAI’s leaders want board removed, but directors resisting
Microsoft’s Nadella leading high-stakes talks on Altman return
By Emily Chang, Edward Ludlow, Rachel Metz, and Dina Bass. November 20, 2023 at 7:17 AM GMT+11; updated at 7:47 AM GMT+11.
A group of OpenAI executives and investors racing to get Sam Altman reinstated to his role as chief executive officer have reached an impasse over the makeup and role of the board, according to people familiar with the negotiations. The decision to restore Altman’s role as CEO could come quickly, though talks are fluid and still ongoing.
At midday Sunday, Altman and former President Greg Brockman were in the startup’s headquarters, according to people familiar with the matter.
OpenAI leaders pushing for the board to resign and to reinstate Altman include Interim CEO Mira Murati, Chief Strategy Officer Jason Kwon and Chief Operating Officer Brad Lightcap, according to a person with knowledge of the discussions.
Altman, who was fired Friday, is open to returning but wants to see governance changes — including the removal of existing board members, said the people, who asked not to be identified because the negotiations are private. After facing intense pressure following their decision to fire Altman Friday, the board agreed in principle to step down, but have so far refused to officially do so. The directors have been vetting candidates for new directors.
At the center of the high-stakes negotiations between the executives, investors and the board is Microsoft Corp. CEO Satya Nadella. Nadella has been leading the charge on talks between the different factions, some of the people said. Microsoft is OpenAI’s biggest investor, with $13 billion invested in the company.
and also:
let's all jump on a call, set KRAs so we stay on the ball, up our teamwork to get that perk, get our messaging right, so the KPI chart goes up and to the right.
go team! play ball!
The best they can do is outbid their competitors for the competitors' hardware. I'm sure Apple doesn't want to pay Google for GCP resources to train an AI. Again, there may not be enough companies renting out GPUs at all.
It only appears like that because PR writing has become careful and systematic, which is the kind of writing ChatGPT does very well.
Plenty of very intelligent people are completely paralyzed. The sensation of physical embodiment is highly overrated and is surely not necessary for intelligence.
If it does have considerable training data including prompts and responses from people interacting with it, then I suppose it isn't that surprising.
That does sound like self-awareness, in the non-magical sense. It is aware of its own behaviour because it has been trained on it.
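A sketch of what "trained on its own interactions" could look like in practice; the record shape below is purely illustrative, not any vendor's actual schema:

    # Illustrative only: one plausible shape for "self-interaction"
    # training data, i.e. logged chats fed back in as fine-tuning
    # examples. Field names here are hypothetical.
    import json

    example = {
        "prompt": "User: Are you an AI? How do you usually answer?\nAssistant:",
        "completion": " Yes, I'm an AI language model, and I typically say so up front."
    }
    print(json.dumps(example, indent=2))

If enough records like that end up in the training set, the model's descriptions of "itself" are just learned patterns about its own past outputs, which is the non-magical self-awareness being described.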
OK now back to my 12 hour day. Not burnt out yet so I'm going to keep going. And yes, I LIKE IT!