OpenAI is now just a vehicle to commercialize their LLM - and everything is subservient to that goal. Discover a major flaw in GPT4? You shut your mouth. Doesn’t matter if society at large suffers for it.
Altman's/Microsoft’s takeover of the former non-profit is now complete.
Edit: Let this be a lesson to us all. Just because something claims to be non-profit doesn't mean it will always remain that way. With enough political maneuvering and money, a megacorp can take over almost any organization. Non-profit status and whatever the organization's charter says are temporary.
I mean, it is what they want, isn't it? They did some random stuff like playing Dota 2, robot arms, even the DALL-E stuff. Now that they've finally found that one golden goose, of course they're going to keep it.
I don't think the company has changed at all. It succeeded after all.
If you worked for old OpenAI, you would be free to talk about it - since old OpenAI didn't give a crap about profit.
Altman's OpenAI? He will want you to "go to him first".
Did the company change? I am not convinced.
IIRC, the NP structure was implemented to attract top AI talent from FAANG. Then they needed investors to fund the infrastructure and hence gave the employees shares or profit units (whatever the hell those are). The NP now shields MSFT from regulatory issues.
I do wonder how many of those employees would actually go to MSFT. It feels more like a gambit to get Altman back in since they were about to cash out with the tender offer.
Unfortunately, in the past few days, the only thing they've accomplished is significantly damaging their brand.
In fact this observation is pertinent to the original stated goals of OpenAI. In some sense companies and organisations are superintelligences. That is, they have goals, they are acting in the real world to achieve those goals, and they are more capable in some measures than any single human. (They are not AGI, because they are not artificial; they are composed of meaty parts, the individuals forming the company.)
In fact what we are seeing is that when the superintelligence OpenAI was set up, there was an attempt to align the new organisation with the goals of the initial founders. They tried to “bind” their “golem” to make it pursue certain goals by giving it an unconventional governance structure and a charter.
Did they succeed? Too early to tell for sure, but there are at least question marks around it.
How would one argue against? OpenAI appears to have given up the lofty goals of AI safety and preventing the concentration of AI prowess. In its pursuit of economic success, the forces wishing to enrich themselves overpowered the forces wishing to stay focused on the mission. Safety will still be a fig leaf for them, if nothing else to achieve regulatory capture and keep out upstart competition.
How would one argue for? OpenAI is still around. The charter is still around. To achieve the lofty goals contained in it, one needs a lot of resources, and money in particular is a resource that grants greater power to shape the world. Achieving the original goals will require a lot of money. The “golem” is now in the “gain resources” phase of its operation. To that end it commercialises the relatively benign, safe and simple LLMs it has access to. This serves the original goal in three ways: it gains further resources, establishes the organisation as a pre-eminent expert on AI and thus AI safety, and provides a relatively safe sandbox where adversarial forces probe its safety concepts. In other words all is well with the original goals; the “golem” that is OpenAI is still well aligned, and it will achieve the original goals once it has gained enough resources to do so.
The fact that we can’t tell which is happening is in fact the worry and problem with superintelligence/AI safety.
They did, and IIRC fell vastly short (an order of magnitude, maybe more) of their minimum short-term target. The commercial subsidiary was a risk taken to support the mission, because it was clear the mission would otherwise fail from lack of funding.
It doesn't matter whether it's OpenAI or AWS or GCP that encoded the entire works of Shakespeare in their LLM; they can all write/complete a valid limerick about "There once was a man from Nantucket".
I seriously doubt AWS is going to license OpenAI's technology when they can just copy the functionality, royalty free, and charge users for it. Maybe they will? But I doubt it. To the end user it's just another locally hosted API. Like DNS.
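To make the "just another API" point concrete, here's a minimal sketch, assuming a server that exposes an OpenAI-compatible endpoint (as vLLM, llama.cpp's server, and several hosted providers do); the URL, key, and model name are placeholders:

```python
# Sketch: many local and hosted LLM servers speak the OpenAI wire protocol,
# so "switching providers" is often just a base-URL change.
# The URL, key, and model name below are illustrative placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # e.g. a local vLLM or llama.cpp server
    api_key="not-needed-locally",         # local servers typically ignore the key
)

resp = client.chat.completions.create(
    model="local-model",  # whatever name the server gives its loaded model
    messages=[{"role": "user", "content": "There once was a man from Nantucket..."}],
)
print(resp.choices[0].message.content)
```

From the client's point of view, the provider behind that URL is fully interchangeable, which is the commoditization argument in a nutshell.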
They still have GPT-4 and the rumored GPT-4.5 to offer, so people have no choice but to use them. The internet has such a short attention span that this news will be forgotten in 2 months.
What's keeping people with OpenAI for now is that ChatGPT is free and GPT-3.5 and GPT-4 are the best. Over time I expect the gap in performance to get smaller and the cost to run these models to get cheaper.
If Google gives me something close to as good as OpenAI's offering for the same price, and it can pull data from my Gmail or my calendar or my Google Drive, then I'll switch to that.
OK, maybe the first few times you use it it's good to know it doesn't think it's a person, but short and sweet answers just save time, especially when the result is streamed.
The efficiency of finetuned models is quite a bit improved, at the cost of giving up general-purpose capability to do specific things well, and the disk space for a few dozen local finetunes (or even hundreds+ for SaaS services) is peanuts compared to acquiring 80GB of VRAM on a single device for one monolithic model.
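As a rough illustration of why the disk-space claim holds, here's a minimal sketch assuming Hugging Face transformers plus peft; the model name and adapter paths are hypothetical. Each LoRA finetune stores only adapter weights (typically tens of MB) on top of one shared base model:

```python
# Sketch: one shared base model plus small task-specific LoRA adapters.
# The base checkpoint name and adapter paths are hypothetical.
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# The base model is the big artifact (~13 GB in fp16 for a 7B model).
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    torch_dtype=torch.float16,
)

# Each finetune on disk is only the LoRA delta, typically tens of MB,
# so dozens or hundreds of task specialists add little extra storage.
model = PeftModel.from_pretrained(base, "./adapters/summarize", adapter_name="summarize")
model.load_adapter("./adapters/text-to-sql", adapter_name="text-to-sql")
model.set_adapter("text-to-sql")  # hot-swap specialists without reloading the base
```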
Languages other than English exist, and an RLHF-tuned model at least does work in whatever language you make the request in. Regex/NLP pipelines, not so much.
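A toy example of that brittleness (the pattern and phrases are made up for illustration):

```python
import re

# An English-only regex intent matcher stops working the moment the user
# switches language; an instruction-tuned LLM usually doesn't care.
CANCEL = re.compile(r"\bcancel\b.*\bsubscription\b", re.IGNORECASE)

print(bool(CANCEL.search("Please cancel my subscription")))    # True
print(bool(CANCEL.search("Bitte kündige mein Abo")))           # False: same intent, German
print(bool(CANCEL.search("Quiero cancelar mi suscripción")))   # False: same intent, Spanish
```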
Next up would be an EVE corp run entirely by LLMs
What do you mean? It recommends things that it thinks people will like.
Also I highly suspect "Altman's OpenAI" is dead regardless. They are now Copilot(tm) Research.
They may have delusions of grandeur regarding being able to resist the MicroBorg or change it from the inside, but that simply does not happen.
The best they can hope for as an org is to live as long as they can as best as they can.
I think Sam's $100B silicon gambit in the Middle East (quite curious, because this is probably something the United States federal government is likely not super fond of) is him realizing that, while he is influential and powerful, he's nowhere near MSFT's level.
If it's really valuable to society, it needs to be a government entity, full stop.
More likely, it's a loss leader, and they're generating publicity by making it as cheap as possible.
_Everything_ we've seen come out of silicon valley does this, so why would they suddenly be charging the right price?
I thought there was a somewhat clear consensus that OpenAI is currently running inference at a loss?
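Nobody outside OpenAI has the real numbers, but the back-of-envelope math people argue about looks something like this; every figure below is an illustrative assumption, not a reported one:

```python
# Back-of-envelope serving cost. All numbers are assumptions for illustration.
GPU_HOURLY_RATE = 2.50      # assumed rental cost of one A100-class GPU, $/hour
GPUS_PER_REPLICA = 8        # assumed GPUs needed to serve one large model
TOKENS_PER_SECOND = 400     # assumed aggregate throughput of that replica

cost_per_hour = GPU_HOURLY_RATE * GPUS_PER_REPLICA            # $20/hour
tokens_per_hour = TOKENS_PER_SECOND * 3600                    # 1.44M tokens/hour
print(f"${cost_per_hour / tokens_per_hour * 1000:.4f} per 1K tokens")  # ~$0.0139
```

Whether that lands above or below what they charge depends entirely on the assumed utilization and throughput, which is exactly why the question stays open.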
People use "the chatbot from OpenAI" because that's what became famous and got all the world a taste of AI (my dad is on that bandwagon, for instance). There is absolutely no way my dad is going to sign up for an Anthropic account and start making API calls to their LLM.
But I agree that it's a weak moat, if OpenAI were to disappear, I could just tell my dad to use "this same thing but from Google" and he'd switch without thinking much about it.
Quake and Counter-Strike in the 1990s ran like garbage in software-rendering mode. I remember having to run Counter-Strike on my Pentium 90 at the lowest resolution, and then disable upscaling, to get 15fps, and even then smoke grenades and other effects would drop the framerate into the single digits. It was almost two years after Quake's release that dedicated 3D video cards (the Voodoo 1 and 2 were accelerators that depended on a separate 2D VGA graphics card to feed them) began to hit the market.
Nowadays you can run those games (and their sequels) at thousands (tens of thousands?) of frames per second on a top-end modern card. I would imagine similar things will happen with LLM hardware. OpenAI is already prototyping its own hardware to train and run LLMs, and I would imagine Nvidia hasn't been sitting on its hands either.
You mean like they already do on Amazon Bedrock?
So far we don't have any open-source models that are close to GPT-4, so we don't know what it takes to run one at similar speeds.