ChatGPT is the number-one brand in AI, and as such the company needs to focus on what it's selling, not how its technology works. It always sucks when mission and vision don't align with the nerds' ideas, but I think it's probably the best move for both parties.
I'm not as in tune as some people here, so: don't they need both? With the rate at which things are moving, how could it be otherwise?
They have a strong focus on making the existing models fast and cheap without sacrificing capability, which is music to the ears of those looking to build with them.
That’s not true at all. The biggest issue is that it doesn’t work. You can’t actually trust AI systems, and that’s not a product issue.
Pretty weak take there, bud. If we just look at the Gartner Hype Cycle that marketing and business people love so much, it would seem to me that we are at the Peak of Inflated Expectations, just before the slide into the Trough of Disillusionment.
They are hyping hard to sell more when they should be prepping for the coming dip, building up their tech and research side so they come out the other side intact.
Regardless, a tech company without the inventors is doomed to fail.
But this race to add 'AI' into everything is producing a lot of nonsense. I'd rather go full steam ahead on the science and the new models, because that is what will actually get us something decent, rather than milking what we already have.
Meanwhile, OpenAI (and the rest of the folks riding the hype train) will soon enter the trough. They're not diversified and I'm not sure that they can keep running at a loss in this post-ZIRP world.
I see this coming for OpenAI for sure, and I do my part by just writing this comment on HN.
The AI itself isn’t the product; the ChatGPT interface, layered on top of the core AI tech, is the main product.
The issue is that trustworthiness isn’t solvable by applying standard product-management techniques on a predictable schedule. It requires scientific research.
I don't know about that; it seems to work just fine at creating spam and clone websites.
Not for long. They have no moat. The folks who did the science are now doing science for other companies, and they will blow the pants off OpenAI.
Things have been moving fast because we had a bunch of top-notch scientists in companies, paired with top-notch salesmen/hype machines. But you need the combination.
Hypemen make promises that can't be kept, but they get absurd amounts of funding for doing so. Scientists fill in as many of the gaps as possible, and also get crazy resources thanks to that funding. Obviously this train can't run forever, but I think you can see that one of these groups is a bit more important than the other, while the other is more of a catalyst for it (it makes things happen faster).
For a lot of (very profitable) use cases, hallucinations and 80/20 results are actually more than good enough, especially when they're replacing solutions that are even worse.
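For what it's worth, here's a minimal back-of-envelope sketch of that point. Every number in it is a made-up assumption for illustration, not real data from anywhere: the idea is just that a model that's wrong 20% of the time can still undercut an all-human workflow once cleanup costs are priced in.

    # Back-of-envelope sketch: when does an ~80%-accurate model beat an
    # all-human workflow? All numbers below are hypothetical assumptions.

    HUMAN_COST_PER_TASK = 5.00    # assumed cost of a human doing one task
    MODEL_COST_PER_TASK = 0.05    # assumed cost of one model call
    MODEL_ACCURACY = 0.80         # the "80/20" figure from the comment above
    CLEANUP_COST_PER_MISS = 6.00  # assumed cost to catch and redo a bad output

    def expected_cost_with_model() -> float:
        """Expected per-task cost when the model handles everything and
        humans only clean up the ~20% of outputs it gets wrong."""
        failure_rate = 1.0 - MODEL_ACCURACY
        return MODEL_COST_PER_TASK + failure_rate * CLEANUP_COST_PER_MISS

    if __name__ == "__main__":
        print(f"all-human:      ${HUMAN_COST_PER_TASK:.2f} per task")
        print(f"model + review: ${expected_cost_with_model():.2f} per task")
        # With these assumed numbers the model flow wins ($1.25 vs $5.00)
        # even though each miss costs more than a human doing the task
        # right the first time.

Under those (again, invented) numbers, the hallucinating model comes out at a quarter of the human cost per task, which is the sense in which "good enough" can be very profitable.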
From a business point of view, you don't want to be first to market. You want to be second or third.
Producing spam has some margin on it, but is it really very profitable? And what else is there?
Google or Meta (I don't remember which) just put out a report about how many human-hours they saved last year using transformers for coding.