We've already trained it on all the data there is, it's not going to get "smarter" and it'll always lack true subjective understanding, so the overhype has been real, indeed to bubble levels as per OP.
What is your basis for those claims? Especially the first one; I would think it's obvious that it will get smarter; the only questions are how much and how quickly. As far as subjective understanding, we're getting into the nature of consciousness territory, but if it can perform the same tasks, it doesn't really impact the value.
AI is as real as the mobile/internet/pc revolution of the past.
So many use it obsessively every single day.
Sam & Greg could start a new AI company by Monday and instantly achieve unicorn valuation. Hardly a burst.
That's got to be worth something, since Alphabet is a $1.7T company mostly on the strength of ads associated with Google search.
I agree, it's greatly undervalued!
Again - to each their own. But GPT doesn't replicate what people use Google for anyway (which is what the Google business was built on): commercial info retrieval.
I haven't finished making up my mind, but the AIs are doing OK. I have only been asking for code snippets that are easily verifiable.
For the majority of my use of ChatGPT and Google, I need to be able to get useful answers to vague questions - answers that I can confirm for myself through other means - and I need to iterate on those questions to home in on the problem at hand. ChatGPT is undoubtedly superior to Google in that regard.
Look, I'm an AGI/AI researcher myself. I believe and bleed this stuff. AI is here to stay and is forever a part of computing in many ways. Sam Altman and others bastardized it by overhyping it to current levels, derailing real work. All the traction OpenAI has accumulated, outside of GitHub Copilot / Codex, is so far from product-market fit that people are trading on the novelty of AGI - of GPT/AI being on its way to "smarter than human" - rather than on any real usage.
Hype in tech is real. Overhype and bubbles are real. In AI in particular, there have been AI winters because of the overhype.
It's a bubble if the valuation is inflated beyond a reasonable expected future value. Usefulness isn't part of that. The important bit is 'reasonable', which is also the subjective bit.
1) We should agree on what we mean by smart or intelligent. That's really hard to do, so let's narrow it down to "does not hallucinate" the way GPT does, or, at a higher level, has a subjective understanding of its own that another agent can reliably come to trust. I can tell you that AI/deep learning/LLM hallucination is a technically unsolvable problem, so it'll never get "smarter" in that way.
2) This connects to number 1. Humans and animals of course aren't infinitely "smart;" we fuck up and hallucinate in ways of our own, but that's just it: we have a grounded truth of our own, born of a body and emotional experience that grounds our rational experience, or the consciousness you talk about.
So my claim is really one claim: that AI cannot perform the same tasks, or reach the "true" intelligence level of a human in the sense of not hallucinating like GPT, without having a subjective experience of its own.
There is no answer or understanding "out there;" it's all what we experience and come to understand.
This is my favorite topic. I have much more to share on it including working code, though at a level of an extremely simple organism (thinking we can skip to human level and even jump exponentially beyond that is what I'm calling out as BS).
Google search reminds me of Amazon reviews. Years ago, basically trustworthy, very helpful. Now ... take them with a tablespoon of salt and another of MSG.
And this is separate from the time-efficiency issue: "how quickly can I answer my complex question which requires several logical joins?", which is where ChatGPT really shines.
GPT is very useful as a knowledge tool, but I don’t see people going there to make purchasing decisions. It replaces stackoverflow and quora, not Google. For shopping, I need to see the top X raw results, with reviews, so I can come to my own conclusion. Many people even find shopping fun (I don’t) and wouldn’t want to replace the experience with a chatbot even if it were somehow objectively better.
Peak is perhaps the wrong word; it's a local maximum before falling into the Trough of Disillusionment.
Did you see the recent article about a restaurant changing its name to "Thai Food near me"?
People no longer using Google for the small stuff will be the beginning of the end of Google as the mental default for searches.
I go to Amazon if I want to find a book or a specific product.
For the latest news, I come here, or Reddit, or sometimes twitter.
If I want to look up information about a famous person or topic, I go to Wikipedia (usually via google search). I know I can ask ChatGPT, but Wikipedia is generally more up to date, well-written and highly scrutinized by humans.
The jury’s still out on exactly what role ChatGPT will serve in the long term, but we’ve seen this kind of unbundling many times before and Google is still just as popular and useful as ever.
It seems like GPT’s killer app is helping guide your learning of a new topic, like having a personal tutor. I don’t see that replacing all aspects of a general purpose search engine though.
Compare ChatGPT to a dog: a dog's experience of an audible "sit" command maps to that particular dog's history of experience, shaped through pain or pleasure (i.e. if you associate treat + "sit", you'll have a dog with its own grounded definition of sit). A human also learns words like "sit," and we always have our own understanding of those words, even if we can agree on them together to certain degrees along lines of linguistic corpora. In fact, the linguistic corpora are borne out of our experiences, our individual understandings, and that's a one-way arrow, so something trained purely on that resultant data is always an abstraction level away from experience, and therefore from true grounded understanding or truth. Hence GPT's (and all deep learning's) unsolvable hallucination and grounding problems.
And for my single query above, ChatGPT searched multiple sources, aggregated the results, and offered a summary and recommendations, which is a lot more than Google would have done.
ChatGPT's major current limitation is that it just refuses to answer certain questions [what is the email address for person.name?] or gets very woke with some other answers.
She/he/it/them is an amazing programming tutor.
You’re talking about hype cycles now. Previously it seemed like you said AI wasn't going to advance.
LLMs are maybe headed into oversold territory, but LLMs are not the end of AI, even in the near term. They are just the UI front end.