> there's an entire collection of things that can be answered directly by ChatGPT. [...] assuming chatGPT is good enough eventually
I am a lot less impressed with this. And I know I'm an outlier and plenty of people are shocked at how good GPT is at this kind of problem, so I am constantly second-guessing myself and wondering, "are we using the same product?" Because I think ChatGPT produces really bad quality information. It's cool, it's wildly impressive, it's a massive achievement and an incredible milestone for AI, but 'cool' is different from 'useful.'
Leaving aside the problem that answering simple questions is a very small subset of what search is used for, and probably isn't on its own a big enough category of questions to make me change search engines, the bigger problem is that the current state of ChatGPT seems to be wildly inconsistent about what it knows and what it doesn't know, and I don't have a way to predict which categories of information are safe to ask about. And the only way for me to verify the answers it gives me is to... double check its work with a real search.
I would not advise anyone to ask ChatGPT for advice about what drugs are safe to take while high, that seems profoundly unwise to me.
So it's a bit like Instant Answers. Google has been trying to auto-answer questions for ages, and in practice it's only ever been useful for me when it's extremely predictable: when I know that a category of question will only ever have its answer pulled from one site, and when I know what the format of that answer will be.
Unpredictability is generally a quality that I try to avoid any time I am using a computer. One of the primary strengths of a computer to me is specificity and predictability. And so the bar here is really high. The question I ask myself is, "would I want to replace a search engine with a human assistant?" And I think the answer is no, I feel like that would be missing the point of what a search engine is. And ChatGPT gives worse answers than a human assistant would, and its sources/knowledge are just as unpredictable as a human's, if not more so. So, I also don't want to replace my search engine with ChatGPT.
It could get more accurate in the future, and if it does then maybe my opinion will change, but... it's hard for me to get excited about using a worse product today on the promise that it might get better in the future. And I guess it's accurate enough that a bunch of people keep telling me that they're saving time when they use it, so maybe I don't know what I'm talking about. But I just don't see how people are reaching that conclusion unless they're either asking questions where they don't actually care about the accuracy, or unless they're just rolling the dice and trusting that ChatGPT won't accidentally poison them when they ask what drug combinations they can take.
First, you have people who don't have any idea how any of this works and are generally far removed from the tech communities. They did not see this coming. To this group, ChatGPT is a fascinating toy. It's not perfect, but at this point everyone is conditioned to believe that things will somehow get better. This group is excited.
Second, you have the tech community, who are skeptical. This group of people sees everything that's wrong with ChatGPT and the magnitude of work needed for anything to even start approaching Google as a credible threat. This group is generally confused by the excitement going around because it doesn't seem warranted, and worse, the excitement is coming from people who should know better. There's a range of responses, from being dismissive to feeling like they're being gaslighted.
Third, you have the people who are filling out the YC Summer 23 applications. They're all looking for big, unsaturated markets to build a pitch deck around, and to this group ChatGPT looks like a very promising signpost that says "look for ideas here." They are excited. Most of them will fail. But if anyone survives and manages to thrive, where will they be 10 years from now? How about FDA-approved chatbots, integrated with a blood pressure monitor and a thermometer, that can take a first pass at routine prescription refills at 1/100th the cost of an equivalent doctor's appointment? How about live translation of television events synthesized back into the original speaker's voice? How about video game engines that can synthesize music loops dynamically to keep up with the pace of that particular gaming session?
Sure, you might say, none of that "dethrones Google." My response to that is: what role does a text-based internet play in daily life 10 years from now? Everything on the internet today went something along this path: primary research -> classrooms -> textbooks -> niche blogs/forums -> mainstream websites.
10 years from now, would you bet against primary research -> ChatGPT ingress -> widely deployed ChatGPT model? What role do ad driven websites play in this chain? What role does a search engine play in this chain?
Sure, it doesn't make the internet or search engines obsolete. But it changes how we do things. Potentially in a very big way.
I suspect I'm a little bit more skeptical about this than other people might be? 10 years out is a long way to predict and I'm hesitant to try. But I might be primed the wrong way by looking at voice assistants/video/etc..., where I think the format changes have been somewhat exaggerated in the past, and where I think people have traditionally underestimated how much staying power traditional models have. There were a lot of things that were supposed to kill a traditional text-based Internet, but the only thing that's come close is video, and that seems to get a lot of blowback; I'm not sure it was an improvement.
But regardless, your comment is an insightful perspective that gives me some alternative ways of looking at this. So I think I agree. I might just be more skeptical about how well-suited the current tools are for building the kinds of applications you're describing, but I get the theory.
And generative AI, as separate from AI answering questions, is kind of another story; I can absolutely imagine potential creative impacts around stuff like music loop generation, art generation, etc... I'm similarly looking at the current state of things and saying, "well... the tech doesn't seem to be as good as people say it is, so I don't want to start using it now before it improves," but I can at least imagine how things might change if the tech does get good. I'm not sure it would be a "revolution" or that it's going to put artists out of business or whatever, but it could potentially lower the barrier to entry for certain applications. And I think LLMs for general website classification would be a really good use that I wish were being pursued more.
At the least though, even if I'm skeptical, I can definitely understand why someone would have that perspective, and your summation of the different groups rings true to me.
I still don't think I want it in my search box today though :) I don't think LLMs are useless at all; I'm just not sure search specifically is a good use for them.