zlacker

[return to "Is Google’s 20-year search dominance about to end?"]
1. dilap+1b[view] [source] 2023-02-08 21:42:38
>>i13e+(OP)
Microsoft's integration of ChatGPT with Bing is really bad. No one wants a busy search page with a sidebar of ChatGPT; what's the point of that?

The correct interface for ChatGPT + search is just... ChatGPT. But it can also show you a list of web search results when it's appropriate.

A super-clean interface that always shows you exactly what you want.

That would be a killer feature and represent a real threat to Google.

◧◩
2. mattwa+Xc[view] [source] 2023-02-08 21:50:23
>>dilap+1b
As someone else said, the problem is that ChatGPT lies straight to your face, whereas Google's answers are at least based on structured data curated by someone. They're traceable, whereas I would never trust ChatGPT to tell me the correct temperature to cook a steak.
◧◩◪
3. nearbu+wh[view] [source] 2023-02-08 22:08:19
>>mattwa+Xc
Perhaps a bad example, since ChatGPT consistently gets the steak temperature right (or at least gives the same values as Google). Internal temperature of about 130-135°F (54-57°C) for medium rare, etc.
◧◩◪◨
4. emoden+Ci[view] [source] 2023-02-08 22:12:24
>>nearbu+wh
“It tells me the right answer” and “I trust it to give me the right answer” are two different propositions.
◧◩◪◨⬒
5. nearbu+ns[view] [source] 2023-02-08 22:53:50
>>emoden+Ci
It's just one data point, but it may indicate the commenter is miscalibrated on what ChatGPT answers well and what it doesn't.

It reminds me of when Wikipedia was new and we were told repeatedly that we couldn't trust it (by teachers, articles, etc.). It didn't matter if we could point to studies that found Wikipedia had similar accuracy to other encyclopedias. They objected on the grounds that anyone could edit it and anything on it _could_ be wrong and there's no publisher or paid editors to ensure accuracy.

ChatGPT tends to do well on common questions, where the answer is plastered in hundreds of articles across the internet. The internal cooking temperature of a steak is a great example of this. There are many other types of questions it fails at.

A better example of where you shouldn't trust ChatGPT is asking it the distance between two cities. It'll get it right for a few common pairs of cities (e.g. London to Paris), but it'll give you a wrong answer for most less common pairs (e.g. London to Cairo).

◧◩◪◨⬒⬓
6. Discou+lr1[view] [source] 2023-02-09 06:50:16
>>nearbu+ns
The difference between ChatGPT and Wikipedia is that Wikipedia actually cites its sources.

Also, ChatGPT is only knowledgeable about general things, and even there it makes errors. It's basically a very complex scraping algorithm; the more interesting part is the language generation. Even then, this stuff seems at least unethical, if not illegal, since it uses other people's work and research without citation.

◧◩◪◨⬒⬓⬔
7. zirgs+BE1[view] [source] 2023-02-09 08:59:17
>>Discou+lr1
In many cases Wikipedia's source links are dead, and nobody bothers to update them. Or they cite some random book that I have no way to get and read myself.

Who checks if that book actually exists and is not made up? Especially if it's not a highly politically charged topic.

And then there are circular citations. Someone posts unverified or false info on Wikipedia. Then it gets cited by some blog or other outlet that Wikipedia considers trustworthy. And that then gets added back to Wikipedia as a reliable source.

◧◩◪◨⬒⬓⬔⧯
8. Discou+bi3[view] [source] 2023-02-09 17:31:47
>>zirgs+BE1
Well, at least on Wikipedia you can check those things.
[go to top]