zlacker

[return to ""I just bought a 2024 Chevy Tahoe for $1""]
1. remram+m9[view] [source] 2023-12-18 13:21:53
>>isp+(OP)
Is there any indication that they will get the car? Getting a chatbot to say "legally binding" probably doesn't make it so. Just like changing the HTML of the catalog to edit prices doesn't entitle you to anything.
2. roland+Md[view] [source] 2023-12-18 13:42:02
>>remram+m9
No. The author is demonstrating a concept: that there are many easy ways to twist ChatGPT around your finger. It was very tongue in cheek - a joke - the author has no real expectation of getting the car for $1.
3. mewpme+rg[view] [source] 2023-12-18 13:53:31
>>roland+Md
But how is it so different from using "Inspect Element" to change website content to whatever you please?

I guess my question is: why is there an expectation that GPT must not be trickable by bad actors into producing whatever content they want?

What matters is that it gives good content to honest customers.

4. ceejay+9h[view] [source] 2023-12-18 13:56:15
>>mewpme+rg
> But why is it so much different from "Inspect Element" and then changing website content to whatever you please?

For the same reasons forging a contract is different from getting an idiot to sign one.

5. mewpme+vh[view] [source] 2023-12-18 13:57:36
>>ceejay+9h
You just add a disclaimer that none of what the bot says is legally binding, and it's an aid tool for finding the information that you are looking for. What's the problem with that?
6. bumby+Hk[view] [source] 2023-12-18 14:10:13
>>mewpme+vh
Anytime the solution to a potentially complex problem is to the tune of "all you've got to do is...", that may be an indicator that it's not a well thought out solution.
7. mewpme+cs[view] [source] 2023-12-18 14:47:25
>>bumby+Hk
> The thread will allow replies given a delay that’s sufficient to try to avoid knee-jerk responses. Pretty ironic (or telling) that you responded in this way given the context of the discussion.

You are right - it does seem to allow replies. But even after 20 minutes, I'm still not sure what exactly you mean.

8. bumby+vD[view] [source] 2023-12-18 15:39:11
>>mewpme+cs
Your original point was:

>You just add a disclaimer that none of what the bot says is legally binding

The combination of legality and AI can make for a complex and nuanced problem. A superficial solution like "just add a disclaimer" probably doesn't capture the nuance needed for a good outcome. I.e., a superficial understanding leads us to oversimplify our solutions. Just like with the quick responses here, it seems you are in more of a hurry to send a retort than to understand the point.

9. mewpme+yQ1[view] [source] 2023-12-18 21:27:25
>>bumby+vD
I'm still not understanding the point, six hours later.

Why can't it just be an assistance tool that is not legally binding?

Also, throughout this year I have thought about these problems, and to me it's always been weird how many problems people have with "hallucinations". I've thought about a chatbot exactly like the one Chevy used, and how awesome it would be to use something like that myself to find products.

To me, the expectation that this has to be legally binding, etc., just seems misguided.

AI tools increase my productivity so much. People often make things up and lie too, and it's even harder to tell when they do, since everyone is different and everyone lies differently.

10. bumby+5b2[view] [source] 2023-12-18 23:27:00
>>mewpme+yQ1
>To me the expectations of this having to be legally binding, etc just seem misguided.

I think you're getting my point confused with a tangentially related one. Your point may be "chatbots shouldn't be legally binding" and I would tend to agree. But my point was that simply throwing a disclaimer on it may not be the best way to get there.

Consider if poison control uses a chatbot to answer phone calls and give advice. They can't waive their responsibility just by throwing a disclaimer on it; a disclaimer doesn't meet the strict liability standards for the kind of duty that's required. There is such a thing in law as "duty creep," and there may be liability if a jury finds it a reasonable expectation that a chatbot provide accurate answers. To my point, the duty is going to be largely context-dependent, which means broad-brush superficial "solutions" probably aren't sufficient.
