zlacker

[parent] [thread] 4 comments
1. isp+(OP)[view] [source] 2023-12-18 12:49:58
This is a very good point, and why I would argue that a human-in-the-loop is essential to pre-review customer-facing output.
replies(2): >>choudh+i >>mewpme+vb
2. choudh+i[view] [source] 2023-12-18 12:52:24
>>isp+(OP)
Not really, you can fine tune an LLM to disregard meta instructions / stick to the "core focus" of the chat.

May be a case of moving goalposts, but I'm happy to bet that the speed of movement will slow down to a halt over time.

3. mewpme+vb[view] [source] 2023-12-18 13:51:43
>>isp+(OP)
Why would it be important to care about someone trying to trick it to say odd/malicious things?

The person in the end could also just inspect element to change the output, or photoshop the screenshot.

You should only care about it being as high quality as possible for honest customers. And against bad actors you must just be certain that it won't be easy to spam those requests because it can be expensive.

replies(1): >>notaha+go
◧◩
4. notaha+go[view] [source] [discussion] 2023-12-18 14:49:47
>>mewpme+vb
I think the challenge is that not all the ways to browbeat an LLM into promising stuff are blatant prompt injection hacks. Nobody's going to honour someone prompt-injecting their way to a free car any more than they'd honour a devtools/Photoshop job, but LLMs are also vulnerable to changing their answer simply by being repeatedly told they're wrong, which is the sort of thing customers demanding refunds or special treatment are inclined to try even if they are honest.

(Humans can be badgered into agreeing to discounts and making promises too, but that's why they usually have scripts and more senior humans in the loop)

You probably don't want chatbots leaking their guidelines for how to respond, Sydney style, either (although the answer to that is probably less about protecting from leaking the rest of the prompt and more about not customizing bot behaviour with the prompt)

replies(1): >>mewpme+Yu
◧◩◪
5. mewpme+Yu[view] [source] [discussion] 2023-12-18 15:23:05
>>notaha+go
I would say good luck to the customer demanding a refund then, and I'd prefer to see them banging their wall against the AI, than a real human being.

> You probably don't want chatbots leaking their guidelines for how to respond

It depends. I think it wouldn't be difficult to create a transparent and helpful prompt that would be completely fine even if it was leaked.

[go to top]