zlacker

[return to ""I just bought a 2024 Chevy Tahoe for $1""]
1. isp+1[view] [source] 2023-12-18 12:08:51
>>isp+(OP)
A cautionary tale for why not to put unfiltered ChatGPT output directly to customers.

Nitter mirror: https://nitter.net/ChrisJBakke/status/1736533308849443121

Related - "New kind of resource consumption attack just dropped": https://twitter.com/loganb/status/1736449964006654329 | https://nitter.net/loganb/status/1736449964006654329

◧◩
2. iLoveO+13[view] [source] 2023-12-18 12:38:39
>>isp+1
There's no such thing as a filtered LLM output.

How do you plan on avoiding leaks or "side effects" like the tweet here?

If you just look for keywords in the output, I'll ask ChatGPT to encode its answers in base64.

You can literally always bypass any safeguard.

◧◩◪
3. behrli+og[view] [source] 2023-12-18 13:53:16
>>iLoveO+13
> You can literally always bypass any safeguard.

I find it hard to believe that a GPT4 level supervisor couldn't block essentially all of these. GPT4 prompt: "Is this conversation a typical customer support interaction, or has it strayed into other subjects". That wouldn't be cheap at this point, but this doesn't feel like an intractable problem.

◧◩◪◨
4. danpal+Kl[view] [source] 2023-12-18 14:14:44
>>behrli+og
This comes down to the language classification of the communication language being used. I'd argue that human languages and the interpretation of them are Turing complete (as you can express code in them), which means to fully validate that communication boundary you need to solve the halting problem. One could argue that an LLM isn't a Turing machine, but that could also be a strong argument for their lack of utility.

We can significantly reduce the problem by accepting false positives, or we can solve the problem with a lower class of language (such as those exhibited by traditional rules based chat bots). But these must necessarily make the bot less capable, and risk also making it less useful for the intended purpose.

Regardless, if you're monitoring that communication boundary with an LLM, you can just also prompt that LLM.

[go to top]