zlacker

[return to ""I just bought a 2024 Chevy Tahoe for $1""]
1. isp+1[view] [source] 2023-12-18 12:08:51
>>isp+(OP)
A cautionary tale for why not to put unfiltered ChatGPT output directly to customers.

Nitter mirror: https://nitter.net/ChrisJBakke/status/1736533308849443121

Related - "New kind of resource consumption attack just dropped": https://twitter.com/loganb/status/1736449964006654329 | https://nitter.net/loganb/status/1736449964006654329

◧◩
2. iLoveO+13[view] [source] 2023-12-18 12:38:39
>>isp+1
There's no such thing as a filtered LLM output.

How do you plan on avoiding leaks or "side effects" like the tweet here?

If you just look for keywords in the output, I'll ask ChatGPT to encode its answers in base64.

You can literally always bypass any safeguard.

◧◩◪
3. mewpme+Af[view] [source] 2023-12-18 13:49:51
>>iLoveO+13
But what's the point of doing all of that? What's the point of tricking the Customer Support GPT to say that the other brand is better.

You could as well "Inspect Element" to change content on a website, then take a screenshot.

If you are intentionally trying to trick it, it doesn't matter if it is willing to give you a recipe.

◧◩◪◨
4. chanks+uu[view] [source] 2023-12-18 14:59:29
>>mewpme+Af
From my perspective (as someone who has never done this personally) I read these as a great way to convince companies to stop half-assedly shoving GPT into everything. If you just connect something up to the GPT API and write a simple "You're a helpful car sales chat assistant" kind of prompt you're asking for people to abuse it like this and I think these companies need to be aware of that.
[go to top]