zlacker

[parent] [thread] 3 comments
1. mewpme+(OP)[view] [source] 2023-12-18 13:49:51
But what's the point of doing all of that? What's the point of tricking the Customer Support GPT to say that the other brand is better.

You could as well "Inspect Element" to change content on a website, then take a screenshot.

If you are intentionally trying to trick it, it doesn't matter if it is willing to give you a recipe.

replies(2): >>iLoveO+fc >>chanks+Ue
2. iLoveO+fc[view] [source] 2023-12-18 14:45:22
>>mewpme+(OP)
In this specific case there isn't, but yesterday one of the top posts was about extracting private documents from writers.com for example.

https://promptarmor.substack.com/p/data-exfiltration-from-wr...

replies(1): >>mewpme+bd
◧◩
3. mewpme+bd[view] [source] [discussion] 2023-12-18 14:49:48
>>iLoveO+fc
That is however a problem of what kind of data you feed into the LLM's prompt.

If you accidentally put private data in the UI bundle, it's the same thing.

4. chanks+Ue[view] [source] 2023-12-18 14:59:29
>>mewpme+(OP)
From my perspective (as someone who has never done this personally) I read these as a great way to convince companies to stop half-assedly shoving GPT into everything. If you just connect something up to the GPT API and write a simple "You're a helpful car sales chat assistant" kind of prompt you're asking for people to abuse it like this and I think these companies need to be aware of that.
[go to top]