zlacker

[parent] [thread] 2 comments
1. wunder+(OP)[view] [source] 2023-12-18 14:47:40
A real Orderbot has the menu items and prices as part of the chat context. So an attacker can just overwrite them.

During my Ekoparty presentation about prompt injections, I talked about Orderbot Item-On-Sale Injection: https://youtu.be/ADHAokjniE4?t=927

We will see these kind of attacks in real world applications more often going forward - and I'm sure some ambitious company will have a bot complete orders at one point.

replies(1): >>alonso+O8
2. alonso+O8[view] [source] 2023-12-18 15:30:15
>>wunder+(OP)
I would expect these bots will be calling an ordering backend API which will validate the price of the items and the total. Are you suggesting people will plug open ended APIs that allow the bots to charge any amount without validations?

I think the first step will be replacing frontends with these bots, so most of the business logic should still apply and this won't be a valid attack vector. Horrible UX tho, as the transaction will fail.

replies(1): >>wunder+q51
◧◩
3. wunder+q51[view] [source] [discussion] 2023-12-18 20:00:14
>>alonso+O8
>> Are you suggesting people will plug open ended APIs that allow the bots to charge any amount without validations?

Certainly. A good example (not an Orderbot, but real world exploit) was "Chat with Code" Plugin, where ChatGPT was given full access to the Github API (which allowed to do many other things then reading code):

https://embracethered.com/blog/posts/2023/chatgpt-chat-with-...

If there are backend APIs, there will be an API to change a price or overwrite a price for a promotion and maybe the Orderbot will just get the context of a Swagger file (or other API documentation) and then know how to call APIs. I'm not saying every LLM driven Orderbot will have this problem, but it will be something to look for during security reviews and pentests.

[go to top]