zlacker

1. mrtksn+(OP)[view] [source] 2023-12-18 13:35:45
The OpenAI platform supports function calling and documents (you can upload files that ChatGPT can refer to). For example, you can build an assistant that knows the specifics of your product and can take actions for you: it can offer the customer a car from the inventory that matches their requirements and schedule a test drive appointment. You don’t have to engineer or train an LLM; you can simply tell an existing one to act in a specific way.
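
A minimal sketch of how that setup might look with the OpenAI Python SDK's Assistants API as it existed around the time of this thread (late 2023); the dealership instructions and the schedule_test_drive function are hypothetical illustrations, not details from any real deployment:

  from openai import OpenAI

  client = OpenAI()

  # Hypothetical assistant: answers from uploaded inventory files and can
  # request a booking via a function call. Names/parameters are illustrative.
  assistant = client.beta.assistants.create(
      name="Dealership Assistant",
      model="gpt-4-1106-preview",
      instructions=(
          "You are a sales assistant for a car dealership. Only discuss "
          "our inventory and test drive scheduling."
      ),
      tools=[
          # lets the assistant search files you upload (e.g. inventory sheets)
          {"type": "retrieval"},
          {
              "type": "function",
              "function": {
                  "name": "schedule_test_drive",
                  "description": "Book a test drive appointment for a customer",
                  "parameters": {
                      "type": "object",
                      "properties": {
                          "vehicle_id": {"type": "string"},
                          "customer_name": {"type": "string"},
                          "datetime": {"type": "string",
                                       "description": "ISO 8601 date and time"},
                      },
                      "required": ["vehicle_id", "customer_name", "datetime"],
                  },
              },
          },
      ],
  )

When the model decides to book, it emits a tool call with those arguments and your own code performs the actual booking; the model never executes anything itself.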

In this particular case they screwed up the implementation.

replies(1): >>wahnfr+9h
2. wahnfr+9h[view] [source] 2023-12-18 14:54:47
>>mrtksn+(OP)
If this is a screw-up, what isn’t? You’re saying it’s user error rather than the tech being ineffective, so which sales chatbots do it right?
replies(1): >>mrtksn+1l
3. mrtksn+1l[view] [source] [discussion] 2023-12-18 15:14:27
>>wahnfr+9h
I don’t know of other sales chatbots; I’m simply explaining how this works. It appears they improved the implementation later.

Besides, what makes you think it’s ineffective? Is there any reason to believe the chatbot was bad at fulfilling legitimate user requests? FYI, someone making it act outside its intended purpose affects only that person’s experience.

It’s a DAN attack; people are having lots of fun with this type of prompt engineering.

It’s just some fun at the expense of the company paying for the API - the kind of fun kids had in the early days of the web by hacking websites to make them say something funny, just less harmful because no one else sees it.
