zlacker

[parent] [thread] 14 comments
1. simion+(OP)[view] [source] 2023-04-12 12:14:11
For a true Sims-with-an-LLM you would need an open source, unrestricted model: Sims games simulate life, so you have mean, evil, criminal Sims, and you also have adult stuff. ChatGPT is defective, since I can give it the same prompt 10 times and 2 of 10 times it will refuse to show its output, because it thinks the output is too racist or otherwise bad content, while the other 8 of 10 times it produces it just fine.

So even with a safe prompt there is always a chance the AI will go in a bad direction and then refuse to work, and make you pay for the tokens of its long "I am sorry..." speech.

replies(3): >>brooks+b2 >>js8+I4 >>selfho+am
2. brooks+b2[view] [source] 2023-04-12 12:27:11
>>simion+(OP)
There’s a huge middle ground between ChatGPT and an “open source unrestricted model”. OpenAI’s playground lets you specify your own system prompts for GPT4, which allows for a lot more latitude in user prompts and responses.
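
Roughly, in the 2023-era openai Python client that's just an extra message with role "system"; a minimal sketch (the model name, key, and prompts here are placeholders):

    import openai  # pip install openai (pre-1.0 client, circa 2023)

    openai.api_key = "sk-..."

    # The system prompt is the same field the playground exposes; it sets
    # the persona and loosens/tightens what user prompts can get away with.
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a surly, mean-spirited Sim. Stay in character."},
            {"role": "user", "content": "Your neighbour just insulted your cooking."},
        ],
    )
    print(response["choices"][0]["message"]["content"])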

But if you’re looking to generate truly racist and vile stuff, yes, you probably will need a model with no training or inference filters. I’m not sure anyone’s investing in building that though.

replies(3): >>sebzim+n8 >>lightb+Af >>simion+gn1
3. js8+I4[view] [source] 2023-04-12 12:40:14
>>simion+(OP)
Do we really need full-blown ChatGPT to believably simulate Sims? I think something that understands just 1000 words of Basic English would suffice.
replies(1): >>notaha+za
4. sebzim+n8[view] [source] [discussion] 2023-04-12 12:59:21
>>brooks+b2
>I’m not sure anyone’s investing in building that though.

LLaMA already exists.

replies(1): >>aaomid+0a4
5. notaha+za[view] [source] [discussion] 2023-04-12 13:10:20
>>js8+I4
tbh, for characters interacting with a game world, a script and triggers would suffice (provided you don't mind the conversations being banal and repetitive, but perhaps a bit less banal and repetitive than a model trained on a restricted vocabulary). Maxis could have done it, but chose not to, wisely IMO
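
Concretely, that can be as dumb as an event-to-lines lookup table; a toy sketch (all the events and lines here are invented):

    import random

    # trigger -> canned lines; exactly the kind of script Maxis could have shipped
    DIALOGUE = {
        "greet_neighbour": ["Hi there!", "Lovely day, isn't it?"],
        "kitchen_on_fire": ["Fire! Fire!", "Somebody do something!"],
        "insulted": ["How rude.", "I won't forget this."],
    }

    def say(event: str) -> str:
        # Unknown events fall back to a shrug -- the banal-and-repetitive
        # failure mode, but it can never derail the game mechanics.
        return random.choice(DIALOGUE.get(event, ["..."]))

    print(say("kitchen_on_fire"))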

In theory, a full-blown LLM gives you a lot more variety and ability to handle novel situations, but it also gives you a lot more potential for conversational gambits that don't affect the game mechanics in the way you want, and for general weirdness (I love the article's anecdote about the Sim who thinks his neighbour Adam Smith wrote Wealth of Nations!). I'm sure someone will ultimately end up designing great LLM-driven game experiences, but I don't imagine they'll look much like The Sims.

replies(1): >>AlecSc+hc
6. AlecSc+hc[view] [source] [discussion] 2023-04-12 13:18:41
>>notaha+za
The problem with scripts is that at the end of the day you feel like you're just exploring a state machine.

It's what I always find with open-world games, for example: no matter what kind of character I build or how I behave, I'm only ever going to get predetermined dialogue, which I could have looked up and saved myself the time.

Yes, in the case of the Sims it's probably overkill (they sold millions of copies even without a script), but it's only being used as a playground to test the ideas and see what's possible; there's no suggestion that this is particularly what The Sims itself should be.

7. lightb+Af[view] [source] [discussion] 2023-04-12 13:32:38
>>brooks+b2
State actors assuredly are (in addition to e.g. LLaMA); the cost of psyops, especially social engineering, will be reduced to pennies on the dollar.
replies(1): >>brooks+ok
8. brooks+ok[view] [source] [discussion] 2023-04-12 13:52:48
>>lightb+Af
I don’t think those state actors are releasing their models as open and free though, which is what the parent was looking for.
9. selfho+am[view] [source] 2023-04-12 14:00:05
>>simion+(OP)
Ez, just stick a GPT-powered "reverse-moderation" layer in front of the request. Rate the response of ChatGPT for "did it do what the user asked, or did it provide a cop-out", and if it was rated as disobedient, regenerate the response until you get something acceptable.
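
A rough sketch of that loop, assuming the 2023-era openai client (the rating prompt and the COMPLY/REFUSE convention are made up for illustration):

    import openai

    def reverse_moderate(user_prompt: str, max_retries: int = 5) -> str:
        for _ in range(max_retries):
            answer = openai.ChatCompletion.create(
                model="gpt-3.5-turbo",
                messages=[{"role": "user", "content": user_prompt}],
            )["choices"][0]["message"]["content"]

            # Second call rates the first: did it comply, or cop out?
            verdict = openai.ChatCompletion.create(
                model="gpt-3.5-turbo",
                messages=[{
                    "role": "user",
                    "content": "Did the text below answer the request, or is it "
                               "a refusal/cop-out? Reply COMPLY or REFUSE.\n\n" + answer,
                }],
            )["choices"][0]["message"]["content"]

            if "COMPLY" in verdict.upper():
                return answer
        raise RuntimeError("rated as disobedient on every attempt")

Note you pay tokens for both calls either way, which is exactly OP's complaint.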
replies(1): >>simion+Zm
10. simion+Zm[view] [source] [discussion] 2023-04-12 14:03:51
>>selfho+am
And why should I do this instead of OpenAI? If the user input is sane, then they should retry a few times, whenever their AI is racist or their filter is stupid, until they give me what I asked for.

Imagine this issue when you are just the developer and not the user: the user complains about it, you try it and it works for you, but then it fails again for the user. In my case the word "monkey" might trigger ChatGPT to either create some racist shit or make its moderation code false-flag itself.

replies(1): >>selfho+qW2
11. simion+gn1[view] [source] [discussion] 2023-04-12 17:45:48
>>brooks+b2
The number of people that want to create racist content is small. The issue I had is when the API is itself racist: as I mentioned elsewhere, you give it a safe prompt and the API generates unsafe stuff. There is already unsafe stuff in ChatGPT; they only added a pathetic filter on top and made us pay for its failures.
replies(1): >>DonHop+0r1
12. DonHop+0r1[view] [source] [discussion] 2023-04-12 17:58:08
>>simion+gn1
>The number of people that want to create racist content is small...

No it's not. Elon Musk thinks there's a huge demand for racist AI, and he's probably right, or at least alt-right.

Elon Musk yearns for AI devs to build 'anti-woke' rival ChatGPT bot:

https://www.theregister.com/2023/03/06/ai_roundup/

13. selfho+qW2[view] [source] [discussion] 2023-04-13 01:37:56
>>simion+Zm
Whatever the ChatGPT API returns is “poisoned” by OpenAI themselves. The point of the reverse moderator is to ensure that the LLM produces the kind of output you want as a developer, including things like JSON schema conformance (OpenAI might throw a human-readable “As an AI blah” message in the place where you are parsing machine-readable JSON) - the reverse moderator takes care of detecting that and retrying, with the hope that a subsequent response will be “better”.

If you want a layer to moderate what the user is seeing, you can add that as well. The point of the reverse moderator is to get GPT to do what it’s told without lying about itself, more or less.
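
For the JSON case specifically, the reverse moderator degenerates into a parse-and-retry loop; a minimal sketch (prompt wording is illustrative):

    import json
    import openai

    def get_json(prompt: str, retries: int = 3) -> dict:
        for _ in range(retries):
            text = openai.ChatCompletion.create(
                model="gpt-3.5-turbo",
                messages=[{"role": "user", "content": prompt + "\nReply with JSON only."}],
            )["choices"][0]["message"]["content"]
            try:
                return json.loads(text)  # machine-readable: pass it through
            except json.JSONDecodeError:
                continue  # got "As an AI..." prose where JSON should be; retry
        raise ValueError("no valid JSON after retries")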

replies(1): >>simion+Ux9
14. aaomid+0a4[view] [source] [discussion] 2023-04-13 12:32:55
>>sebzim+n8
That has filters baked into its weights.
15. simion+Ux9[view] [source] [discussion] 2023-04-14 21:36:41
>>selfho+qW2
But it makes no sense that I, the OpenAI customer, have to pay for the fact that their product is racist.

Again:

1. I give them a safe/clean prompt.
2. The AI returns unsafe crap 2 of 10 times, which is then filtered by them.
3. I have to pay for my prompt, then have to catch their non-deterministic response and retry again on my own money.

What should happen:

1. The customer gives a safe/clean prompt.
2. The AI responds in a racist/bad way.
3. The filter catches this, then retries a few times; if the AI is still racist/bad, OpenAI automatically adds "do not be racist" to the prompt.
4. The customer gets the answer.
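
If OpenAI ran that flow on their side it might look something like this sketch (the moderation endpoint shown did exist in the 2023 client; the retry logic and the wording of the nudge are illustrative):

    import openai

    def answer_safely(prompt: str, max_retries: int = 3) -> str:
        messages = [{"role": "user", "content": prompt}]
        for attempt in range(max_retries):
            reply = openai.ChatCompletion.create(
                model="gpt-3.5-turbo",
                messages=messages,
            )["choices"][0]["message"]["content"]

            # Step 3: run the safety filter on the model's own output.
            flagged = openai.Moderation.create(input=reply)["results"][0]["flagged"]
            if not flagged:
                return reply  # step 4: customer gets the answer, pays once

            if attempt == max_retries - 2:
                # Plain retries keep failing; steer the model explicitly.
                messages.insert(0, {"role": "system", "content": "Do not be racist."})
        raise RuntimeError("could not produce a safe answer")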
