zlacker

[parent] [thread] 6 comments
1. famous+(OP)[view] [source] 2023-07-05 17:42:35
OpenAI spent at least hundreds of millions on GPT-4 compute. Assuming they aren't lying, a fifth of their compute budget (billions) is an awful lot of money to put toward an issue they don't think is as pertinent as they're presenting it.

Not that I think Super Intelligence can be aligned anyway.

Point is, whether they are right or wrong, I believe they genuinely think this to be an issue.

replies(3): >>rmilej+ha >>anothe+1r >>arisAl+vM
2. rmilej+ha[view] [source] 2023-07-05 18:15:06
>>famous+(OP)
Just curious, why might we not be able to align superintelligence? I'm extremely ignorant in this space, so forgive me if it's a dumb question, but I'm definitely curious to learn more.
replies(1): >>famous+8j
◧◩
3. famous+8j[view] [source] [discussion] 2023-07-05 18:47:50
>>rmilej+ha
1. Models aren't "programmed" so much as "grown". We know how GPT is trained, but we don't know what exactly it is learning in order to predict the next token. What do the weights do? We don't know. This is obviously problematic because it makes interpretability not much better than for humans. How can you expect to control something you don't even understand?

2. Hundreds of thousands of years on earth and we can't even align ourselves.

3. Superintelligence would by definition be unpredictable. If we could predict its answers to our problems, it wouldn't be necessary. You can't control what you can't predict.

4. anothe+1r[view] [source] 2023-07-05 19:21:05
>>famous+(OP)
A more cynical take would be they'll be spending the compute on more mundane engineering problems like making sure the AI doesn't say any naughty words, while calling it "Super Intelligence Alignment Research."
replies(1): >>skinne+fp2
5. arisAl+vM[view] [source] 2023-07-05 20:59:52
>>famous+(OP)
It's very obvious that it is an issue. Everyone but a few denialists gets it instantly: "hey, would you like to build something smarter without knowing how to control it?"
◧◩
6. skinne+fp2[view] [source] [discussion] 2023-07-06 09:15:21
>>anothe+1r
This effort is led by Ilya Sutskever. Listening to a bunch of interviews with him, and talking to a bunch of people who know him personally, I don't think he cares at all about AIs saying naughty words.
replies(1): >>anothe+fI2
◧◩◪
7. anothe+fI2[view] [source] [discussion] 2023-07-06 11:52:40
>>skinne+fp2
OpenAI certainly does. They are sending out emails to people using the AI for NSFW roleplay, warning them they'll be banned if they continue. They've also recently updated their API to make it harder to generate NSFW content.