[return to "Introducing Superalignment"]
1. Chicag+m9 2023-07-05 17:40:08
>>tim_sw+(OP)
From a layman's perspective on cutting-edge AI, I can't help but be a bit turned off by some of the copy. It goes out of its way to use purposefully exuberant language to make the risks seem even more significant, so that, as an offshoot, it implies the technology being worked on is extremely advanced. I'm trying to understand why it rubs me the wrong way here in particular when, frankly, it's just about the norm everywhere else (see Tesla with FSD, etc.).
2. famous+ha 2023-07-05 17:42:35
>>Chicag+m9
OpenAI spent at least hundreds of millions of dollars on GPT-4 compute. Assuming they aren't lying, a fifth of their compute budget (billions) is an awful lot of money to put toward an issue they don't actually think is as pertinent as they present it.

Not that I think superintelligence can be aligned anyway.

Point is, whether they are right or wrong, I believe they genuinely think this to be an issue.

3. rmilej+yk 2023-07-05 18:15:06
>>famous+ha
Just curious, why might we not be able to align a superintelligence? I'm extremely ignorant in this space, so forgive me if it's a dumb question, but I'm definitely curious to learn more.
4. famous+pt 2023-07-05 18:47:50
>>rmilej+yk
1. Models aren't "programmed" so much as "grown". We know how GPT is trained, but we don't know exactly what it learns in order to predict the next token. What do the weights do? We don't know. That's obviously a problem, because it leaves the model not much more interpretable than a human is. How can you claim to control something you don't even understand? (A toy sketch of this point follows after this list.)

2. Hundreds of thousands of years on earth and we can't even align ourselves.

3. A superintelligence would by definition be unpredictable. If we could predict its answers to our problems, we wouldn't need it. You can't control what you can't predict.
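
To make (1) concrete, here's a minimal toy sketch, entirely my own and nothing to do with OpenAI's actual training code. It "grows" a bigram next-character model by gradient descent and then prints the weights: the result is just a matrix of floats, and nothing in it tells you what was learned or why.

    # Toy illustration (my own sketch, not OpenAI's code): a model is
    # "grown" by gradient descent, and the artifact is opaque numbers.
    import numpy as np

    text = "to be or not to be"
    chars = sorted(set(text))
    idx = {c: i for i, c in enumerate(chars)}
    V = len(chars)

    # Bigram model: W[i, j] is the logit of character j following character i.
    W = np.zeros((V, V))
    pairs = [(idx[a], idx[b]) for a, b in zip(text, text[1:])]

    for _ in range(500):  # "grow" W by gradient descent on cross-entropy
        for i, j in pairs:
            p = np.exp(W[i]) / np.exp(W[i]).sum()  # softmax over next chars
            grad = p.copy()
            grad[j] -= 1.0  # gradient of cross-entropy w.r.t. the logits
            W[i] -= 0.1 * grad

    print(np.round(W, 2))  # just numbers; nothing here says *what* was learned

Even in this 7x7 case you have to probe the matrix with inputs to figure out what it does. Scale that up to hundreds of billions of weights and you get the interpretability problem in point (1).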
