zlacker

[return to "Introducing Superalignment"]
1. Chicag+m9[view] [source] 2023-07-05 17:40:08
>>tim_sw+(OP)
From a layman's perspective on cutting-edge AI, I can't help but be a bit turned off by some of the copy. It seems to go out of its way to use deliberately exuberant language to make the risks seem even more significant, so that as a side effect it implies the technology being worked on is extremely advanced. I'm trying to understand why it rubs me the wrong way here in particular, when, frankly, it's about the norm everywhere else (see Tesla with FSD, etc.).
2. omeze+2f[view] [source] 2023-07-05 17:57:58
>>Chicag+m9
Yes, I have the same impression. If you consider the concrete objectives, this is a good announcement:

- they want to make benchmarking easier by using AI systems

- they want to automate red-teaming and safety-checking ("problematic behavior", e.g. cursing at customers)

- they want to automate the understanding of model outputs ("interpretability")

Notice how absolutely none of these things requires "superintelligence" to exist in order to be useful? They're all bog-standard Good Things you'd want for any class of automated system, e.g. a great customer-service bot.

The superintelligence meme is tiring but we're getting cool things out of it I guess...

3. gooseu+2t[view] [source] 2023-07-05 18:46:18
>>omeze+2f
We'll get these cool things either way, no need to bundle them with the supernatural mumbo-jumbo, imo.

My take is that every advancement in these highly complex and expensive fields is dependent on our ability to maintain global social, political, and economic stability.

This insistence on the importance of Superintelligence and AGI as the path to Paradise or Hell is one of the many brain-worms going around that share this "Revelation" structure. It makes pragmatic discussion very difficult, and in turn actually makes it harder to maintain social, political, and economic stability.
