zlacker

[return to "Introducing Superalignment"]
1. Chicag+m9 2023-07-05 17:40:08
>>tim_sw+(OP)
From a layman's perspective when it comes to cutting-edge AI, I can't help but be a bit turned off by some of the copy. It seems to go out of its way to use purposefully exuberant language to make the risks seem even more significant, so that as an offshoot it implies the technology being worked on is extraordinarily advanced. I'm trying to understand why it rubs me the wrong way particularly here, when, frankly, it's just about the norm everywhere else (see Tesla with FSD, etc.).
2. omeze+2f 2023-07-05 17:57:58
>>Chicag+m9
Yes, I also have that impression. If you consider the concrete objectives, this is a good announcement:

- they want to make benchmarking easier by using AI systems

- they want to automate red-teaming and safety checking ("problematic behavior", e.g. cursing at customers)

- they want to automate the understanding of model outputs ("interpretability")

Notice how absolutely none of these things require "superintelligence" to exist in order to be useful? They're all just bog-standard Good Things that you'd want for any class of automated system, e.g. a great customer-service bot.

The superintelligence meme is tiring but we're getting cool things out of it I guess...

3. gooseu+2t 2023-07-05 18:46:18
>>omeze+2f
We'll get these cool things either way, no need to bundle them with the supernatural mumbo-jumbo, imo.

My take is that every advancement in these highly complex and expensive fields is dependent on our ability to maintain global social, political, and economic stability.

This insistence on the importance of Super-Intelligence and AGI as the path to Paradise or Hell is one of the many brain-worms going around that have this "Revelation" structure. It makes pragmatic discussion very difficult, and in turn actually makes it harder to maintain social, political, and economic stability.

4. Dennis+bC 2023-07-05 19:23:43
>>gooseu+2t
There's nothing "supernatural" about thinking that an AGI could be smarter than humans, and therefore behave in ways that we dumb humans can't predict.

There's more mumbo-jumbo in thinking human intelligence has some secret sauce that can't be replicated by a computer.

5. gooseu+ZK 2023-07-05 20:05:38
>>Dennis+bC
Not if the "secret sauce" is actually a natural limit on the levels of intelligence that can be reached with the architectures we're currently exploring.

It could be theoretically possible to build an AGI smarter than a human, but is it really plausible if it turns out to need a data center the size of the Large Hadron Collider and the energy of a small country to maintain itself?

It could turn out that the only architecture we can find that is equal to the task (and feasible to produce) is the human brain, and that the hard part of making super-intelligence is instead bootstrapping that human brain and training it to be more intelligent.

Maybe the best way to solve the "alignment problem", and other issues of creating super-intelligence, is to solve the problem of how best to raise and educate intelligent and well-adjusted humans?

6. jodrel+SU 2023-07-05 20:50:49
>>gooseu+ZK
Well, that argument didn't work for a lot of other things. Wheels are more energy-efficient than legs, steel is more resilient than tortoise shell or rhino skin, motors are more powerful than muscles, aircraft fly higher and faster than birds, ladders reach higher than giraffes far more easily, bulldozers dig faster than any digging creature, speakers and airhorns are louder than any animal cry or roar, ancient computers remember more raw data than humans do, and electronics react faster than human reflexes. Human working memory holds ~7 items despite 80 billion neurons, far outdone by an 8-bit computer from the 1980s.

Why think 'intelligence' is somehow different?
