zlacker

1. Parano+ (OP) 2023-11-20 20:59:22
https://twitter.com/thiagovscoelho/status/172650681847663424...

Here's a tweet transcribing OpenAI interim CEO Emmett Shear's views on AI safety (see the YouTube video for the original source). Some excerpts:

Preamble on his general pro-tech stance:

"I have a very specific concern about AI. Generally, I’m very pro-technology and I really believe in the idea that the upsides usually outweigh the downsides. Every technology can be misused, but you should usually wait: eventually, as we understand it better, you want to put in regulations. But regulating early is usually a mistake. When you do regulate, you want to be making regulations that are about reducing risk and authorizing more innovation, because innovation is usually good for us."

On why AI would be dangerous to humanity:

"If you build something that is a lot smarter than us—not just somewhat smarter, but as much smarter than we are as we are than dogs, for example, a big jump—that thing is intrinsically pretty dangerous. If it gets set on a goal that isn’t aligned with ours, the first instrumental step to achieving that goal is to take control. If this is easy for it because it’s really just that smart, step one would be to just kind of take over the planet. Then step two, solve its goal."

On his path to safe AI:

"Ultimately, to solve the problem of AI alignment, my biggest point of divergence with Eliezer Yudkowsky, who is a mathematician, philosopher, and decision theorist, comes from my background as an engineer. Everything I’ve learned about engineering tells me that the only way to ensure something works on the first try is to build lots of prototypes and models at a smaller scale and practice repeatedly. If there is a world where we build an AI that’s smarter than humans and we survive, it will be because we built smaller AIs and had as many smart people as possible working on the problem seriously."

On why skeptics need to stop side-stepping the debate:

"Here I am, a techno-optimist, saying that the AI issue might actually be a problem. If you’re rejecting AI concerns because we sound like a bunch of crazies, just notice that some of us worried about this are on the techno-optimist team. It’s not obvious why AI is a true problem. It takes a good deal of engagement with the material to see why, because at first, it doesn’t seem like that big of a deal. But the more you dig in, the more you realize the potential issues.

"I encourage people to engage with the technical merits of the argument. If you want to debate, like proposing a way to align AI or arguing that self-improvement won’t work, that’s great. Let’s have that argument. But it needs to be a real argument, not just a repetition of past failures."
