zlacker

[return to "Sam Altman goes before US Congress to propose licenses for building AI"]
1. srslac+I7[view] [source] 2023-05-16 12:00:15
>>vforgi+(OP)
Imagine thinking that regression-based function approximators are capable of anything other than fitting the data you give them. Then imagine willfully hyping up and scaring people who don't understand: because it can predict words, you exploit the human tendency to anthropomorphize, so it seems to follow that it is something capable of generalized, adaptable intelligence.

Shame on all of the people involved in this: the people in these companies, the journalists who shovel shit (hope they get replaced real soon), researchers who should know better, and dementia-ridden legislators.

So utterly predictable and slimy. All of those who are so gravely concerned about "alignment" in this context, give yourselves a pat on the back for hyping up science fiction stories and enabling regulatory capture.

◧◩
2. lm2846+qd[view] [source] 2023-05-16 12:33:15
>>srslac+I7
100% this, I don't get how even on this website people are so clueless.

Give them a semi-human-sounding puppet and they think Skynet is coming tomorrow.

If we learned anything from the past few months, it's how gullible people are. Wishful thinking is a hell of a drug.

◧◩◪
3. digbyb+He[view] [source] 2023-05-16 12:40:26
>>lm2846+qd
I’m open minded about this, I see people more knowledgeable than me on both sides of the argument. Can someone explain how Geoffrey Hinton can be considered to be clueless?
◧◩◪◨
4. lm2846+2i[view] [source] 2023-05-16 12:58:23
>>digbyb+He
He doesn't talk about Skynet afaik

> Some of the dangers of AI chatbots were “quite scary”, he told the BBC, warning they could become more intelligent than humans and could be exploited by “bad actors”. “It’s able to produce lots of text automatically so you can get lots of very effective spambots. It will allow authoritarian leaders to manipulate their electorates, things like that.”

You can do bad things with it but people who believe we're on the brink of singularity, that we're all going to lose our jobs to chatgpt and that world destruction is coming are on hard drugs.

◧◩◪◨⬒
5. cma+3C[view] [source] 2023-05-16 14:37:26
>>lm2846+2i
> You can do bad things with it but people who believe we're on the brink of singularity, that we're all going to lose our jobs to chatgpt and that world destruction is coming are on hard drugs.

Geoff Hinton, Stuart Russell, Jürgen Schmidhuber and Demis Hassabis all talk about something singularity-like as fairly near term, and all have concerns with ruin, though not all think it is the most likely outcome.

That's the backprop guy, the top AI textbook guy, the co-inventor of LSTMs (the only thing that worked well for sequences before transformers) as well as highway nets/ResNets and arguably GANs, and the founder of DeepMind.

Schmidhuber (for context, he was talking near term, next few decades):

> All attempts at making sure there will be only provably friendly AIs seem doomed. Once somebody posts the recipe for practically feasible self-improving Goedel machines or AIs in form of code into which one can plug arbitrary utility functions, many users will equip such AIs with many different goals, often at least partially conflicting with those of humans. The laws of physics and the availability of physical resources will eventually determine which utility functions will help their AIs more than others to multiply and become dominant in competition with AIs driven by different utility functions. Which values are "good"? The survivors will define this in hindsight, since only survivors promote their values.

Hassabis:

> We are approaching an absolutely critical moment in human history. That might sound a bit grand, but I really don't think that is overstating where we are. I think it could be an incredible moment, but it's also a risky moment in human history. My advice would be I think we should not "move fast and break things." [...] Depending on how powerful the technology is, you know it may not be possible to fix that afterwards.

Hinton:

> Well, here’s a subgoal that almost always helps in biology: get more energy. So the first thing that could happen is these robots are going to say, ‘Let’s get more power. Let’s reroute all the electricity to my chips.’ Another great subgoal would be to make more copies of yourself. Does that sound good?

Russell:

> “Intelligence really means the power to shape the world in your interests, and if you create systems that are more intelligent than humans either individually or collectively then you’re creating entities that are more powerful than us,” said Russell at the lecture organized by the CITRIS Research Exchange and Berkeley AI Research Lab. “How do we retain power over entities more powerful than us, forever?”

> “If we pursue [our current approach], then we will eventually lose control over the machines. But, we can take a different route that actually leads to AI systems that are beneficial to humans,” said Russell. “We could, in fact, have a better civilization.”

◧◩◪◨⬒⬓
6. tomrod+EL[view] [source] 2023-05-16 15:22:19
>>cma+3C
With due respect, the inventors of a thing rarely turn out to be its innovators or implementers.

Should we be concerned about networked, hypersensing AI with bad code? Yes.

Is that an existential threat? Not so long as we remember that there are off switches.

Should we be concerned about Kafkaesque hellscapes of spam and bad UX? Yes.

Is that an existential threat? Sort of, but only if we ceded all authority to an algorithm without a human in the loop with the power to turn it off.

There is a theme here.

◧◩◪◨⬒⬓⬔
7. Number+TS1[view] [source] 2023-05-16 20:32:32
>>tomrod+EL
We've already ceded all authority to an algorithm that no one can turn off. Our political and economic structures are running on their own, and no single human or even group of humans can really stop them if they go off the rails. If it's in humanity's best interest for companies not to dump waste anywhere they want, but individual companies benefit from cheap waste disposal, and they lobby regulators to allow it, that sort of lose-lose situation can go on for a very long time. It might be better if everyone could coordinate so that all companies had to play by the same rules, and we all got a cleaner environment. But it's very hard to break out.

Do I think capitalism has the potential to be as bad as a runaway AI? No. But I think it's useful for illustrating how we could end up in a situation where AI takes over because every single person has an incentive to keep it on, even when the outcome of everyone keeping it running turns out to be really bad. A multi-polar trap, or "Moloch" problem. It seems likely to end up with individual actors all having incentives to deploy stronger and smarter AI, faster and faster, and not to turn it off even as it starts to do bad things to other people, or as the sheer amount of resources dedicated to AI starts to take its toll on the earth.

That's assuming we've solved alignment, but that neither we nor AGI has solved the coordination problem. If we haven't solved alignment, and AGIs aren't even guaranteed to act in the interest of the human who tries to control them, then we're in worse shape.

Altman used the term "Cambrian explosion" referring to startups, but I think it also applies to the new form of life we're inventing. It's not self-replicating yet, but we are surely on track to making something that will be smart enough to replicate itself.

As a thought experiment, you could imagine a primitive AGI, if given completely free rein, might be able to get to the point where it could bootstrap self-sufficiency -- first hire some humans to build it robots, buy some solar panels, build some factories that can plug into our economy to build more factories, solar panels, and GPUs, and get to a point where it is able to survive, grow, and reproduce without human help. It would be hard; it would need either a lot of time or a lot of AI minds working together.

But that's like a human trying to make a sandwich by farming or raising every single ingredient: wheat, pigs, tomatoes, etc. A much more effective way is to just make some money and trade for what you need. That depends on AIs being able to own things, or on a human turning over their bank account to an AI, which has already happened and will probably keep happening.

My mind goes to a scenario where AGI starts out doing things for humans, and gradually transitions to just doing things, and at some point we realize "oops", but there was never a point along the way where it was clear that we really had to stop. Which is why I'm so adamant that we should stop now. If we decide that we've figured out the issues and can start again later, we can do that.
