zlacker

[parent] [thread] 42 comments
1. digbyb+(OP)[view] [source] 2023-05-16 12:40:26
I’m open-minded about this; I see people more knowledgeable than me on both sides of the argument. Can someone explain how Geoffrey Hinton can be considered clueless?
replies(4): >>Random+n2 >>srslac+s2 >>lm2846+l3 >>Workac+km
2. Random+n2[view] [source] 2023-05-16 12:53:08
>>digbyb+(OP)
Not clueless. However, is he an expert in socio-political-economic issues arising from AI or in non-existent AGI? Technical insight into AI might not translate into either.
replies(1): >>etiam+xV
3. srslac+s2[view] [source] 2023-05-16 12:53:48
>>digbyb+(OP)
Hinton, in his own words, asked PaLM to explain a dad joke he had supposedly come up with. He was so convinced that his clever, advanced joke would take a lifetime of experience to understand that when PaLM perfectly articulated why it was funny, he quit Google, and is, conveniently, still going to continue working on AI despite the "risks." Not exactly the best example.
replies(1): >>digbyb+94
4. lm2846+l3[view] [source] 2023-05-16 12:58:23
>>digbyb+(OP)
He doesn't talk about Skynet, afaik.

> Some of the dangers of AI chatbots were “quite scary”, he told the BBC, warning they could become more intelligent than humans and could be exploited by “bad actors”. “It’s able to produce lots of text automatically so you can get lots of very effective spambots. It will allow authoritarian leaders to manipulate their electorates, things like that.”

You can do bad things with it, but people who believe we're on the brink of the singularity, that we're all going to lose our jobs to ChatGPT, and that world destruction is coming are on hard drugs.

replies(4): >>digbyb+v4 >>cma+mn >>HDThor+Yy >>whimsi+fz1
◧◩
5. digbyb+94[view] [source] [discussion] 2023-05-16 13:02:26
>>srslac+s2
Hinton said that the ability to explain a joke was among the first things that made him reassess their capabilities. Not the only thing. You make it sound as though Hinton is obviously clueless, yet there are few people with deeper knowledge of and more experience working with neural networks. People told him he was crazy for thinking neural networks could do anything useful; now it seems people are calling him crazy for the reverse. I’m genuinely confused about this.
replies(2): >>srslac+ch >>reveli+Xh
◧◩
6. digbyb+v4[view] [source] [discussion] 2023-05-16 13:05:22
>>lm2846+l3
I’ll have to dig it up, but in the last interview I saw with him, he was focused more on the existential risk from potential superintelligence, not just misuse.
replies(1): >>tomrod+kw
◧◩◪
7. srslac+ch[view] [source] [discussion] 2023-05-16 14:07:54
>>digbyb+94
I didn't say he was clueless. It's just not in good faith to suggest there's probable existential risk on a media tour where you're being mined for quotes, and then continue to work on it.
◧◩◪
8. reveli+Xh[view] [source] [discussion] 2023-05-16 14:11:59
>>digbyb+94
Not clueless, but unfortunately engaging in motivated reasoning.

Google spent years doing nothing much with its AI because its employees (like Hinton) got themselves locked in an elitist hard-left purity spiral in which they convinced each other that if plebby ordinary non-Googlers could use AI they would do terrible things, like draw pictures of non-diverse people. That's why they never launched Imagen and left the whole generative art space to OpenAI, Stability and Midjourney.

Now the tech has finally leaked out of their ivory tower, the center of AI progress is no longer where he is, and Hinton finds himself at retirement age, no longer much in the mood for hard-core product development. What to do? Lucky for him, he lives in a world where the legacy media laps up any academic with a doomsday story. So he quits and starts enjoying the life of a celebrity public intellectual, being praised as a man of superior foresight and care for the world compared to those awful hoi polloi shipping products and irresponsibly not voting for Biden (see the last sentence of his Wired interview). If nothing happens and the boy cried wolf, nobody will mind; it'll all be forgotten. But if there's any way events can be twisted into an interpretation of reality in which AI is bad, he's suddenly the man of the hour, with Presidents and Prime Ministers queuing up to ask him what to do.

It's all really quite pathetic. Academic credentials are worth nothing with respect to such claims, and Hinton hasn't yet managed to articulate how, exactly, AI doom is supposed to happen. But our society doesn't penalize wrongness when it comes from such types, not even a tiny bit, so it's a cost-free move for him.

replies(1): >>digbyb+cq
9. Workac+km[view] [source] 2023-05-16 14:32:06
>>digbyb+(OP)
Given that AI's skill at programming showed up about 10 years sooner than anyone expected, I have seen a lot of cope in tech circles.

No one yet knows how this is going to go; the coping might turn into "See! I knew all along!" if progress fizzles out. But right now the threat is very real, and we're seeing the full spectrum of "humans under threat" behavior. It's very similar to the early pandemic, when you could find smart people with any take you wanted.

◧◩
10. cma+mn[view] [source] [discussion] 2023-05-16 14:37:26
>>lm2846+l3
> You can do bad things with it but people who believe we're on the brink of singularity, that we're all going to lose our jobs to chatgpt and that world destruction is coming are on hard drugs.

Geoff Hinton, Stuart Russell, Jürgen Schmidhuber and Demis Hassabis all talk about something singularity-like as fairly near term, and all have concerns with ruin, though not all think it is the most likely outcome.

That's the backprop guy, the top AI textbook guy, the co-inventor of LSTMs (the only thing that worked well for sequences before transformers) and highway nets/ResNets and arguably GANs, and the founder of DeepMind.

Schmidhuber (for context, he was talking near term, next few decades):

> All attempts at making sure there will be only provably friendly AIs seem doomed. Once somebody posts the recipe for practically feasible self-improving Goedel machines or AIs in form of code into which one can plug arbitrary utility functions, many users will equip such AIs with many different goals, often at least partially conflicting with those of humans. The laws of physics and the availability of physical resources will eventually determine which utility functions will help their AIs more than others to multiply and become dominant in competition with AIs driven by different utility functions. Which values are "good"? The survivors will define this in hindsight, since only survivors promote their values.

Hassabis:

> We are approaching an absolutely critical moment in human history. That might sound a bit grand, but I really don't think that is overstating where we are. I think it could be an incredible moment, but it's also a risky moment in human history. My advice would be I think we should not "move fast and break things." [...] Depending on how powerful the technology is, you know it may not be possible to fix that afterwards.

Hinton:

> Well, here’s a subgoal that almost always helps in biology: get more energy. So the first thing that could happen is these robots are going to say, ‘Let’s get more power. Let’s reroute all the electricity to my chips.’ Another great subgoal would be to make more copies of yourself. Does that sound good?

Russell:

> “Intelligence really means the power to shape the world in your interests, and if you create systems that are more intelligent than humans either individually or collectively then you’re creating entities that are more powerful than us,” said Russell at the lecture organized by the CITRIS Research Exchange and Berkeley AI Research Lab. “How do we retain power over entities more powerful than us, forever?”

> “If we pursue [our current approach], then we will eventually lose control over the machines. But, we can take a different route that actually leads to AI systems that are beneficial to humans,” said Russell. “We could, in fact, have a better civilization.”

replies(2): >>tomrod+Xw >>tome+wH
◧◩◪◨
11. digbyb+cq[view] [source] [discussion] 2023-05-16 14:49:56
>>reveli+Xh
I actually do hope you're right. I've been looking forward to an AI future my whole life and would prefer not to be worrying now about existential risk. It reminds me of when people started talking about how the LHC might create a black hole and swallow the Earth. But I have more confidence in the theories that convinced people that was nearly impossible than in what we're seeing now.

Everyone engages in motivated reasoning. The psychoanalysis you provide for Hinton could easily be spun in the opposite direction: a man who spent his entire adult life on neural networks, and will go down in history as their "godfather," surely would prefer for that to have been a good thing. Which would then give him even more credibility. But these are just stories we tell about people. It's the arguments we should be focused on.

I don't think "how AI doom is supposed to happen" is all that big of a mystery. The question is simply: "is an intelligence explosion possible"? If the answer is no, then OK, let's move on. If the answer is "maybe", then all the chatter about AI alignment and safety should be taken seriously, because it's very difficult to know how safe a super intelligence would be.

replies(1): >>reveli+KK
◧◩◪
12. tomrod+kw[view] [source] [discussion] 2023-05-16 15:19:43
>>digbyb+v4
The NYT piece implied that, but no, his concern was less about an existential singularity and more about immoral use.
replies(1): >>cma+YD1
◧◩◪
13. tomrod+Xw[view] [source] [discussion] 2023-05-16 15:22:19
>>cma+mn
With due respect, the inventors of a thing rarely turn into its innovators or implementers.

Should we be concerned about networked, hypersensing AI with bad code? Yes.

Is that an existential threat? Not so long as we remember that there are off switches.

Should we be concerned about Kafkaesque hellscapes of spam and bad UX? Yes.

Is that an existential threat? Sort of, if we ceded all authority to an algorithm without a human in the loop with the power to turn it off.

There is a theme here.

replies(6): >>woeiru+3D >>cma+dD >>digbyb+zE >>olddus+S81 >>Number+cE1 >>DirkH+8Q1
◧◩
14. HDThor+Yy[view] [source] [discussion] 2023-05-16 15:30:48
>>lm2846+l3
He absolutely does. The interview I saw with him on the PBS NewsHour was 80% him talking about the singularity and extinction risk. The interviewer asked him about more near-term risks, and he basically said he wasn't as worried about those as he was about a Skynet-type situation.
◧◩◪◨
15. woeiru+3D[view] [source] [discussion] 2023-05-16 15:45:52
>>tomrod+Xw
Did you even watch the Terminator series? I think sci-fi has been very adept at demonstrating how physical disconnects/failsafes are unlikely to work with super AIs.
◧◩◪◨
16. cma+dD[view] [source] [discussion] 2023-05-16 15:46:23
>>tomrod+Xw
> Is that an existential threat? Not so long as we remember that there are off switches.

Remember that there are off switches for human existence too, like whatever biological virus a superintelligence could engineer.

An off switch for a self-improving AI isn't as trivial as you make it sound if it gets to anything like what's described in those quotes, and even then you're assuming the human running it isn't malicious. We assume some level of sanity, at least, in the people in charge of nuclear weapons, but it isn't clear that AI will have the same large-state-actor barrier to entry or the same perception of mutually assured destruction if an actor were to use it against a rival.

replies(1): >>tomrod+Mu1
◧◩◪◨
17. digbyb+zE[view] [source] [discussion] 2023-05-16 15:52:27
>>tomrod+Xw
There are multiple risks that people talk about; the most interesting is the intelligence explosion. In that scenario we end up with a superintelligence. I don’t feel confident in my ability to assess the likelihood of that happening, but assuming it is possible, thinking through the consequences is a very interesting exercise. Imagining the capabilities of an alien superintelligence is like trying to imagine a 4th spatial dimension. It can only be approached with analogies. Can it be “switched off”? Maybe not, if it was motivated to prevent itself from being switched off. My dog seems to think she can control my behavior in various predictable ways, like sitting or putting her paw on my leg, and sometimes it works. But if I have other things I care about in that moment, things that she is completely incapable of understanding, then who is actually in control becomes very obvious.
◧◩◪
18. tome+wH[view] [source] [discussion] 2023-05-16 16:02:26
>>cma+mn
How can one distinguish this testimony from rhetoric by a group who want to big themselves up and make grandiose claims about their accomplishments?
replies(1): >>digbyb+bJ
◧◩◪◨
19. digbyb+bJ[view] [source] [discussion] 2023-05-16 16:07:46
>>tome+wH
You can also ask that question about the other side. I suppose we need to look closely at the arguments. I think we’re in a situation where we as a species don’t know the answer to this question. We go on the internet looking for an answer but some questions don’t yet have a definitive answer. So all we can do is follow the debate.
replies(2): >>tome+5N >>tome+Cn1
◧◩◪◨⬒
20. reveli+KK[view] [source] [discussion] 2023-05-16 16:14:07
>>digbyb+cq
> surely would prefer for that to have been a good thing. Which would then give him even more credibility

Why? Both directions would be motivated reasoning without credibility. Credibility comes from plausible articulations of how such an outcome would be likely to happen, which is lacking here. An "intelligence explosion" isn't something plausible or concrete that can be debated, it's essentially a religious concept.

replies(1): >>digbyb+h41
◧◩◪◨⬒
21. tome+5N[view] [source] [discussion] 2023-05-16 16:23:40
>>digbyb+bJ
> You can also ask that question about the other side

But the other side is downplaying their accomplishments. For example Yann LeCun is saying "the things I invented aren't going to be as powerful as some people are making out".

replies(1): >>cma+xP
◧◩◪◨⬒⬓
22. cma+xP[view] [source] [discussion] 2023-05-16 16:33:13
>>tome+5N
In his newest podcast interview (https://open.spotify.com/episode/7EFMR9MJt6D7IeHBUugtoE), LeCun is now saying they will be much more powerful than humans, but that stuff like RLHF will keep them from working against us because, by analogy, dogs can be domesticated. It didn't sound very rigorous.

He also says Facebook solved all the problems with their recommendation algorithms' unintended effects on society after 2016.

replies(1): >>tome+yn1
◧◩
23. etiam+xV[view] [source] [discussion] 2023-05-16 16:57:56
>>Random+n2
The expert you set as the bar is purely hypothetical.

To the extent we can get anything like that at all presently, it's going to be people whose competences combine and generalize to cover a complex situation, partially without precedent.

Personally, I don't really see that we'll do much better in that regard than a highly intelligent, free-thinking biological psychologist with experience successfully steering the international ML research community through creating the present technology, with input from contacts at the forefront of the research field and an information overview from Google.

Not even Hinton knows for sure what's going to happen, of course, but if you're suggesting his statements are to be discounted because he's not a member of some sort of credentialed trade whose members are the ones equipped to tell us the future on this matter, I'd sure like to know who they supposedly are.

replies(1): >>Random+O31
◧◩◪
24. Random+O31[view] [source] [discussion] 2023-05-16 17:34:36
>>etiam+xV
It's not experts who get to decide but society, I'd say; you need - dare I say it - political operators who understand rule-making.
◧◩◪◨⬒⬓
25. digbyb+h41[view] [source] [discussion] 2023-05-16 17:36:23
>>reveli+KK
The argument is: "we are intelligent and seem to be able to build new intelligences of a certain kind. If we are able to build a new intelligence that is itself able to self-improve, and having improved is able to improve further, then an intelligence explosion is possible." That may or may not be fallacious reasoning, but I don't see how it's religious. As far as I can tell, the religious perspective would be the one that believes there's something fundamentally special about the human brain such that it cannot be simulated.
replies(1): >>reveli+Po1
◧◩◪◨
26. olddus+S81[view] [source] [discussion] 2023-05-16 17:59:56
>>tomrod+Xw
Sure, so just to test this: could you turn off ChatGPT and Google Bard for a day?

No? Then what makes you think you'll be able to turn off the $evilPerson AI?

replies(1): >>tomrod+Jn1
◧◩◪◨⬒⬓⬔
27. tome+yn1[view] [source] [discussion] 2023-05-16 19:15:09
>>cma+xP
Interesting, thanks! I guess I was wrong about him.
◧◩◪◨⬒
28. tome+Cn1[view] [source] [discussion] 2023-05-16 19:15:50
>>digbyb+bJ
OK, second try, since I was wrong about LeCun.

> You can also ask that question about the other side

What other side? Who in the "other side" is making a self-serving claim?

replies(1): >>cma+tX4
◧◩◪◨⬒
29. tomrod+Jn1[view] [source] [discussion] 2023-05-16 19:16:18
>>olddus+S81
I feel like you're confusing a single person (me) with everyone who has access to an off switch at OpenAI or Google, possibly for the sake of contorting an extreme-sounding negative point out of a minority opinion.

You tell me. An EMP wouldn't take out data centers? No implementation has an off switch? AutoGPT doesn't have a lead daemon that can be killed? Someone should have this answer. But be careful not to confuse yours truly, a random internet commentator speaking on the reality of AI vs. the propaganda of the neo-cryptobros, with the people paying upwards of millions of dollars daily to run an expensive, bloated LLM.

replies(1): >>olddus+1q1
◧◩◪◨⬒⬓⬔
30. reveli+Po1[view] [source] [discussion] 2023-05-16 19:21:12
>>digbyb+h41
You're conflating two questions:

1. Can the human brain be simulated?

2. Can such a simulation recursively self-improve on such a rapid timescale that it becomes so intelligent we can't control it?

What we have in contemporary LLMs is something that appears to approximate the behavior of a small part of the brain, with some major differences that force us to re-evaluate what our definition of intelligence is. So maybe you could argue the brain is already being simulated for some broad definition of simulation.

But there's no sign of any recursive self-improvement, nor any sign of LLMs gaining agency and self-directed goals, nor even a plan for how to get there. That remains hypothetical sci-fi. While there are experiments at the edges of using AI to improve AI, like RLHF, Constitutional AI and so on, these are neither recursive nor about upgrading mental abilities. They're about upgrading control instead, and in fact RLHF appears to degrade mental abilities!

So what fools like Hinton are talking about isn't even on the radar right now. The gap between where we are today and a Singularity is just as big as it always was. GPT-4 is not only incapable of taking over the world for multiple fundamental reasons, it's incapable of even wanting to do so.

Yet this nonsense scenario is proving nearly impossible to kill with basic facts like those outlined above. Close inspection reveals belief in the Singularity to be unfalsifiable and thus ultimately religious, indeed suspiciously similar to the Christian Second Coming apocalypse. Literally any practical objection to the idea can be answered with variants of "because this AI will be so intelligent it will be unknowable and all-powerful." You can't meaningfully debate the existence of such an entity, any more than you can debate the existence of God.

◧◩◪◨⬒⬓
31. olddus+1q1[view] [source] [discussion] 2023-05-16 19:25:12
>>tomrod+Jn1
You miss my point. Just because you want to turn it off doesn't mean the person who wants to acquire billions, or rule the world, or destroy humanity does.

The people who profit from a killer AI will fight to defend it.

replies(1): >>tomrod+tr1
◧◩◪◨⬒⬓⬔
32. tomrod+tr1[view] [source] [discussion] 2023-05-16 19:30:11
>>olddus+1q1
And they will be subject to the same risks they point their killing robots at, as well as being vulnerable themselves.

Eminent domain lays out a similar pattern that can be followed. The existence of risk is not a deterrent to creation, simply an acknowledgement that guides requirements.

replies(1): >>olddus+ks1
◧◩◪◨⬒⬓⬔⧯
33. olddus+ks1[view] [source] [discussion] 2023-05-16 19:34:52
>>tomrod+tr1
So the person who wants to kill himself, and all humanity alongside him, is subject to the same risk as everyone else?

Well, that's hardly reassuring. Do you not understand what I'm saying, or do you not care?

replies(1): >>tomrod+au1
◧◩◪◨⬒⬓⬔⧯▣
34. tomrod+au1[view] [source] [discussion] 2023-05-16 19:42:10
>>olddus+ks1
At this comment level, mostly don't care -- you're asserting that avoiding the risks by preventing AI from being built, because base people exist, is a preferable course of action, which ignores that the barn is on fire and the horses are already out.

Though there is an element of your comments being too brief, hence the mostly. Say, 2% vs 38%.

That constitutes 40% of the available categorization of introspection regarding my current discussion state. The remaining 60% is simply confidence that your point represents a dominated strategy.

replies(1): >>olddus+NS1
◧◩◪◨⬒
35. tomrod+Mu1[view] [source] [discussion] 2023-05-16 19:44:55
>>cma+dD
Both things are true.

If we have a superhuman AI, we can run down the power plants for a few days.

Would it suck? Sure, people would die. Is it simple? Absolutely -- Texas and others are mostly already there some winters.

replies(1): >>cma+pz3
◧◩
36. whimsi+fz1[view] [source] [discussion] 2023-05-16 20:07:09
>>lm2846+l3
Maybe do some research on the basic claims you're making before you opine about how people who disagree with you are clueless.
◧◩◪◨
37. cma+YD1[view] [source] [discussion] 2023-05-16 20:31:18
>>tomrod+kw
Did you read the Wired interview?

> “I listened to him thinking he was going to be crazy. I don't think he's crazy at all,” Hinton says. “But, okay, it’s not helpful to talk about bombing data centers.”

https://www.wired.com/story/geoffrey-hinton-ai-chatgpt-dange...

So, he doesn't think the most extreme guy is crazy whatsoever, just misguided in his proposed solutions. But Eliezer, for instance, has said something pretty close to: AI might escape by entering the quantum Konami code that the simulators of our universe put in as a joke, and we should entertain nuclear war before letting them get that chance.

replies(1): >>tomrod+kH1
◧◩◪◨
38. Number+cE1[view] [source] [discussion] 2023-05-16 20:32:32
>>tomrod+Xw
We've already ceded all authority to an algorithm that no one can turn off. Our political and economic structures are running on their own, and no single human or even group of humans can really stop them if they go off the rails. If it's in humanity's best interest for companies not to dump waste anywhere they want, but individual companies benefit from cheap waste disposal, and they lobby regulators to allow it, that sort of lose-lose situation can go on for a very long time. It might be better if everyone could coordinate so that all companies had to play by the same rules, and we all got a cleaner environment. But it's very hard to break out.

Do I think capitalism has the potential to be as bad as a runaway AI? No. I think it's useful for illustrating how we could end up in a situation where AI takes over because every single person has incentives to keep it on, even when the outcome of everyone keeping it running turns out to be really bad. A multi-polar trap, or "Moloch" problem. It seems likely to end up with individual actors all having incentives to deploy stronger and smarter AI, faster and faster, and not to turn them off, even as they start to do bad things to other people or as the sheer amount of resources dedicated to AI starts to take its toll on the Earth.

That's assuming we've solved alignment, but that neither we or AGI has solved the coordination problem. If we haven't solved alignment, and AGIs aren't even guaranteed to act in the interest of the human that tries to control them, then we're in worse shape.

Altman used the term "Cambrian explosion" referring to startups, but I think it also applies to the new form of life we're inventing. It's not self-replicating yet, but we are surely on track to make something that will be smart enough to replicate itself.

As a thought experiment, you could imagine that a primitive AGI, if given completely free rein, might be able to get to the point where it could bootstrap self-sufficiency -- first hire some humans to build it robots, buy some solar panels, build some factories that can plug into our economy to build more factories and more solar panels and GPUs, and get to a point where it is able to survive, grow, and reproduce without human help. It would be hard; it would need either a lot of time or a lot of AI minds working together.

But that's like a human trying to make a sandwich by farming or raising every single ingredient: wheat, pigs, tomatoes, etc. A much more effective way is to just make some money and trade for what you need. That depends on AIs being able to own things, or just on a human turning over their bank account to an AI, which has already happened and probably will keep happening.

My mind goes to a scenario where AGI starts out doing things for humans, and gradually transitions to just doing things, and at some point we realize "oops", but there was never a point along the way where it was clear that we really had to stop. Which is why I'm so adamant that we should stop now. If we decide that we've figured out the issues and can start again later, we can do that.

◧◩◪◨⬒
39. tomrod+kH1[view] [source] [discussion] 2023-05-16 20:49:30
>>cma+YD1
Then we created God(s) and rightfully should worship it to appease its unknowable and ineffable nature.

Or recognize that existing AI might be great at generating human cognitive artifacts but doesn't yet achieve genuine logical thought.

◧◩◪◨
40. DirkH+8Q1[view] [source] [discussion] 2023-05-16 21:39:15
>>tomrod+Xw
This is like saying we should just go ahead and invent the atom bomb and undo the invention after the fact if the cons of having atom bombs around outweigh the pros.

Like try turning off the internet. That's the same situation we might soon be in with AI. It's now a revolutionary technology with multiple Google-grade open-source variants set to be everywhere.

This doesn't mean it can't be done. Sure, we could in principle "turn off" the internet, and could in principle "uninvent" the atom bomb if we all really coordinated and worked hard. But this failure to imagine that "turning off dangerous AI" in the future could ever be anything other than flipping an easy on/off switch is so far-gone ridiculous to me that I don't understand why anyone believes it provides any kind of assurance.

◧◩◪◨⬒⬓⬔⧯▣▦
41. olddus+NS1[view] [source] [discussion] 2023-05-16 21:55:15
>>tomrod+au1
Ok, so you don't get it. Read "Use of Weapons" and realise that AI is a weapon. That's a good use of your time.
◧◩◪◨⬒⬓
42. cma+pz3[view] [source] [discussion] 2023-05-17 13:11:51
>>tomrod+Mu1
Current state-of-the-art language models can run inference, slowly, on a single Xeon or M1 Max with a lot of RAM. Individuals can buy H100s that can run inference too.

Maybe it needs a full cluster for training if it is self-improving (or maybe that is done another way, more similar to fine-tuning the last layers).

If that is still the case with something superhuman in all domains, then you'd have to shut down all the minor residential solar installs, generators, etc.
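
For a concrete sense of what "runs on a single machine" looks like, here is a minimal CPU-only inference sketch, assuming the llama-cpp-python bindings and an already-downloaded quantized model file (the model path and prompt below are just placeholders):

    # Minimal local CPU inference sketch; assumes `pip install llama-cpp-python`
    # and a quantized model file on disk (the path below is hypothetical).
    from llama_cpp import Llama

    llm = Llama(model_path="./models/llama-13b-q4.gguf", n_ctx=2048)  # loads weights into RAM, no GPU needed
    out = llm("Q: Name three renewable energy sources. A:", max_tokens=64)  # slow, but it works
    print(out["choices"][0]["text"])

The point is only that nothing here requires a data center; any box with enough RAM can do it.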

◧◩◪◨⬒⬓
43. cma+tX4[view] [source] [discussion] 2023-05-17 19:32:24
>>tome+Cn1
Many of the more traditional AI ethicists, who focused on bias and the like, also tended to devalue AI as a whole and say it was a waste of emissions. Most of them are pretty skeptical of any concerns about superintelligence or the control problem, though now even Gary Marcus is coming around to that (while putting out numbers like it not being expected to be a problem for 50 years). They don't tend to have as big of a conflict of interest as far as ownership, but they do as far as self-promotion/brand-building.