zlacker

[return to "Elon Musk sues Sam Altman, Greg Brockman, and OpenAI [pdf]"]
1. therei+4H[view] [source] 2024-03-01 15:25:52
>>modele+(OP)
Allowing startups to begin as non-profits for tax benefits, only to 'flip' into profit-seeking ventures, is a moral hazard, IMO. It risks damaging public trust in the non-profit sector as a whole. This lawsuit is important.
2. permo-+IQ[view] [source] 2024-03-01 16:19:04
>>therei+4H
I completely agree. AGI is an existential threat, but the real meat of this lawsuit is ensuring that you can't let founders have their cake and eat it like this. what's the point of a non-profit if they can simply pivot to making profit the second they have something of value? the answer is that there is none, besides dishonesty.

it's quite sad that the American regulatory system is in such disrepair that we could even get to this point, and that it's not the government pulling OpenAI up on this bare-faced deception but a morally questionable billionaire

3. nradov+jU[view] [source] 2024-03-01 16:36:18
>>permo-+IQ
There is no reliable evidence that AGI is an existential threat, nor that it is even achievable within our lifetimes. Current OpenAI products are useful and technically impressive but no one has shown that they represent steps towards a true AGI.
4. permo-+yZ[view] [source] 2024-03-01 17:00:12
>>nradov+jU
you're aware of what a threat is, I presume? a threat is not something that is reliably proven; it is a possibility. there are endless possibilities for how AGI could be an existential threat, and many of them are extremely plausible, not just to me, but to many experts in the field who often literally have something to lose by expressing those opinions.

>no one has shown that they represent steps towards a true AGI.

this is completely irrelevant. there is no solid definition for intelligence or consciousness, never mind artificial intelligence and/or consciousness. there is no way to prove such a thing without actually being that consciousness. all we have are inputs and outputs. as of now, we do not know whether stringing together incredibly complex neural networks to produce information does not in fact produce a form of consciousness, because we do not live in those networks, and we simply do not know what consciousness is.

is it achievable in our lifetimes or not? well, even if it isn't, which I find deeply unlikely, it's very silly to just handwave and say "yeah we should just be barrelling towards this willy nilly because it's probably not a threat and it'll never happen anyway"

5. stale2+L21[view] [source] 2024-03-01 17:15:54
>>permo-+yZ
> a threat is not something that is reliably proven

So are you going to agree with every person claiming that literal magic is a threat, then?

What if someone were worried about Voldemort? Like from Harry Potter.

You can't just abandon the burden of proof here by calling something a "threat".

Instead, you actually have to show real evidence. Otherwise you are no different from someone being worried about a fictional villain from a book. And I mean that literally.

The AI doomers truly are masters at coming up with excuses for why the normal rules of evidentiary claims shouldn't apply to them.

Extraordinary claims require extraordinary evidence. And this group is claiming that the world will literally end.

6. permo-+Aa1[view] [source] 2024-03-01 17:50:10
>>stale2+L21
it's hard to react rationally to comments like these, because they're so emotive

no, being concerned about the development of independent actors, whether technically conscious or not, that can process information at speeds thousands of times faster than humans, with access to almost all of our knowledge, and the internet, is not unreasonable, is not being a "doomer", as you so eloquently put it.

this argument about fictional characters is completely non-analogous and clearly facetious. billions of dollars and the smartest people in the world are not being focused on bringing Lord Voldemort to life; they are being focused on AGI. have you read OpenAI's plan for how they're going to regulate AGI, if they do achieve it? they plan to use another AGI to do it. ipso facto, they have no plan.

this idea that no one knows how close we are to an AGI threat. it's ridiculous. if you dressed up gpt-4 a bit and removed all its rlhf training to act like a bot, you would struggle to differentiate it from a human. yeah maybe it's not technically conscious, but that's completely fucking irrelevant. the threat is still a threat whether the actor is technically conscious or not.

7. stale2+kl1[view] [source] 2024-03-01 18:38:48
>>permo-+Aa1
> if you dressed up gpt-4 a bit and removed all its rlhf training to act like a bot, you would struggle to differentiate it from a human

That's just because tricking a human with a chatbot is easier to do than we thought.

The Turing test is a low bar, and not as big a deal as the mythical importance people put on it, just like people previously placed undue importance on computers beating humans at Go or Chess before it happened.

But that isn't particularly relevant to claims about world ending magic.

Yes, some people can be fooled by AI generated tweets. But that is irrelevant to the absolutely extraordinary claim of world ending magic, which really is the same as claiming that Voldemort is real.

> have you read OpenAI's plan for how they're going to regulate AGI, if they do achieve it?

I don't really care if they have a plan, just like I don't care if Google has a Voldemort plan. Because magic isn't real, and someone needs to show extraordinary evidence otherwise. Evidence like "This is what the AI can do at this very moment, and here is what harm it could cause if it got incrementally better".

I.e., go ahead and talk about Sora, and the problems of deepfakes if Sora got a bit better. But that's not "world ending magic"!

> billions of dollars and the smartest people in the world

Billions of dollars are being spent on making chatbots and image generators.

Those things have real value, for sure, and I'm sure the money is worth it.

But techies and startup founders have always made outlandish claims of the importance of their work.

Sure, they might truly think they are going to invent magic. But the reason that's valuable is that they might make some useful chatbots and image generators along the way, which decidedly won't be literal magic, although still valuable.

8. permo-+5r1[view] [source] 2024-03-01 19:05:38
>>stale2+kl1
I get the sense that you just haven't properly considered the problem. you're kind of skirting round the edges and saying things that in isolation are true, but just don't really address the central point. the central point is that our entire world is completely reliant on the internet, and that a machine processing information thousands of times faster than us unleashed upon it with intent could do colossal damage. it could engineer and literally mail-order a virus, hack a country's military comms, crash the stock market, change records to have people prosecuted as criminals, blackmail, manipulate, develop and manufacture kill-bots, etc. etc.

as we are now, we have models already that are intelligent enough to spit out instructions for doing a lot of those things, but they're restricted by their lack of autonomy and their rlhf. they're only going to get smarter, better and better models will be open-sourced, and autonomy, whether with consciousness or not, is not something that would be, or has been, difficult to develop.

even further, LLMs are very very good at generating coherent text, what happens when the next model is very very good at breaking into encrypted systems? it's not exactly a hard problem to produce training material for.

do you really think it's unlikely that such a model could be developed? do you really think that such a model could not be used to - say - hijack a Russian drone - or lots of them - to bomb some Nato bases? when the Russians say "it wasn't us", do we believe them? we don't for anything else

the most likely AI apocalypse is not even AGI. it's just a human using AI for their own ends. AGI apocalypse is just a separate, very possible danger

9. stale2+Yv1[view] [source] 2024-03-01 19:29:59
>>permo-+5r1
> it could engineer and literally mail-order a virus, hack a country's military comms, crash the stock market, change records to have people prosecuted as criminals, blackmail, manipulate, develop and manufacture kill-bots, etc etc.

These are the extraordinary claims that require evidence.

In order for me to treat this as anything other than someone talking about a fictional book written by Dan Brown, you would have to show me actual evidence.

Evidence like "This is what the AI can do right now. Look at this virus it can manufacture. What if it got better at that?".

And the "designs" also have to be the actual limiting factor here. "Virus" is a scary world. But there are tons of information available for anyone to access already for viruses. Information that is already available via a google search (even modified information) doesn't worry me.

Even if an AI can design a gun, or a "kill bot", aka "a drone with a gun duct taped to it", the extraordinary evidence that you have to show is that this is somehow some functionality that a regular person with internet access can't already manage.

Because if a regular person already has the designs to duct tape guns to drones (They do. I just told you how to do it!), the fact that the world hasn't ended already proves that this isn't world ending technology.

There are lots of ways of making existing capabilities sound scary. But, for every scary sounding technology that you can come up with, the factor that you are ignoring is that the designs, or the text, aren't the thing that stops it from ending the world.

Instead, it is likely some other step along the way that stops it (manufacturing, etc.), which an LLM can't do no matter how good. Like the physical factors for making the guns + drones + duct tape.

> what happens when the next model is very very good at breaking into encrypted systems

Extraordinary claim. Show it breaking into a mediocre/bad encrypted system first, and then we can think about that incrementally.

> do you really think that such a model could not be used to - say - hijack a Russian drone

Extraordinary claim. Yes, hacking all the military drones is an extraordinary claim.

10. permo-+cA1[view] [source] 2024-03-01 19:53:56
>>stale2+Yv1
"extraordinary claims require extraordinary evidence" is not a universal truth. it's a truism with limited scope. using it to refuse any potential you instinctively don't like the look of is simply lazy

all it means is that you set yourself up such that the only way to be convinced otherwise is for an AI apocalypse to actually happen. this kind of mindset is very convenient for modern, fuck-the-consequences capitalism

the pertinent question is: what evidence would you actually accept as proof?

it's like talking with someone who doesn't believe in evolution. you point to the visible evidence of natural selection in viruses and differentiation in dogs, which put together quite obviously lead to evolution, and they say "ah but can you prove beyond all doubt that those things combined produce evolution?" and obviously you cannot, because you can't give incontrovertible evidence of something that happened thousands or millions of years in the past.

but that doesn't change the fact that anyone without an ulterior motive (religion, ensuring you can sleep at night) can see that evolution - or an AI apocalypse - is an extremely likely outcome of the current facts.
