zlacker

[parent] [thread] 8 comments
1. mejuto+(OP)[view] [source] 2023-07-05 23:11:55
Your arguments apply to other fields, like genetic modification, yet there they do not lead to the same conclusions.

Your post appeals to science and logic, yet it makes huge assumptions. Other posters mention how an AI would interface with the physical world. While we all know cool cases like Stuxnet, robotics has serious limitations, and not everything is connected online, much less without a physical override.

As a thought experiment, let's consider a similar past case: self-driving optimism. Many were convinced it was around the corner. Many times I heard the argument that "a few deaths were OK" because overall self-driving would cause fewer accidents, an argument in favor of preventable deaths based on an unfounded tech belief. Yet nowadays 100% self-driving has stalled for legal and political reasons.

AI actions could similarly be legally attributed to a corporation or individual, as we do with other tools like knives or cranes, for example.

IMHO, for all the talk about rationality, tech fetishism is rampant, and there is nothing scientific about it. Many people want to play with shiny toys, consequences be damned. Let's not pretend that is peak science.

replies(2): >>nopins+iQ >>arisAl+x31
2. nopins+iQ[view] [source] 2023-07-06 05:43:59
>>mejuto+(OP)
Genetic modifications could potentially cause havoc in the long run as well, but it's much more likely we have time to detect and thwart their threats. The major difference is speed.

Even if we knew how to create a new species of superintelligent humans with goals misaligned with the rest of humanity, it would take them decades to accumulate knowledge, propagate themselves to reach a sufficient number, and take control of enough resources to pose critical dangers to the rest.

Such constraints are not applicable to superintelligent AIs with access to the internet.

replies(1): >>mejuto+3Z
◧◩
3. mejuto+3Z[view] [source] [discussion] 2023-07-06 07:03:31
>>nopins+iQ
Counterexample: Covid.

Assumptions:

- That genetic modification, to be a danger, needs to take the form of a large number of smart humans (where did that come from?)

- That AI is not physically constrained

> it's much more likely we have time to detect and thwart their threats.

Why? Counterexample: Covid.

> Even if we knew how to create a new species of superintelligent humans who have goals misaligned with the rest of humanity, it would take them decades to accumulate knowledge, propagate themselves to reach a sufficient number, and take control of resources, to pose critical dangers to the rest.

Why insist on something superintelligent and human, and in sufficient numbers? A simple virus could be a critical danger.

replies(1): >>nopins+mD4
4. arisAl+x31[view] [source] 2023-07-06 07:45:27
>>mejuto+(OP)
But wait, you are making my argument:

1) Progress was stopped due to regulation, which is exactly what we are saying is needed.

2) That was done after a few deaths.

3) We agree that self-driving can be done but it is currently stalled. Likewise, we do not disagree that AGI is possible, right?

We do not have the luxury of a few deaths from a rogue AI, because it may be the end.

replies(1): >>mejuto+N61
◧◩
5. mejuto+N61[view] [source] [discussion] 2023-07-06 08:13:24
>>arisAl+x31
I do not think you made those arguments before.

I agree in spirit with the person you were responding to. AI lacks the physicality to be a real danger. It can be a danger because of bias or concentration of power (which is what the regulations are really doing: regulatory capture), but not because AI will paperclip-optimize us. People or corporations using AI will still be legally responsible (as with cars, or a hammer).

It lacks the physicality for that, and we can always pull the plug. AI is another tool people will use. Even now it is neutered to not give bad advice, etc.

These fantasies about AGI are distracting us (again agreeing with OP here) from the real issues of inequality and bias that the tool perpetuates.

replies(1): >>arisAl+wa1
◧◩◪
6. arisAl+wa1[view] [source] [discussion] 2023-07-06 08:42:45
>>mejuto+N61
> and we can always pull the plug.

No, we can't, and there is a humongous amount of literature you have not read. As I pointed out in another comment, thinking that you have found a solution by "pulling the plug" while all the top scientists have spent years contemplating the dangers is extremely narcissistic behavior. "Hey guys, did you think about pulling the plug before quitting your jobs, spending years on this, doing interviews, and writing books?"

replies(1): >>mejuto+ne1
◧◩◪◨
7. mejuto+ne1[view] [source] [discussion] 2023-07-06 09:14:34
>>arisAl+wa1
You are appealing to authority (and resorting to ad hominem) without giving an argument.

I respectfully disagree, and will remove myself from this conversation.

replies(1): >>arisAl+2k1
◧◩◪◨⬒
8. arisAl+2k1[view] [source] [discussion] 2023-07-06 10:01:39
>>mejuto+ne1
There is a problem. You say the problem can be solved by X without any proof, while scientists say we do not know how to solve it. You need to prove your extraordinary claim and be 100% certain; otherwise your children die.
◧◩◪
9. nopins+mD4[view] [source] [discussion] 2023-07-07 02:34:49
>>mejuto+3Z
We do have regulations and laws to control genetic modification of pathogens. It is done in highly secure labs, and access is not widely available.

If a pathogen more deadly than Covid started to spread, e.g. Ebola or smallpox, we would do more to limit its spread. If it were good at hiding from detection for a while, it could potentially cause a catastrophe, but it most likely would not wipe out humanity, because it is not intelligent and some surviving humans would eventually find a way to thwart it or limit its impact.

A pathogen is also physically constrained by available hosts. Yes, current AI also requires processors, but it's extremely hard or nearly impossible to limit contact with CPUs & GPUs in the modern economy.
