zlacker

[parent] [thread] 77 comments
1. ilaksh+(OP)[view] [source] 2023-07-05 18:14:37
You have to give them credit for putting their money where their mouth is here.

But it's also easy to parody this. I am just imagining Ilya and Jan coming out on stage wearing red capes.

I think George Hotz made sense when he pointed out that the best defense will be having the technology available to everyone rather than a small group. We can at least try to create a collective "digital immune system" against unaligned agents with our own majority of aligned agents.

But I also believe that there isn't any really effective mitigation against superintelligence superseding human decision making aside from just not deploying it. And it doesn't need to be alive or anything to be dangerous. All you need is for a large amount of decision-making for critical systems to be given over to hyperspeed AI and that creates a brittle situation where things like computer viruses can be existential risks. It's something similar to the danger of nuclear weapons.

Even if you just make GPT-4 say 33% smarter and 50 or 100 times faster and more efficient, that can lead to control of industrial and military assets being handed over to these AI agents. Because the agents are so much faster, humans cannot possibly compete, and if you interrupt them to try to give them new instructions then your competitor's AIs race ahead the equivalent of days or weeks of work. This, again, is a precarious situation to be in.

There is huge promise and benefit from making the systems faster, smarter, and more efficient, but in the next few years we may be walking a fine line. We should agree to place some limitation on the performance level of AI hardware that we will design and manufacture.

replies(8): >>Jimthe+B >>goneho+32 >>sagebi+S3 >>arisAl+Uz >>c_cran+kF >>skybri+ii1 >>dlkf+GD1 >>isaacf+252
2. Jimthe+B[view] [source] 2023-07-05 18:16:28
>>ilaksh+(OP)
"Even if you just make GPT-4 say 33% smarter and 50 or 100 times faster and more efficient, that can lead to control of industrial and military assets being handed over to these AI agents."

I call BS on this...it's an LLM...

replies(6): >>crop_r+Q1 >>chaxor+V4 >>Footke+ib >>ben_w+b61 >>reaper+gH1 >>nopins+iO1
◧◩
3. crop_r+Q1[view] [source] [discussion] 2023-07-05 18:21:07
>>Jimthe+B
Saying it's "an LLM" doesn't change the impact. GPT-4 is an LLM, and so are many others, ranging from toy quality to GPT-3.5. It is very clear GPT-4 is much better. If there is another jump like GPT-4, whether it is an LLM or not, its impact will be huge.
replies(2): >>esafak+H5 >>woadwa+ZM
4. goneho+32[view] [source] 2023-07-05 18:22:10
>>ilaksh+(OP)
The recent paper about using GPT-4 to give more insight into its actual internals was interesting, but yeah, the risk seems really high at the moment that we'd accidentally develop unaligned AGI before figuring out alignment.

Out of the options to reduce that risk I think it would really take something like this, which also seems extremely unlikely to actually happen given the coordination problem: https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-no...

You talk about aligned agents - but there aren't any today and we don't know how to make them. It wouldn't be aligned agents vs. unaligned, it's only unaligned.

I don't think spreading out the tech reduces the risk. Spreading out nuclear weapons doesn't reduce the risk (and with nukes, at least, it's a lot easier to control the fissionable materials). Even with nukes you can still create them and decide not to use them; that's not so true with superintelligent AGI.

If anyone could have made nukes from their computer, humanity might not have made it.

I'm glad OpenAI understands the severity of the problem though and is at least trying to solve it in time.

replies(2): >>lukesc+QH >>Dennis+iJ
5. sagebi+S3[view] [source] 2023-07-05 18:29:27
>>ilaksh+(OP)
It’s not money where their mouth is…

It’s paying a cost of doing business: the minimum theater required to minimize expected regulatory cost.

They want to own the safety issue so they can risk your life for their profit.

replies(1): >>janals+2b2
◧◩
6. chaxor+V4[view] [source] [discussion] 2023-07-05 18:33:02
>>Jimthe+B
It's important to recognize that the model is fully capable of operating in open-world environments, with visual stimuli and motor output, to achieve high-level tasks. This has been demonstrated in proofs of concept several times now with systems such as Voyager et al. So while there are certainly some details that are important, many of them are the annoyances that we devs deal with all the time (how to connect the various parts of a system properly, etc.); the fundamental expressive capabilities of these models are not that limited. They are certainly limited in some sense (as seen in the several papers applying category-theoretic arguments to transformers), but for many engineering applications in the world, these models are very capable and useful.

Guarantees of correctness and safety are obviously of huge concern, hence the main article. But it's absolutely not unreasonable to see these models enabling humanoid robots capable of various day-to-day activities and work.

replies(3): >>Dennis+8v >>jgalt2+r91 >>hgsgm+Zh1
◧◩◪
7. esafak+H5[view] [source] [discussion] 2023-07-05 18:35:22
>>crop_r+Q1
Plus the next thing might not be an LLM.
◧◩
8. Footke+ib[view] [source] [discussion] 2023-07-05 18:56:09
>>Jimthe+B
Military command and control is already performed via input and output of token streams.
◧◩◪
9. Dennis+8v[view] [source] [discussion] 2023-07-05 20:24:17
>>chaxor+V4
To save others the trouble, I googled Voyager; it's pretty interesting. I had no idea an LLM could do this sort of thing:

https://voyager.minedojo.org/

replies(2): >>famous+9C >>yldedl+Z12
10. arisAl+Uz[view] [source] 2023-07-05 20:48:20
>>ilaksh+(OP)
"And it doesn't need to be alive or anything to be dangerous"

Why are tech people stuck in the now and not future-looking?

replies(1): >>ilaksh+GK
◧◩◪◨
11. famous+9C[view] [source] [discussion] 2023-07-05 20:58:44
>>Dennis+8v
Other examples (in the real world) you might find interesting:

https://tidybot.cs.princeton.edu/

https://innermonologue.github.io/

https://palm-e.github.io/

https://www.microsoft.com/en-us/research/group/autonomous-sy...

replies(1): >>Animat+VE
◧◩◪◨⬒
12. Animat+VE[view] [source] [discussion] 2023-07-05 21:11:31
>>famous+9C
> https://palm-e.github.io/

The alignment problem will come up when the robot control system notices that the guy with the stick is interfering with the robot's goals.

replies(1): >>c_cran+DG
13. c_cran+kF[view] [source] 2023-07-05 21:13:22
>>ilaksh+(OP)
Control of military and industrial assets won't be handed willy-nilly to AIs, given the threat of legal liability for any mistakes the AI could make. Their tendency to make things up is well known by now.
replies(2): >>mhb+oW >>sdento+r01
◧◩◪◨⬒⬓
14. c_cran+DG[view] [source] [discussion] 2023-07-05 21:18:45
>>Animat+VE
A robot control system without a mechanical override in favor of the stick is a poor one indeed.
◧◩
15. lukesc+QH[view] [source] [discussion] 2023-07-05 21:24:20
>>goneho+32
Unaligned doesn't really seem like it should be a threat. If it's unaligned it can't work toward any goal. The danger is that it aligns with some anti-goal. If you've got a bunch of agents all working unaligned, they will work at cross-purposes and won't be able to out-think us.
replies(3): >>jdasdf+aK >>ALittl+GL >>babysh+xo1
◧◩
16. Dennis+iJ[view] [source] [discussion] 2023-07-05 21:31:59
>>goneho+32
What is the "recent paper about using gpt-4 to give more insight into its actual internals?"
replies(1): >>famous+6S
◧◩◪
17. jdasdf+aK[view] [source] [discussion] 2023-07-05 21:36:57
>>lukesc+QH
This is a misunderstanding of what AI alignment problems are all about.

Alignment != capability

Think of a paperclip-maximizing robot that, in its process of creating paperclips, kills everyone on earth to turn them into paperclips.

replies(2): >>climat+si1 >>lukesc+Ew2
◧◩
18. ilaksh+GK[view] [source] [discussion] 2023-07-05 21:39:26
>>arisAl+Uz
I just think it's much easier to convince people that existing types of AIs will get somewhat smarter and significantly faster. And that's dangerous enough.

My own belief is that regardless of what we do in terms of the most immediate dangers, within one or two centuries (maximum) we will enter the posthuman era where digital intelligent life has taken control of the planet. I don't mean "posthuman" as in all of the humans have been killed (necessarily), just that what humans 1.0 do won't be very important or interesting relative to what the superintelligent AIs are doing.

I don't think there is anything that prevents people from giving AI all of the characteristics of animals (such as humans). I think it's foolish, but researchers seem determined to do it.

But this is fairly speculative and much harder to convince people of.

replies(1): >>c_cran+2L
◧◩◪
19. c_cran+2L[view] [source] [discussion] 2023-07-05 21:41:35
>>ilaksh+GK
If the value of superintelligence is to lead to an Age of Em scenario where AIs (or Ems) do most of the intellectual labor, the reality is still that they would be doing this labor in service of humans. I could see a scenario where it is done in service of the AIs instead, but it would look nothing like the existential risk stuff bandied about by these weenies.
replies(2): >>Footke+sB1 >>flagra+mD1
◧◩◪
20. ALittl+GL[view] [source] [discussion] 2023-07-05 21:45:21
>>lukesc+QH
Alignment is about agreement with human preferences and desires, not internal consistency. e.g. An AI that wanted to exterminate humanity could work towards that goal, but it would be unaligned (unaligned with humanity). Alignment is basically making sure humanity is fine with what the AI does.
replies(1): >>hgsgm+Bh1
◧◩◪
21. woadwa+ZM[view] [source] [discussion] 2023-07-05 21:52:22
>>crop_r+Q1
Meanwhile, GPT-4 still can’t reliably multiply small numbers.

https://arxiv.org/abs/2304.02015

replies(4): >>fprott+WP >>mhb+bV >>famous+KV >>Camper+sX
◧◩◪◨
22. fprott+WP[view] [source] [discussion] 2023-07-05 22:08:56
>>woadwa+ZM
A minor inconvenience when GPT-4 has no problem learning how to use a code interpreter.
◧◩◪
23. famous+6S[view] [source] [discussion] 2023-07-05 22:20:38
>>Dennis+iJ
https://openai.com/research/language-models-can-explain-neur...
◧◩◪◨
24. mhb+bV[view] [source] [discussion] 2023-07-05 22:38:36
>>woadwa+ZM
Do you find it comforting when a system whose objective is to complete the next word is, as an emergent property, able to make drawings?
replies(1): >>Strict+731
◧◩◪◨
25. famous+KV[view] [source] [discussion] 2023-07-05 22:41:37
>>woadwa+ZM
It's alright with algorithmic prompts - https://arxiv.org/abs/2211.09066

Also, it knows when to use a calculator if it has access to one, so it's not a big deal.
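
For the curious, here's a minimal sketch of the "give it a calculator" pattern. The llm() function is a hypothetical stand-in for a real chat-completion call, and the CALC: convention is made up for this example; the point is just the dispatch step where the model asks for a tool and the host runs it and feeds the result back:

    import ast
    import operator

    # Hypothetical stand-in for a chat-completion call. A real implementation
    # would hit an LLM API and return either a final answer or a "CALC: ..." line.
    def llm(prompt: str) -> str:
        if "Calculator result" in prompt:
            return "123 * 456 = 56088."
        return "CALC: 123 * 456"

    # Tiny safe evaluator for the arithmetic the model asks the host to compute.
    OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
           ast.Mult: operator.mul, ast.Div: operator.truediv}

    def calc(expr: str):
        def ev(node):
            if isinstance(node, ast.BinOp):
                return OPS[type(node.op)](ev(node.left), ev(node.right))
            if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
                return node.value
            raise ValueError("unsupported expression")
        return ev(ast.parse(expr, mode="eval").body)

    def answer(question: str) -> str:
        reply = llm("Answer the question. If arithmetic is needed, reply with "
                    "'CALC: <expression>' instead of guessing.\nQ: " + question)
        if reply.startswith("CALC:"):
            result = calc(reply[len("CALC:"):].strip())
            # Feed the tool output back so the model can phrase the final answer.
            reply = llm("Q: " + question + "\nCalculator result: " + str(result) + "\nFinal answer:")
        return reply

    print(answer("What is 123 * 456?"))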

◧◩
26. mhb+oW[view] [source] [discussion] 2023-07-05 22:45:42
>>c_cran+kF
You've been spewing out nonsense at an impressive pace. Stop digging. Read more, write less.
◧◩◪◨
27. Camper+sX[view] [source] [discussion] 2023-07-05 22:52:34
>>woadwa+ZM
"This Apple II is useless. It can't even run Crysis."
◧◩
28. sdento+r01[view] [source] [discussion] 2023-07-05 23:10:33
>>c_cran+kF
And yet the military regularly hands machine guns to 18 year-olds...
replies(1): >>c_cran+HA2
◧◩◪◨⬒
29. Strict+731[view] [source] [discussion] 2023-07-05 23:25:45
>>mhb+bV
Imagine you meet a human who is eloquent, expressive, speaks ten languages, can pass the bar or the medical board exams easily, but who cannot reliably distinguish between truth and falsehood on the smallest of questions ("what is 6x9? 42") and has no persistent memory or sense of self.

Would you be "comforted" that this mega-genius is worse at arithmetic than you are and doesn't remember what it did yesterday?

Probably not. You might well be worried that this weird psychopath is going to get a medical license and cut the wrong number of fingers off of a whole bunch of patients.

replies(1): >>mhb+441
◧◩◪◨⬒⬓
30. mhb+441[view] [source] [discussion] 2023-07-05 23:32:45
>>Strict+731
We're agreeing, aren't we?
◧◩
31. ben_w+b61[view] [source] [discussion] 2023-07-05 23:45:52
>>Jimthe+B
It's autocomplete on steroids…

That can guide me through the process of writing a Navier-Stokes simulation…

In a foreign language…

That can be trivially put into a loop and tasked with acting like an agent…

And which is good enough that people are already seriously asking themselves if they need to hire people to do certain tasks…

Why call BS?

It's not perfect, sure, but it's not making a highly regional joke about the Isle of Wight Ferry[0] either.

[0] "What's brown and comes steaming out the back of Cowes?"

replies(1): >>wickof+8c1
◧◩◪
32. jgalt2+r91[view] [source] [discussion] 2023-07-06 00:07:20
>>chaxor+V4
> It's important to recognize that the model is fully capable of operating in open-world environments

How so? If they cannot drive a car?

replies(1): >>chaxor+GB9
◧◩◪
33. wickof+8c1[view] [source] [discussion] 2023-07-06 00:26:54
>>ben_w+b61
But you're also autocomplete (prediction engine) on steroids.

https://www.psy.ox.ac.uk/news/the-brain-is-a-prediction-mach...

replies(1): >>ben_w+132
◧◩◪◨
34. hgsgm+Bh1[view] [source] [discussion] 2023-07-06 01:06:27
>>ALittl+GL
Humanity has more than one alignment...
replies(2): >>reduce+VN1 >>ALittl+aZ1
◧◩◪
35. hgsgm+Zh1[view] [source] [discussion] 2023-07-06 01:09:44
>>chaxor+V4
I don't understand why Voyager benefits from being an LLM, vs a "normal" Neural Net. It's not talking to anyone or learning from text.
replies(1): >>Footke+0B1
36. skybri+ii1[view] [source] 2023-07-06 01:13:07
>>ilaksh+(OP)
> the best defense will be having the technology available to everyone rather than a small group

Proliferation of a dangerous technology is the best defense?

Sure, it's a libertarian meme, but it wouldn't work for nuclear weapons or virus research. Maybe it would make sense here, but the argument needs to be made.

replies(1): >>flagra+MD1
◧◩◪◨
37. climat+si1[view] [source] [discussion] 2023-07-06 01:15:46
>>jdasdf+aK
Corporations like Saudi Aramco are already doing that. You don't need a superintelligent AI; corporations that maximize profit are already sufficient as misaligned superhuman agents.
replies(1): >>nights+Hw1
◧◩◪
38. babysh+xo1[view] [source] [discussion] 2023-07-06 01:57:50
>>lukesc+QH
And I'm less concerned about emergent alignment with an anti-goal (paperclip optimization) than I am with a scenario like ransomware designed by malicious humans using a super AI aligned with an anti-goal.
◧◩◪◨⬒
39. nights+Hw1[view] [source] [discussion] 2023-07-06 02:58:19
>>climat+si1
You can't maximize profit without customers; they must be aligned with someone.
replies(4): >>climat+9x1 >>WinLyc+ZW1 >>janals+l92 >>ben_w+ao3
◧◩◪◨⬒⬓
40. climat+9x1[view] [source] [discussion] 2023-07-06 03:03:02
>>nights+Hw1
They're aligned with the military-industrial complex. The US military is one of the biggest consumers of fossil fuels[1], and it's the same with other nations and their energy use. So being profitable is not the same as being aligned with human values.

1: https://en.m.wikipedia.org/wiki/Energy_usage_of_the_United_S...

replies(1): >>flagra+eC1
◧◩◪◨
41. Footke+0B1[view] [source] [discussion] 2023-07-06 03:30:33
>>hgsgm+Zh1
> We introduce Voyager, the first LLM-powered embodied lifelong learning agent to drive exploration, master a wide range of skills, and make new discoveries continually without human intervention in Minecraft. Voyager is made possible through three key modules: 1) an automatic curriculum that maximizes exploration; 2) a skill library for storing and retrieving complex behaviors; and 3) a new iterative prompting mechanism that generates executable code for embodied control.

It looks like being LLM-based is helpful for generating control scripts and communicating its reasoning. Text seems to provide useful building blocks for higher-order reasoning and behavior. As with humans!
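
As a very rough sketch of how those three modules fit together (nothing like the real Voyager code; llm() is a hypothetical stand-in for the GPT-4 call): prompt for a control program, run it, retry with the error as feedback, and save programs that work into the skill library:

    # Hypothetical stand-in for the LLM call that writes control code.
    def llm(prompt: str) -> str:
        return "def mine_wood(bot):\n    return 'collected 3 logs'"

    skill_library: dict[str, str] = {}  # task name -> code that worked before

    def run_in_env(code: str):
        # Stand-in for executing generated code against the game/robot API.
        namespace: dict = {}
        try:
            exec(code, namespace)
            fn = next(v for k, v in namespace.items() if k != "__builtins__")
            return True, str(fn(bot=None))
        except Exception as e:
            return False, str(e)

    def voyager_step(task: str) -> None:
        feedback = ""
        for _ in range(3):  # iterative prompting: retry with error feedback
            code = llm("Task: " + task + "\nKnown skills: " + str(list(skill_library)) +
                       "\nLast error: " + feedback + "\nWrite a Python function for this task.")
            ok, result = run_in_env(code)
            if ok:
                skill_library[task] = code  # store the working skill for reuse
                print(task + ": " + result)
                return
            feedback = result

    voyager_step("mine wood")  # the automatic curriculum would pick this task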

◧◩◪◨
42. Footke+sB1[view] [source] [discussion] 2023-07-06 03:34:25
>>c_cran+2L
There is no example in our knowledge of any lifeform prioritizing (writ large) the well-being of a different lifeform over its own.
replies(1): >>c_cran+4A2
◧◩◪◨⬒⬓⬔
43. flagra+eC1[view] [source] [discussion] 2023-07-06 03:40:43
>>climat+9x1
> The US military is one of the biggest consumers of fossil fuels

I guess this phrasing is up for debate, but according to the source linked "the DoD would rank 58th in the world" in fossil fuels.

Is that a huge amount of fossil fuel use? Absolutely. But one of the biggest?

replies(1): >>climat+nI1
◧◩◪◨
44. flagra+mD1[view] [source] [discussion] 2023-07-06 03:48:08
>>c_cran+2L
How do you jump to this? What is it that would inherently lead an intelligent species dramatically smarter than us to stay focused on servicing us?

We humans sure didn't do this. We're genetically extremely similar to other primates and yet we destroy their habitats, throw them in zoos, and use them for lab experiments.

replies(1): >>c_cran+mz2
45. dlkf+GD1[view] [source] 2023-07-06 03:49:52
>>ilaksh+(OP)
> Even if you just make GPT-4 say 33% smarter

What is your unit of intelligence?

◧◩
46. flagra+MD1[view] [source] [discussion] 2023-07-06 03:50:39
>>skybri+ii1
There's nothing libertarian about this. Inventing and securing dangerous technology is an affront to individual freedoms. The idea of proliferation as deterrence is totally separate from libertarianism and rooted more in fear than anything else.
replies(1): >>skybri+nf4
◧◩
47. reaper+gH1[view] [source] [discussion] 2023-07-06 04:14:10
>>Jimthe+B
GPT-4 is actually multimodal, not just an LLM. OpenAI just doesn't provide the public with any way to use the image embedding capabilities.
◧◩◪◨⬒⬓⬔⧯
48. climat+nI1[view] [source] [discussion] 2023-07-06 04:21:49
>>flagra+eC1
> According to the 2005 CIA World Factbook, if it were a country, the DoD would rank 34th in the world in average daily oil use, coming in just behind Iraq and just ahead of Sweden.

Sure, the phrasing could be debated, but the fact that it even ranks close to actual nation-states is already problematic. The US military is basically an entire nation-state of its own. This is nothing new if you're old enough to have observed the kind of damage it has done, but it demonstrates my point about profit and alignment. Profits are very often misaligned with human values because war is extremely profitable.

replies(2): >>gregw2+5e2 >>flagra+qe4
◧◩◪◨⬒
49. reduce+VN1[view] [source] [discussion] 2023-07-06 05:21:04
>>hgsgm+Bh1
Ya, why do you think there are alarm bells sounding off everywhere right now…

The capabilities are coming fast. There is no alignment.

replies(1): >>janals+ta2
◧◩
50. nopins+iO1[view] [source] [discussion] 2023-07-06 05:23:05
>>Jimthe+B
LLM is already a misnomer. Many of the latest models are better called LFMs (Large Foundation Models). They have multimodal capabilities. Some can even handle sensory input humans can't.

Another comment already links to demos and papers of LFMs operating robots and agents in 3D environments.

◧◩◪◨⬒⬓
51. WinLyc+ZW1[view] [source] [discussion] 2023-07-06 06:38:45
>>nights+Hw1
Bit of an interesting thought experiment there: could a corporation maximize profit without customers? I wonder if we can find any examples of this type of behavior...
◧◩◪◨⬒
52. ALittl+aZ1[view] [source] [discussion] 2023-07-06 06:58:39
>>hgsgm+Bh1
Yes, that's part of the reason why alignment is such a huge problem.

You can imagine an AI that answers questions and helps you get things, within reason, without hurting anyone else, plus corrections for whatever problems you imagine with this. That's roughly an aligned AI. It will help you build a bomb as a fun experiment, but would stop you from hurting someone with the bomb.

replies(1): >>janals+fa2
◧◩◪◨
53. yldedl+Z12[view] [source] [discussion] 2023-07-06 07:24:26
>>Dennis+8v
Voyager is pretty cool, but it's not transferable to the real world at all. The automatic curriculum relies on lots of specific knowledge from people talking about how to get better at Minecraft. The skill library writes programs using the Mineflayer API, which provides primitives for all physics, entities, actions, state etc. A real-life analogue of that would be like solving robotics and perception real quick.
◧◩◪◨
54. ben_w+132[view] [source] [discussion] 2023-07-06 07:32:16
>>wickof+8c1
"It's one of those irregular verbs, isn't it? I'm good at improv and speaking on my feet, you finish each other's sentences, they're just autocomplete on steroids."

https://en.wikiquote.org/wiki/Yes,_Minister

55. isaacf+252[view] [source] 2023-07-06 07:52:49
>>ilaksh+(OP)
> Even if you just make GPT-4 say 33% smarter and 50 or 100 times faster (...) humans cannot possibly compete

I sincerely doubt that. GPT-4 and its ilk excel at the five-paragraph essay on topics that are so well understood by humans that books have been written about them. ChatGPT-4 is a very useful tool when writing text. But it is useful in the sense that a thesaurus and a spell checker are useful.

What ChatGPT-4 truly sucks at is understanding a large amount of text and synthesizing it. That token limit is really a problem if you want GPT to become a scientist or a military strategist. Strategy requires you to consume a huge amount of less-than-certain information and to synthesize it into a coherent strategy, preferably explainable in terms POTUS can understand. Science is the same thing. Play the PhD game that was just featured on the HN front page. It is a lot of false starts and a lot of reading, again things GPT just cannot do.

By the way, their text understanding is really a lot less than human. A nice example is 'word in context' puzzles. In this puzzle a target word is used in two different sentences, and the puzzle is to decide whether the word is used with the same meaning or not. ChatGPT-4 does better than 3.5, but it doesn't take a lot of effort to trick it. Especially if you ask a couple of questions in one prompt, it will easily trip up.
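
For anyone who hasn't seen one, a word-in-context item looks roughly like this (an illustrative example made up here, not taken from any benchmark):

    # Illustrative word-in-context item: same target word, two sentences,
    # and the question is whether it carries the same sense in both.
    item = {
        "word": "bank",
        "sentence1": "She sat on the bank of the river and watched the boats.",
        "sentence2": "He deposited his paycheck at the bank on Friday.",
        "same_sense": False,  # river bank vs. financial institution
    }

    prompt = ("Is the word '" + item["word"] + "' used with the same meaning in both "
              "sentences? Answer yes or no.\n"
              "1) " + item["sentence1"] + "\n2) " + item["sentence2"])
    print(prompt)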

◧◩◪◨⬒⬓
56. janals+l92[view] [source] [discussion] 2023-07-06 08:26:21
>>nights+Hw1
Yes, but a profit maximizer doesn’t need to eliminate all humans to become a big problem.
◧◩◪◨⬒⬓
57. janals+fa2[view] [source] [discussion] 2023-07-06 08:35:03
>>ALittl+aZ1
Apart from some obvious cases that everyone agrees with, alignment is not a big problem; it is an incoherent one. It can’t be “solved” any more than the problem of what the best ice cream flavor is can be solved.

Humanity doesn’t have unified interests or shared values on many things. We have different cultural memories and different boundaries. What to some is an expression of a fundamental right is to others an affront.

replies(1): >>goneho+SC2
◧◩◪◨⬒⬓
58. janals+ta2[view] [source] [discussion] 2023-07-06 08:37:15
>>reduce+VN1
The most likely alignment we will get is the alignment of money to power.
◧◩
59. janals+2b2[view] [source] [discussion] 2023-07-06 08:41:48
>>sagebi+S3
Call it insurance. It’s the R&D cost to try to make sure your models don’t do/say anything that will get you into trouble.
◧◩◪◨⬒⬓⬔⧯▣
60. gregw2+5e2[view] [source] [discussion] 2023-07-06 09:06:19
>>climat+nI1
US DOD fuel use being the level of Sweden doesn’t seem problematic to my envelope-math; it seems to reflect the size of the entities involved.

Iraq is a now-broken third-world country/economy in recovery, so not a great comparable to the US. Sweden is small but a good comparable culturally/development-wise. The US is 331 million people. It spends 3% of GDP on the military. 3% of 331 million is about 10 million. Sweden is 10 million people. U.S. military fuel use is in line with Sweden’s.

I could be off here (DoD != US military?), corrections welcome, but I wouldn’t even be shocked if a military entity uses 3-10x more fuel than a civilian average, and the above math puts us surprisingly close to 1x.
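
Spelling the envelope math out (numbers as given above; the only assumption is treating the 3% GDP share as a population equivalent, nothing more rigorous):

    us_population = 331_000_000      # people
    military_gdp_share = 0.03        # ~3% of US GDP goes to the military
    sweden_population = 10_000_000   # people

    # Crude proxy: scale the US population by the military's share of GDP and
    # compare that "population equivalent" to Sweden, as the comment above does.
    military_population_equivalent = us_population * military_gdp_share
    print(military_population_equivalent)                      # ~9.9 million
    print(military_population_equivalent / sweden_population)  # ~1x Sweden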

replies(1): >>climat+cS2
◧◩◪◨
61. lukesc+Ew2[view] [source] [discussion] 2023-07-06 11:43:26
>>jdasdf+aK
No, I understand what you're saying, I just think you're wrong. To be a little clearer: you're assuming a single near-omnipotent agent randomly selects an anti-goal and is capable of achieving it. If we instead create 100 near-omnipotent agents odds are that the majority will be smart enough to recognize that they have to cooperate to achieve any goals at all. Even if the majority have selected anti-goals, it's likely that the majority of the anti-goals will be at cross-purposes. You'll also have a paperclip minimizer, for example. Now, the minimizers are a little scary but these are thought experiments and the goals will not be so simple (nor do I think it would be obvious to anyone including the AIs which ones have selected which goals.) The AIs will have to be liars if they select anti-goals, and they will have to not only lie to us but lie to each other, which makes coordination very hard bordering on impossible.

In some ways this is a lot like Bitcoin, in that people think that with enough math and science expertise you can just reason your way out of social problems. And you can, to an extent, but not if you're fighting an organized social adversary that is collectively smarter than you. 7 billion humans is a superintelligence and it's a high bar to be smarter than that.

replies(1): >>goneho+jB2
◧◩◪◨⬒
62. c_cran+mz2[view] [source] [discussion] 2023-07-06 12:03:36
>>flagra+mD1
Currently, LLMs seem to prioritize their current goal, so if the goal is solving math puzzles or genetic problems, they would probably keep doing that too.
replies(1): >>flagra+yd4
◧◩◪◨⬒
63. c_cran+4A2[view] [source] [discussion] 2023-07-06 12:09:07
>>Footke+sB1
Why call AIs a life form? They aren't like cellular life.
replies(1): >>ilaksh+bB3
◧◩◪
64. c_cran+HA2[view] [source] [discussion] 2023-07-06 12:12:29
>>sdento+r01
The 18-year-old human alignment problem has been solved pretty well. Not perfectly, but enough to justify handing out such weapons.
◧◩◪◨⬒
65. goneho+jB2[view] [source] [discussion] 2023-07-06 12:15:53
>>lukesc+Ew2
It’s worth reading about the orthogonality thesis and the underlying arguments about it.

It’s not an anti-goal that’s intentionally set; it’s that complex goal-setting is hard and you may end up with something dumb that maximizes the reward unintentionally.

The issue is all of the AGIs will be unaligned in different ways because we don’t know how to align any of them. Also, the first to be able to improve itself in pursuit of its goal could take off at some threshold and then the others would not be relevant.

There’s a lot of thoughtful writing that exists on this topic and it’s really worth digging into the state of the art about it, your replies are thoughtful so it sounds like something you’d think about. I did the same thing a few years ago (around 2015) and found the arguments persuasive.

This is a decent overview: https://www.samharris.org/podcasts/making-sense-episodes/116...

replies(1): >>ben_w+Nn3
◧◩◪◨⬒⬓⬔
66. goneho+SC2[view] [source] [discussion] 2023-07-06 12:26:17
>>janals+fa2
At the limit sure there’s variance, but our shared selected history has a lot in common, something a non-human intelligence would not get for free: https://www.lesswrong.com/posts/4ARaTpNX62uaL86j6/the-hidden...

I’m also not a moral relativist; I don’t think all values are equivalent. But you don’t even need to go there - before that point, a lot of what humans want is not controversial, and the “obvious” cases are not so obvious or easy to classify.

◧◩◪◨⬒⬓⬔⧯▣▦
67. climat+cS2[view] [source] [discussion] 2023-07-06 13:48:08
>>gregw2+5e2
Math seems correct, but the US military also includes conglomerates and companies like Palantir and Anduril (the main reason it is described as an industrial complex is that there is no clear distinction between the corporations and the military, given how their activities are tied up with military spending and energy use).
◧◩◪◨⬒⬓
68. ben_w+Nn3[view] [source] [discussion] 2023-07-06 15:36:57
>>goneho+jB2
> the first to be able to improve itself in pursuit of its goal could take off at some threshold and then the others would not be relevant.

Thanks for reminding me that I need to properly write up why I don't think self-improvement is a huge issue.

(My thought won't fit into a comment, and I'll want to link to it later).

◧◩◪◨⬒⬓
69. ben_w+ao3[view] [source] [discussion] 2023-07-06 15:38:18
>>nights+Hw1
In fairness, corporations can still be fraudulent.
◧◩◪◨⬒⬓
70. ilaksh+bB3[view] [source] [discussion] 2023-07-06 16:28:28
>>c_cran+4A2
I think the assumption they were making was that rather than an LLM this was a type of AI that has animal-like characteristics. Which sounds fanciful but at least at a functional level you could get some main aspects just by removing guardrails from a large multimodal model and instructing it to work on its own goals, self preservation, etc. And researchers are working hard to create more lifelike systems that wouldn't necessarily be very similar to LLMs.
replies(1): >>c_cran+xF3
◧◩◪◨⬒⬓⬔
71. c_cran+xF3[view] [source] [discussion] 2023-07-06 16:42:22
>>ilaksh+bB3
The animal-like systems might be interesting to observe, but it doesn't sound like they would be useful for doing much work. I am not sure where the reliance on them would come in.
◧◩◪◨⬒⬓
72. flagra+yd4[view] [source] [discussion] 2023-07-06 18:47:06
>>c_cran+mz2
I'd love to be able to see more about how the main LLMs are really trained and limited with regard to their goals and scoring algorithms.

It seems reasonable that they wouldn't deviate, but that depends on how specifically and wholly the original goals were defined. We'd basically be attempting to outwit the LLMs; I'm not sure if that's realistic or not.

◧◩◪◨⬒⬓⬔⧯▣
73. flagra+qe4[view] [source] [discussion] 2023-07-06 18:50:42
>>climat+nI1
Oh, there's no denying the US military has ballooned to the size of a small to medium-sized country. That alone is a huge issue for me personally - I don't agree with our country having any form of standing military, but that precedent was abandoned 80 years ago.

I'm not sure how to properly compare the military of one country with the entirety of a country ~1/30th the size. On the surface it doesn't seem crazy for those to have similar budgets or resource use.

replies(1): >>climat+Du4
◧◩◪
74. skybri+nf4[view] [source] [discussion] 2023-07-06 18:54:46
>>flagra+MD1
The prioritization of individual freedoms above most other considerations (along with the assumption that it will work out better in the end) is what libertarianism is all about. Maybe you’re a libertarian without realizing it? :)
◧◩◪◨⬒⬓⬔⧯▣▦
75. climat+Du4[view] [source] [discussion] 2023-07-06 19:59:15
>>flagra+qe4
The comparison is in terms of energy use since at the end of the day that is the fundamental currency of all techno-industrial activity. The point is that the global machinery that is currently guiding civilizational progress is fundamentally anti-life. It constantly grows and subsumes whatever energy resources are accessible without any regard for negative externalities like pollution and environmental degradation. This is why I don't take AI alarmism seriously: the problem is not the AI; the problem is the organization of techno-industrial civilization and its focus on exponential growth.

It's only going to keep getting worse, and the AI alarmism is not doing anything to address the actual root causes of the crisis. If anything, AI development might actually make things more sustainable by better allocating and managing natural resources, so retarding AI progress is actually making things worse in the long run.

replies(1): >>flagra+wo5
◧◩◪◨⬒⬓⬔⧯▣▦▧
76. flagra+wo5[view] [source] [discussion] 2023-07-07 00:34:14
>>climat+Du4
I think those really are separate concerns that should both be given more attention.

There's a strong correlation between GDP growth and oil use, that's a huge problem and one that likely can't be solved without fundamentally revisiting modern economic models.

AI poses its own concerns though, everything from the alignment problem to the challenge of defining what consciousness even is. AI development won't inherently make allocating natural resources easier - with the wrong incentive model and a lack of safety rails, AI could find its own solution to preserving natural resources that may not work out so well for us humans.

replies(1): >>climat+hx5
◧◩◪◨⬒⬓⬔⧯▣▦▧▨
77. climat+hx5[view] [source] [discussion] 2023-07-07 01:43:36
>>flagra+wo5
The current model is already destructive and most of the market is managed by artificial agents. Schwab will give you a roboadvisor to manage your retirement account so AI is already managing large chunks of the financial markets. Letting AI manage not just the financial aspects but things like farmland is an obvious extension of the same principle and since AIs can notice more patterns it's going to become basically a necessity because global warming is going to make large parts of existing farmlands unmanageable. Floods and droughts are becoming more common and humans are very bad at figuring out the weather so there will be an AI agent monitoring weather patterns and allocating seeds to various plots of land to maximize yields.

Bill Gates has bought up a bunch of farmland and I am certain he will use AI to manage them because manual allocation will be too inefficient[1].

1: https://www.popularmechanics.com/science/environment/a425435...

◧◩◪◨
78. chaxor+GB9[view] [source] [discussion] 2023-07-08 05:24:34
>>jgalt2+r91
What evidence do you have that allows you to make the assertion that they 'cannot drive a car'?