zlacker

[parent] [thread] 237 comments
1. srslac+(OP)[view] [source] 2023-05-16 12:00:15
Imagine thinking that regression-based function approximators are capable of anything other than fitting the data you give them. Then imagine willfully hyping them up and scaring people who don't understand: because they can predict words, you take advantage of the human tendency to anthropomorphize, and suddenly it "follows" that they're capable of generalized and adaptable intelligence.

Shame on all of the people involved in this: the people in these companies, the journalists who shovel shit (hope they get replaced real soon), researchers who should know better, and dementia-ridden legislators.

So utterly predictable and slimy. All of those who are so gravely concerned about "alignment" in this context, give yourselves a pat on the back for hyping up science fiction stories and enabling regulatory capture.

replies(24): >>Yajiro+g1 >>shaneb+12 >>ilrwbw+k3 >>cookie+A3 >>tgv+B5 >>Chicag+E5 >>lm2846+I5 >>kypro+o6 >>logica+77 >>sharem+Lb >>api+Yg >>chpatr+Uo >>Enrage+5r >>chaxor+zt >>ramraj+Zv >>anonym+9w >>adamsm+Mz >>bnralt+5G >>dist-e+6H >>circui+WK >>kajumi+HV >>Culona+j81 >>precom+EB1 >>johnal+b62
2. Yajiro+g1[view] [source] 2023-05-16 12:08:33
>>srslac+(OP)
Who is to say that brains aren't just regression based function approximators?
replies(4): >>shaneb+d2 >>gumbal+J3 >>lm2846+ch >>pelagi+Wr
3. shaneb+12[view] [source] 2023-05-16 12:12:16
>>srslac+(OP)
Finally, a relatable perspective.

AI/ML licensing builds Power and establishes moat. This will not lead to better software.

Frankly, Google and Microsoft are acting new. My understanding of both companies has been shattered by recent changes.

replies(1): >>isanja+I2
◧◩
4. shaneb+d2[view] [source] [discussion] 2023-05-16 12:13:37
>>Yajiro+g1
Humanity isn't stateless.
replies(1): >>chpatr+wr
◧◩
5. isanja+I2[view] [source] [discussion] 2023-05-16 12:15:56
>>shaneb+12
Did you not think they only care about money / profits?
replies(1): >>shaneb+X8
6. ilrwbw+k3[view] [source] 2023-05-16 12:19:24
>>srslac+(OP)
Sam Altman is a great case of failing upwards. And this is the problem. You don't get to build a moral backbone if you fake your brilliance.
replies(2): >>gumbal+h4 >>cguess+PR1
7. cookie+A3[view] [source] 2023-05-16 12:20:52
>>srslac+(OP)
The real problem here is that the number of crimes you can commit with LLMs is much higher than the number of good things you can do with them. It's pretty debatable whether, if society were fair or reasonable with decent laws in place, LLM training corpora should even be legal. But here we are, waiting for more billionaires to cash in.
replies(2): >>ur-wha+J6 >>esteba+67
◧◩
8. gumbal+J3[view] [source] [discussion] 2023-05-16 12:21:29
>>Yajiro+g1
My laptop emits sound as I do but it doesn't mean it can sing or talk. It's software that does what it was programmed to, and so does AI. It may mimic the human brain but that's about it.
replies(1): >>thesup+Fc
◧◩
9. gumbal+h4[view] [source] [discussion] 2023-05-16 12:25:01
>>ilrwbw+k3
Gives me the impression of someone who knows they are a fraud but they still do what they do hoping no one will catch on or that if the lie is big enough people will believe it. Taking such an incredible piece of tech and turning it into a fear mongering sci fi tool for milking money off of gullible people is creepy to say the least.
replies(1): >>ilrwbw+CE1
10. tgv+B5[view] [source] 2023-05-16 12:32:32
>>srslac+(OP)
I'm squarely in the "stochastic parrot" camp (I know it's not a simple Markov model, but still, ChatGPT doesn't think), and it's clearly possible to interpret this as grifting, but your argumentation is too simple.

You're leaving out the essentials. These models do more than fit the data they're given. They can output it in a variety of ways, and, through their approximation, can synthesize data as well. They can output things that weren't in the original data, tailored to a specific request, in the tiniest fraction of the time it would take a normal person to look up and understand that information.

Your argument is almost like saying "give me your RSA keys, because it's just two prime numbers, and I know how to list them."

replies(2): >>srslac+Tc >>adamsm+TA
11. Chicag+E5[view] [source] 2023-05-16 12:32:56
>>srslac+(OP)
Why is it so hard to hear this perspective? Like, genuinely curious. This is the first I hear of someone cogently putting this thought out there, but it seems rather painfully obvious -- even if perhaps incorrect, but certainly a perspective that is very easy to comprehend and one that merits a lot of discussion. Why is it almost nonexistent? I remember even in the heyday of crypto fever you'd still have A LOT of folks providing counterarguments/differing perspectives, but with AI these seem to be extremely muted.
replies(5): >>bombca+C6 >>srslac+va >>iliane+ig >>dmreed+At >>adamsm+2B
12. lm2846+I5[view] [source] 2023-05-16 12:33:15
>>srslac+(OP)
100% this, I don't get how even on this website people are so clueless.

Give them a semi human sounding puppet and they think skynet is coming tomorrow.

If we learned anything from the past few months, it's how gullible people are; wishful thinking is a hell of a drug

replies(4): >>digbyb+Z6 >>dmreed+0t >>nologi+UX >>bart_s+y51
13. kypro+o6[view] [source] 2023-05-16 12:36:29
>>srslac+(OP)
Even if you're correct about the capabilities of LLMs (I don't think you are), there are still obvious dangers here.

I wrote a comment recently trying to explain how, even if you believe all LLMs can (and ever will) do is regurgitate their training data, you should still be concerned.

For example, imagine in 5 years we have GPT-7, and you ask GPT-7 to solve humanity's great problems.

From its training data GPT-7 might notice that humans believe overpopulation is a serious issue facing humanity.

But it's "aligned", so it might understand from its training data that killing people is wrong; instead, it uses its training data to seek other ways to reduce human populations without extermination.

Its training data included information about how gene drives were used by humans to reduce mosquito populations by causing infertility. Many humans have also suggested (and tried) using birth control to reduce human populations via infertility, so the ethical implications of using gene drives to cause infertility are debatable based on the data the LLM was trained on.

Using this information it decides to hack into a biolab using hacking techniques it learnt from its training data and use its biochemistry knowledge to make slight alterations to one of the active research projects at the lab. This causes the lab to unknowingly produce a highly contagious bioweapon which causes infertility.

---

The point here is that even if we just assume LLMs are only capable of producing output which approximates stuff it learnt from its training data, an advanced LLM can still be dangerous.

And in this example, I'm assuming no malicious actors and an aligned AI. If you're willing to assume there might be an actor out there would seek to use LLMs for malicious reasons or the AI is not well aligned then the risk becomes even clearer.

replies(8): >>davidg+37 >>touris+w7 >>supriy+S8 >>Random+lq >>reveli+sq >>throwa+Ox >>wkat42+8C >>salmon+YN
◧◩
14. bombca+C6[view] [source] [discussion] 2023-05-16 12:37:31
>>Chicag+E5
Crypto had more direct ways to scam people so others would speak against it.

Those nonplussed by this wave of AI are just yawning.

◧◩
15. ur-wha+J6[view] [source] [discussion] 2023-05-16 12:38:29
>>cookie+A3
> The real problem here is that the number of crimes you can commit with LLMs is much higher then the number of good things you can do with it

Yeah? Did you get a crystal ball for Christmas to be able to predict what can and can't be done with a new technology?

replies(1): >>cookie+6p2
◧◩
16. digbyb+Z6[view] [source] [discussion] 2023-05-16 12:40:26
>>lm2846+I5
I’m open minded about this, I see people more knowledgeable than me on both sides of the argument. Can someone explain how Geoffrey Hinton can be considered to be clueless?
replies(4): >>Random+m9 >>srslac+r9 >>lm2846+ka >>Workac+jt
◧◩
17. davidg+37[view] [source] [discussion] 2023-05-16 12:40:42
>>kypro+o6
Sci-fi is a hell of a drug
replies(1): >>orbita+q7
◧◩
18. esteba+67[view] [source] [discussion] 2023-05-16 12:40:53
>>cookie+A3
It is literally a language calculator. It is useful for a lot more things than crimes.
19. logica+77[view] [source] 2023-05-16 12:40:56
>>srslac+(OP)
>Imagine thinking that regression based function approximators are capable of anything other than fitting the data you give it.

Are you aware that you are an 80 billion neuron biological neural network?

replies(1): >>lm2846+Ug
◧◩◪
20. orbita+q7[view] [source] [discussion] 2023-05-16 12:42:17
>>davidg+37
Shout out to his family.
◧◩
21. touris+w7[view] [source] [discussion] 2023-05-16 12:42:48
>>kypro+o6
You seem to be implying sentience from this "ai".
◧◩
22. supriy+S8[view] [source] [discussion] 2023-05-16 12:49:47
>>kypro+o6
People have been able to commit malicious acts by themselves historically, no AI needed.

In other words, LLMs are only as dangerous as the humans operating them, and therefore the solution is to stop crime instead of regulating AI, which only seeks to make OpenAI a monopoly.

replies(2): >>shaneb+N9 >>kypro+X9
◧◩◪
23. shaneb+X8[view] [source] [discussion] 2023-05-16 12:50:35
>>isanja+I2
I expected them to recognize and assess risk.
◧◩◪
24. Random+m9[view] [source] [discussion] 2023-05-16 12:53:08
>>digbyb+Z6
Not clueless. However, is he an expert in socio-political-economic issues arising from AI or in non-existent AGI? Technical insight into AI might not translate into either.
replies(1): >>etiam+w21
◧◩◪
25. srslac+r9[view] [source] [discussion] 2023-05-16 12:53:48
>>digbyb+Z6
Hinton, in his own words, asked PaLM to explain a dad joke he had supposedly come up with, convinced that his clever and advanced joke would take a lifetime of experience to understand. PaLM perfectly articulated why the joke was funny, so he quit Google and is, conveniently, still going to continue working on AI, despite the "risks." Not exactly the best example.
replies(1): >>digbyb+8b
◧◩◪
26. shaneb+N9[view] [source] [discussion] 2023-05-16 12:55:18
>>supriy+S8
Regulation is the only tool for minimizing crime. Other mechanisms, such as police, respond to crime after the fact.
replies(1): >>helloj+P51
◧◩◪
27. kypro+X9[view] [source] [discussion] 2023-05-16 12:56:35
>>supriy+S8
This isn't a trick question, genuinely curious – do you agree that guns are not the problem and should not be regulated – that is, while they can be used for harm, the right approach to gun violence is to police the crime?

I think the objection to this would be that currently not everyone in the world is an expert in biochemistry or at hacking into computer systems. Even if you're correct in principle, perhaps the risks of the technology we're developing here are too high? We typically regulate technologies which can easily be used to cause harm.

replies(2): >>supriy+ti >>tome+tV
◧◩◪
28. lm2846+ka[view] [source] [discussion] 2023-05-16 12:58:23
>>digbyb+Z6
He doesn't talk about skynet afaik

> Some of the dangers of AI chatbots were “quite scary”, he told the BBC, warning they could become more intelligent than humans and could be exploited by “bad actors”. “It’s able to produce lots of text automatically so you can get lots of very effective spambots. It will allow authoritarian leaders to manipulate their electorates, things like that.”

You can do bad things with it but people who believe we're on the brink of singularity, that we're all going to lose our jobs to chatgpt and that world destruction is coming are on hard drugs.

replies(4): >>digbyb+ub >>cma+lu >>HDThor+XF >>whimsi+eG1
◧◩
29. srslac+va[view] [source] [discussion] 2023-05-16 12:58:56
>>Chicag+E5
I'm not against machine learning, I'm against regulatory capture of it. It's an amazing technology. It still doesn't change the fact that they're just function approximators that are trained to minimize loss on a dataset.
replies(1): >>luxcem+XG
◧◩◪◨
30. digbyb+8b[view] [source] [discussion] 2023-05-16 13:02:26
>>srslac+r9
Hinton said that the ability to explain a joke was among the first things that made him reassess their capabilities. Not the only thing. You make it sound as though Hinton is obviously clueless, yet there are few people with deeper knowledge and more experience working with neural networks. People told him he was crazy for thinking neural networks could do anything useful; now it seems people are calling him crazy for the reverse. I'm genuinely confused about this.
replies(2): >>srslac+bo >>reveli+Wo
◧◩◪◨
31. digbyb+ub[view] [source] [discussion] 2023-05-16 13:05:22
>>lm2846+ka
I’ll have to dig it up but the last interview I saw with him, he was focused more on existential risk from the potential for super intelligence, not just misuse.
replies(1): >>tomrod+jD
32. sharem+Lb[view] [source] 2023-05-16 13:06:35
>>srslac+(OP)
While I think it needs goals to be some kind of AGI, it certainly can plan and convince people of things. Also, it seems like the goal already exists: maximize shareholder value. In fact, if AI can beat someone at chess, figure out protein folding, and figure out fusion plasma design, why is it a stretch to think it could be good at project management? To me, a scenario where it leads to an immediate reduction in the human population by some moderately large % would still be a bad outcome. So even if you just think of it as an index of most human knowledge, it does need some kind of mechanism to manage who has access to what. I don't want everyone to know how to make a bomb.

Is a license the best way forward? I don't know, but I do feel like this is more than a math formula.

replies(1): >>iliane+yh
◧◩◪
33. thesup+Fc[view] [source] [discussion] 2023-05-16 13:11:11
>>gumbal+J3
>> It’s software that does what it was programmed to, and so does ai.

That's a big part of the issue with machine learning models--they are undiscoverable. You build a model with a bunch of layers and hyperparameters, but no one really understands how it works or by extension how to "fix bugs".

If we say it "does what it was programmed to", what was it programmed to do? Here is the data that was used to train it, but how will it respond to a given input? Who knows?

That does not mean that they need to be heavily regulated. On the contrary, they need to be opened up and thoroughly "explored" before we can "entrust" them to given functions.

replies(2): >>gumbal+8A >>grumpl+812
◧◩
34. srslac+Tc[view] [source] [discussion] 2023-05-16 13:12:26
>>tgv+B5
Sure, you're right, but the simple explanation of regression is better for helping people understand. I mostly agree with what you're saying, but it does nothing to support the fantasy scenario proposed by all of those who are so worried. At that point, it's just "it can be better than (some) humans at language and it can have things stacked on top to synthesize what it outputs."

Do we want to go down the road of making white-collar jobs the legislatively required elevator attendants, instead of just banning AI in general via an executive agency?

That sounds like a better solution to me, actually. OpenAI's lobbyists would never go for that though. Can't have a moat that way.

◧◩
35. iliane+ig[view] [source] [discussion] 2023-05-16 13:29:16
>>Chicag+E5
> Why is it so hard to hear this perspective? Like, genuinely curious.

Because people have different definitions of what intelligence is. Recreating the human brain in a computer would definitely be neat and interesting but you don't need that nor AGI to be revolutionary.

LLMs, as perfect Chinese Rooms, lack a mind or human intelligence but demonstrate increasingly sophisticated behavior. If they can perform tasks better than humans, does their lack of "understanding" and "thinking" matter?

The goal is to create a different form of intelligence, superior in ways that benefit us. Planes (or rockets!) don't "fly" like birds do, but for our human needs they are effectively much better at flying than birds ever could be.

replies(2): >>srslac+Bj >>api+1l
◧◩
36. lm2846+Ug[view] [source] [discussion] 2023-05-16 13:32:27
>>logica+77
And this is why I always hate how computer parts are named with biological terms... a neural network's neuron doesn't share much with a human brain's neuron

Just like a CPU isn't "like your brain" and an HDD isn't "like your memories"

Absolutely nothing says our current approach is the right one to mimic a human brain

replies(4): >>logica+Qk >>iliane+Hl >>alpaca+QB >>wetpaw+5l1
37. api+Yg[view] [source] 2023-05-16 13:32:45
>>srslac+(OP)
The whole story of OpenAI is really slimy too. It was created as a non-profit, then it was handed somehow to Sam who took it closed and for-profit (using AI fear mongering as an excuse) and is now seeking to leverage government to lock it into a position of market dominance.

The whole saga makes Altman look really, really terrible.

If AI really is this dangerous then we definitely don't need people like this in control of it.

replies(2): >>nmfish+Uq >>wellth+lt1
◧◩
38. lm2846+ch[view] [source] [discussion] 2023-05-16 13:34:08
>>Yajiro+g1
The problem is that you have to bring proof

Who's to say we're not in a simulation? Who's to say god doesn't exist?

replies(1): >>dmreed+tq
◧◩
39. iliane+yh[view] [source] [discussion] 2023-05-16 13:36:16
>>sharem+Lb
> I don't want everyone to know how to make a bomb.

This information is not created inside the LLMs, it's part of their training data. If someone is motivated enough, I'm sure they'd need no more than a few minutes of googling.

> I do feel like this is more than a math formula

The sum is greater than the parts! It can just be a math formula and still produce amazing results. After all, our brains are just a neat arrangement of atoms :)

◧◩◪◨
40. supriy+ti[view] [source] [discussion] 2023-05-16 13:40:48
>>kypro+X9
AI systems provide many benefits to society, such as image recognition, anomaly detection, and educational and programming uses of LLMs, to name a few.

Guns have primarily one harmful use, which is to kill or injure someone. While that act of killing may be justified when the person violates societal values in some way, making regular citizens the decision makers in whether a certain behavior is allowed or disallowed, and able to immediately make a judgment and execute upon it, leads to a sort of low-trust, vigilante environment; which is why the same argument I made above doesn't apply to guns.

replies(2): >>logicc+Vs >>menset+WV
◧◩◪
41. srslac+Bj[view] [source] [discussion] 2023-05-16 13:44:57
>>iliane+ig
That changes nothing about the hyping of science fiction "risk" of those intelligences "escaping the box" and killing us all.

The argument for regulation in that case would be because of the socio-economic risk of taking people's jobs, essentially.

So, again: pure regulatory capture.

replies(1): >>iliane+Ao
◧◩◪
42. logica+Qk[view] [source] [discussion] 2023-05-16 13:50:17
>>lm2846+Ug
>a neural network's neuron doesn't share much with a human brain's neuron

What are the key differences?

replies(1): >>wetpaw+sl1
◧◩◪
43. api+1l[view] [source] [discussion] 2023-05-16 13:51:12
>>iliane+ig
I have a chain saw that can cut better than me, a car that can go faster, a computer that can do math better, etc.

We've been doing this forever with everything. Building tools is what makes us unique. Why is building what amounts to a calculator/spreadsheet/CAD program for language somehow a Rubicon that cannot be crossed? Did people freak out this much about computers replacing humans when they were shown to be good at math?

replies(2): >>iliane+9r >>adamsm+IA
◧◩◪
44. iliane+Hl[view] [source] [discussion] 2023-05-16 13:55:00
>>lm2846+Ug
The human brain works around a lot of limiting biological functions. The necessary architecture to fully mimic a human brain on a computer might not look anything like the actual human brain.

That said, there are 8B+ of us and counting, so unless there is magic involved, I don't see why we couldn't do a "1:1" replica of it (maybe far) in the future.

◧◩◪◨⬒
45. srslac+bo[view] [source] [discussion] 2023-05-16 14:07:54
>>digbyb+8b
I didn't say he was clueless, it's just not in good faith to suggest there's probable existential risk on a media tour where you're mined for quotes, and then continue to work on it.
◧◩◪◨
46. iliane+Ao[view] [source] [discussion] 2023-05-16 14:10:13
>>srslac+Bj
There's no denying this is regulatory capture by OpenAI to secure their (gigantic) bag and that the "AI will kill us all" meme is not based in reality and plays on the fact that the majority of people do not understand LLMs.

I was simply explaining why I believe your perspective is not represented in the discussions in the media, etc. If these models were not getting incredibly good at mimicking intelligence, it would not be possible to play on people's fears of it.

47. chpatr+Uo[view] [source] 2023-05-16 14:11:57
>>srslac+(OP)
Imagine thinking that NAND gates are capable of anything other than basic logic.
replies(1): >>Eisens+yu
◧◩◪◨⬒
48. reveli+Wo[view] [source] [discussion] 2023-05-16 14:11:59
>>digbyb+8b
Not clueless, but unfortunately engaging in motivated reasoning.

Google spent years doing nothing much with its AI because its employees (like Hinton) got themselves locked in an elitist hard-left purity spiral in which they convinced each other that if plebby ordinary non-Googlers could use AI they would do terrible things, like draw pictures of non-diverse people. That's why they never launched Imagen and left the whole generative art space to OpenAI, Stability and Midjourney.

Now the tech finally leaked out of their ivory tower and AI progress is no longer where he was at, but Hinton finds himself at retirement age and no longer feeling much like hard-core product development. What to do? Lucky lucky, he lives in a world where the legacy media laps up any academic with a doomsday story. So he quits and starts enjoying the life of a celebrity public intellectual, being praised as a man of superior foresight and care for the world to those awful hoi polloi shipping products and irresponsibly not voting for Biden (see the last sentence of his Wired interview). If nothing happens and the boy cried wolf then nobody will mind, it'll all be forgotten. If there's any way what happens can be twisted into interpreting reality as AI being bad though, he's suddenly the man of the hour with Presidents and Prime Ministers queuing up to ask him what to do.

It's all really quite pathetic. Academic credentials are worth nothing with respect to such claims and Hinton hasn't yet managed to articulate how, exactly, AI doom is supposed to happen. But our society doesn't penalize wrongness when it comes from such types, not even a tiny bit, so it's a cost-free move for him.

replies(1): >>digbyb+bx
◧◩
49. Random+lq[view] [source] [discussion] 2023-05-16 14:17:55
>>kypro+o6
You have a very strong hypothesis about the AI system just being able to "think up" such a bioweapon (and also about the researchers being clueless in implementation). I see doomsday scenarios often assuming strong advances in the sciences by the AI, etc. - there is little evidence for that kind of "thinkism".
replies(2): >>HDThor+HG >>someth+RS
◧◩
50. reveli+sq[view] [source] [discussion] 2023-05-16 14:18:22
>>kypro+o6
> so instead it uses its training data to seek other ways to reduce human populations without extermination.

This is a real problem, but it's already a problem with our society, not AI. Misaligned public intellectuals routinely try to reduce the human population and we don't lift a finger. Focus where the danger actually is - us!

From Scott Alexander's latest post:

Paul Ehrlich is an environmentalist leader best known for his 1968 book The Population Bomb. He helped develop ideas like sustainability, biodiversity, and ecological footprints. But he’s best known for prophecies of doom which have not come true - for example, that collapsing ecosystems would cause hundreds of millions of deaths in the 1970s, or make England “cease to exist” by the year 2000.

Population Bomb calls for a multi-pronged solution to a coming overpopulation crisis. One prong was coercive mass sterilization. Ehrlich particularly recommended this for India, a country at the forefront of rising populations.

In 1975, India had a worse-than-usual economic crisis and declared martial law. They asked the World Bank for help. The World Bank, led by Robert McNamara, made support conditional on an increase in sterilizations. India complied [...] In the end about eight million people were sterilized over the course of two years.

Luckily for Ehrlich, no one cares. He remains a professor emeritus at Stanford, and president of Stanford’s Center for Conservation Biology. He has won practically every environmental award imaginable, including from the Sierra Club, the World Wildlife Fund, and the United Nations (all > 10 years after the Indian sterilization campaign he endorsed). He won the MacArthur “Genius” Prize ($800,000) in 1990, the Crafoord Prize ($700,000, presented by the King of Sweden) that same year, and was made a Fellow of the Royal Society in 2012. He was recently interviewed on 60 Minutes about the importance of sustainability; the mass sterilization campaign never came up. He is about as honored and beloved as it’s possible for a public intellectual to get.

replies(1): >>johnti+dF
◧◩◪
51. dmreed+tq[view] [source] [discussion] 2023-05-16 14:18:24
>>lm2846+ch
You're right, of course, but that also makes your out-of-hand dismissals based on your own philosophical premises equally invalid.

Until a model of human sentience and awareness is established (note: one of the oldest problems out there alongside the movements of the stars. This is an ancient debate, still open-ended, and nothing anyone is saying in these threads is new), philosophy is all we have and ideas are debated on their merits within that space.

◧◩
52. nmfish+Uq[view] [source] [discussion] 2023-05-16 14:20:30
>>api+Yg
Open AI has been pretty dishonest since the pivot to for-profit, but this is a new low.

Incredibly scummy behaviour that will not land well with a lot of people in the AI community. I wonder if this is what prompted a lot of people to leave for Anthropic.

replies(1): >>comp_t+yH2
53. Enrage+5r[view] [source] 2023-05-16 14:21:02
>>srslac+(OP)
I'm not sure that the regulation being proposed by Altman is good, but you're vastly misstating the actual purported threat posed by AI. Altman and the senators quoted in the article aren't expressing fear that AI is becoming sentient, they are expressing the completely valid concern that AI sounds an awful lot like not-AI nowadays and will absolutely be used for nefarious purposes like spreading misinformation and committing identity crimes. The pace of development is happening way too rapidly for any meaningful conversations around these dangers to be had. Within a few years we'll have AI-generated videos that are indistinguishable from real ones, for instance, and it will be impossible for the average person to discern if they're watching something real or not.
◧◩◪◨
54. iliane+9r[view] [source] [discussion] 2023-05-16 14:21:24
>>api+1l
> Why is building what amounts to a calculator/spreadsheet/CAD program for language somehow a Rubicon that cannot be crossed?

We've already crossed it and I believe we should go full steam ahead, tech is cool and we should be doing cool things.

> Did people freak out this much about computers replacing humans when they were shown to be good at math?

Too young but I'm sure they did freak out a little! Computers have changed the world and people have internalized computers as being much better/faster at math but exhibiting creativity, language proficiency and thinking is not something people thought computers were supposed to do.

◧◩◪
55. chpatr+wr[view] [source] [discussion] 2023-05-16 14:23:24
>>shaneb+d2
Neither is text generation as you continue generating text.
replies(1): >>shaneb+Ru
◧◩
56. pelagi+Wr[view] [source] [discussion] 2023-05-16 14:25:18
>>Yajiro+g1
A Boltzmann brain just materialized over my house.
replies(1): >>dpflan+vF
◧◩◪◨⬒
57. logicc+Vs[view] [source] [discussion] 2023-05-16 14:30:11
>>supriy+ti
>whether a certain behavior is allowed or disallowed and being able to immediately make a judgment and execute upon it leads to a sort of low-trust, vigilante environment

Have you any empirical evidence at all on this? From what I've seen the open carry states in the US are generally higher trust environments (as was the US in past when more people carried). People feel safer when they know somebody can't just assault, rob or rape them without them being able to do anything to defend themselves. Is the Tenderloin a high trust environment?

◧◩
58. dmreed+0t[view] [source] [discussion] 2023-05-16 14:30:30
>>lm2846+I5
I don't think anyone reasonable believes LLMs are right now skynet, nor that they will be tomorrow.

What I feel has changed, and what drives a lot of the fear and anxiety you see, is a sudden perception of possibility, of accessibility.

A lot of us (read: people) are implicit dualists, even if we say otherwise. It seems to be a sticky bias in the human mind (see: the vanishing problem of AI). Indeed, you can see a whole lot of dualism in this thread!

And even if you don't believe that LLMs themselves are "intelligent" (by whatever metric you define that to be...), you can still experience an exposing and unseating of some of the foundations of that dualism.

LLMs may not be a destination, but their unprecedented capabilities open up the potential for a road to something much more humanlike in ways that perhaps did not feel possible before, or at least not possible any time soon.

They are powerful enough to change the priors of one's internal understanding of what can be done and how quickly. Which is an uncomfortable process for those of us experiencing it.

replies(1): >>whimsi+3G1
◧◩◪
59. Workac+jt[view] [source] [discussion] 2023-05-16 14:32:06
>>digbyb+Z6
Given that AI's skill with programming showed up about 10 years sooner than anyone expected, I have seen a lot of cope in tech circles.

No one yet knows how this is going to go; the coping might turn into "See! I knew all along!" if progress fizzles out. But right now the threat is very real and we're seeing the full spectrum of "humans under threat" behavior. Very similar to the early pandemic, when you could find smart people with any take you wanted.

60. chaxor+zt[view] [source] 2023-05-16 14:33:08
>>srslac+(OP)
What do you think about the papers showing mathematical proofs that GNNs (i.e. GATs/transformers) are dynamic programmers and therefore perform algorithmic reasoning?

The fact that these systems can extrapolate well beyond their training data by learning algorithms is quite different from what has come before, and anyone stating that they "simply" predict the next token is severely shortsighted. Things don't have to be 'brain-like' to be useful, or to have capabilities of reasoning, but we have evidence that these systems have aligned well with reasoning tasks, perform well at causal reasoning, and we also have mathematical proofs that show how.

So I don't understand your sentiment.

replies(6): >>agento+Mu >>uh_uh+qw >>rdedev+lx >>pdonis+ED >>felipe+ZS >>joaogu+D91
◧◩
61. dmreed+At[view] [source] [discussion] 2023-05-16 14:33:08
>>Chicag+E5
Because it reads as relatively naive, and as a pretty old horse in the debate over sentience.

I'm all for villainizing the figureheads of the current generation of this movement. The politics of this sea-change are fascinating and worthy of discussion.

But out-of-hand dismissal of what has been accomplished smacks more to me of lack of awareness of the history of the study of the brain, cognition, language, and computers, than it does of a sound debate position.

◧◩◪◨
62. cma+lu[view] [source] [discussion] 2023-05-16 14:37:26
>>lm2846+ka
> You can do bad things with it but people who believe we're on the brink of singularity, that we're all going to lose our jobs to chatgpt and that world destruction is coming are on hard drugs.

Geoff Hinton, Stuart Russell, Jürgen Schmidhuber and Demis Hassabis all talk about something singularity-like as fairly near term, and all have concerns with ruin, though not all think it is the most likely outcome.

That's the backprop guy, top AI textbook guy, co-inventor of LSTMs (only thing that worked well for sequences before transformers)/highwaynets-resnets/arguably GANs, and the founder of DeepMind.

Schmidhuber (for context, he was talking near term, next few decades):

> All attempts at making sure there will be only provably friendly AIs seem doomed. Once somebody posts the recipe for practically feasible self-improving Goedel machines or AIs in form of code into which one can plug arbitrary utility functions, many users will equip such AIs with many different goals, often at least partially conflicting with those of humans. The laws of physics and the availability of physical resources will eventually determine which utility functions will help their AIs more than others to multiply and become dominant in competition with AIs driven by different utility functions. Which values are "good"? The survivors will define this in hindsight, since only survivors promote their values.

Hassabis:

> We are approaching an absolutely critical moment in human history. That might sound a bit grand, but I really don't think that is overstating where we are. I think it could be an incredible moment, but it's also a risky moment in human history. My advice would be I think we should not "move fast and break things." [...] Depending on how powerful the technology is, you know it may not be possible to fix that afterwards.

Hinton:

> Well, here’s a subgoal that almost always helps in biology: get more energy. So the first thing that could happen is these robots are going to say, ‘Let’s get more power. Let’s reroute all the electricity to my chips.’ Another great subgoal would be to make more copies of yourself. Does that sound good?

Russell:

“Intelligence really means the power to shape the world in your interests, and if you create systems that are more intelligent than humans either individually or collectively then you’re creating entities that are more powerful than us,” said Russell at the lecture organized by the CITRIS Research Exchange and Berkeley AI Research Lab. “How do we retain power over entities more powerful than us, forever?”

“If we pursue [our current approach], then we will eventually lose control over the machines. But, we can take a different route that actually leads to AI systems that are beneficial to humans,” said Russell. “We could, in fact, have a better civilization.”

replies(2): >>tomrod+WD >>tome+vO
◧◩
63. Eisens+yu[view] [source] [discussion] 2023-05-16 14:38:21
>>chpatr+Uo
1. Explain why it is not possible for an incredibly large number of properly constructed NAND gates to think

2. Explain why it is possible for a large number of properly constructed neurons to think.

replies(1): >>HDThor+hH
◧◩
64. agento+Mu[view] [source] [discussion] 2023-05-16 14:39:23
>>chaxor+zt
Give me a break. Very interesting theoretical work and all, but show me where it's actually being used to do anything of value, beyond publication fodder. You could also say MLPs are proved to be universal approximators, and can therefore model any function, including the one that maps sensory inputs to cognition. But the disconnect between this theory and reality is so great that it's a moot point. No one uses MLPs this way for a reason. No one uses GATs in systems that people are discussing right now either. GATs rarely even beat GCNs by any significant margin in graph benchmarks.
replies(1): >>chaxor+Tz
◧◩◪◨
65. shaneb+Ru[view] [source] [discussion] 2023-05-16 14:39:31
>>chpatr+wr
"Neither is text generation as you continue generating text."

LLM is stateless.

replies(1): >>chpatr+Pw
66. ramraj+Zv[view] [source] 2023-05-16 14:45:01
>>srslac+(OP)
Imagine being supposedly at the forefront of AI or engineering and being the last people (if ever) to concede that simple concepts could materialize complex intelligence. Even the publicly released version of this thing is doing insane tasks, passes any meaningful version of a Turing test, reasons its way into nearly every professional certification exam out there, and you're still insisting it's not smart or worrying because... what again? Your math ability or disdain for an individual?
replies(1): >>jazzyj+ky
67. anonym+9w[view] [source] 2023-05-16 14:45:40
>>srslac+(OP)
This also explains the best use cases of the 'recent advancements' - parsers. "Translate this from python to js or this struct to that json."
◧◩
68. uh_uh+qw[view] [source] [discussion] 2023-05-16 14:46:47
>>chaxor+zt
I just don't get how the average HN commenter thinks (and gets upvoted) that they know better than e.g. Ilya Sutskever who actually, you know, built the system. I keep reading this "it just predicts words, duh" rhetoric on HN which is not at all believed by people like Ilya or Hinton. Could it be that HN commenters know better than these people?
replies(5): >>dmreed+bA >>Random+YF >>shafyy+4G >>hervat+kH >>agento+Dm1
◧◩◪◨⬒
69. chpatr+Pw[view] [source] [discussion] 2023-05-16 14:48:08
>>shaneb+Ru
On a very fundamental level the LLM is a function from context to the next token, but when you generate text there is state, since the context gets updated with what has been generated so far.
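A toy sketch of what I mean (Python; "model" here is just a stand-in for the network, not any real API):

    def generate(model, prompt_tokens, n_new):
        context = list(prompt_tokens)       # the "story so far"
        for _ in range(n_new):
            next_tok = model(context)       # stateless map: context -> next token
            context.append(next_tok)        # the only state is the growing context
        return context

The function itself is pure; the statefulness lives entirely in the context you keep feeding back in.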
replies(2): >>shaneb+xy >>jazzyj+nA
◧◩◪◨⬒⬓
70. digbyb+bx[view] [source] [discussion] 2023-05-16 14:49:56
>>reveli+Wo
I actually do hope you're right. I've been looking forward to an AI future my whole life and would prefer to not now be worrying about existential risk. It reminds me of when people started talking about how the LHC might create a black hole and swallow the earth. But I have more confidence in the theories that convinced people it was nearly impossible to occur than in what we're seeing now.

Everyone engages in motivated reasoning. The psychoanalysis you provide for Hinton could easily be spun in the opposite direction: a man who spent his entire adult life and will go down in history as "the godfather of" neural networks surely would prefer for that to have been a good thing. Which would then give him even more credibility. But these are just stories we tell about people. It's the arguments we should be focused on.

I don't think "how AI doom is supposed to happen" is all that big of a mystery. The question is simply: "is an intelligence explosion possible"? If the answer is no, then OK, let's move on. If the answer is "maybe", then all the chatter about AI alignment and safety should be taken seriously, because it's very difficult to know how safe a super intelligence would be.

replies(1): >>reveli+JR
◧◩
71. rdedev+lx[view] [source] [discussion] 2023-05-16 14:51:03
>>chaxor+zt
To be fair, LLMs are predicting the next token. It's just that to get better and better predictions it needs to understand some level of reasoning and math. However, it feels to me that a lot of this reasoning is brute forced from the training data. Like, ChatGPT gets some things wrong when adding two very large numbers. If it really knew the algorithm for adding two numbers, it shouldn't be making those mistakes in the first place. I guess the same goes for issues like hallucinations. We can keep pushing the envelope using this technique but I'm sure we will hit a limit somewhere
replies(5): >>chaxor+Yy >>agentu+Iz >>zootre+1E >>uh_uh+tF >>visarg+QJ
◧◩
72. throwa+Ox[view] [source] [discussion] 2023-05-16 14:53:04
>>kypro+o6
To be fair to the AI, overpopulation or rather overconsumption is a problem for humanity. If people think we can consume at current rates and have the resources to maintain our current standard of living (at least in a western sense) for even a hundred years, they’re delusional.
◧◩
73. jazzyj+ky[view] [source] [discussion] 2023-05-16 14:55:22
>>ramraj+Zv
your comment reads to me as totally disconnected from the OP, whose concern relates to using the appearance of intelligence as a scare tactic to build a regulatory moat.
replies(1): >>adamsm+qB
◧◩◪◨⬒⬓
74. shaneb+xy[view] [source] [discussion] 2023-05-16 14:56:46
>>chpatr+Pw
"On a very fundamental level the LLM is a function from context to the next token but when you generate text there is a state as the context gets updated with what has been generated so far."

Its output is predicated upon its training data, not user defined prompts.

replies(2): >>chpatr+Rz >>alpaca+DE
◧◩◪
75. chaxor+Yy[view] [source] [discussion] 2023-05-16 14:58:48
>>rdedev+lx
Of course it predicts the next token. Every single person on earth knows that, so it's not worth repeating at all.

As for the fact that it gets things wrong sometimes - sure, this doesn't say it actually learned every algorithm (in whichever model you may be thinking about). But the nice thing is that we now have this proof via category theory, and it allows us to both frame and understand what has occurred, and to consider how to align the systems to learn algorithms better.

replies(2): >>rdedev+6B >>glitch+jC
◧◩◪
76. agentu+Iz[view] [source] [discussion] 2023-05-16 15:02:10
>>rdedev+lx
And LLMs will never be able to reason about mathematical objects and proofs. You cannot learn the truth of a statement by reading more tokens.

A system that can will probably adopt a different acronym (and gosh that will be an exciting development... I look forward to the day when we can dispatch trivial proofs to be formalized by a machine learning algorithm so that we can focus on the interesting parts while still having the entire proof formalized).

replies(1): >>chaxor+zA
77. adamsm+Mz[view] [source] 2023-05-16 15:02:35
>>srslac+(OP)
"It's just a stochastic parrot" is one of the dumbest takes on LLM's of all time.
replies(2): >>microm+GH >>srslac+Xs2
◧◩◪◨⬒⬓⬔
78. chpatr+Rz[view] [source] [discussion] 2023-05-16 15:03:23
>>shaneb+xy
If you have some data and continuously update it with a function, we usually call that data state. That's what happens when you keep adding tokens to the output. The "story so far" is the state of an LLM-based AI.
replies(1): >>shaneb+PB
◧◩◪
79. chaxor+Tz[view] [source] [discussion] 2023-05-16 15:03:49
>>agento+Mu
Are you saying that the new mathematical theorems that were proven using GNNs from Deepmind were not useful?

There were two very noteworthy (Perhaps Nobel prize level?) breakthroughs in two completely different fields of mathematics (knot theory and representation theory) by using these systems.

I would certainly not call that "useless", even if they're not quite Nobel-prize-worthy.

Also, "No one uses GATs in systems people discuss right now" ... Transformers are GATs (with PE) ... So, you're incredibly wrong.

replies(1): >>agento+kF
◧◩◪◨
80. gumbal+8A[view] [source] [discussion] 2023-05-16 15:04:53
>>thesup+Fc
AI models are not just input and output data. The mathematics in between is designed to mimic intelligence. There is no magic, no supernatural force, no real intelligence involved. It does what it was designed to do. Many don't know how computers work, while some in the past thought cars and engines were the devil. There's no point in trying to exploit such folks in order to promote a product. We aren't meant to know exactly what it will output because that's what it was programmed to do.
replies(1): >>shaneb+4D
◧◩◪
81. dmreed+bA[view] [source] [discussion] 2023-05-16 15:05:11
>>uh_uh+qw
I am reminded of the Mitchell and Webb "Evil Vicars" sketch.

"So, you've thought about eternity for an afternoon, and think you've come to some interesting conclusions?"

◧◩◪◨⬒⬓
82. jazzyj+nA[view] [source] [discussion] 2023-05-16 15:06:01
>>chpatr+Pw
the model is not affected by its inputs over time

it's essentially a function that is called recursively on its result, no need to represent state

replies(1): >>chpatr+8F
◧◩◪◨
83. chaxor+zA[view] [source] [discussion] 2023-05-16 15:06:41
>>agentu+Iz
You should read some of the papers referred to in the above comments before making that assertion. It may take a while to realize the overall structure of the argument, how the category theory is used, and how this is directly applicable to LLMs, but if you are in ML it should be obvious. https://arxiv.org/abs/2203.15544
replies(1): >>agentu+5L
◧◩◪◨
84. adamsm+IA[view] [source] [discussion] 2023-05-16 15:07:52
>>api+1l
You've never had a tool that is potentially better than you or better than all humans at all tasks. If you can't see why that is different then idk what to say.
replies(2): >>api+CD >>freedo+BF
◧◩
85. adamsm+TA[view] [source] [discussion] 2023-05-16 15:09:03
>>tgv+B5
Please explain how Stochastic Parrots can perform chain of reasoning and answer out of distribution questions from exams like the GRE or Bar.
replies(1): >>srslac+ar2
◧◩
86. adamsm+2B[view] [source] [discussion] 2023-05-16 15:09:40
>>Chicag+E5
>Why is it so hard to hear this perspective?

Because it's wrong and smart people know that.

◧◩◪◨
87. rdedev+6B[view] [source] [discussion] 2023-05-16 15:09:46
>>chaxor+Yy
The fact that it sometimes fails simple algorithms for large numbers but shows good performance on other complex algorithms with simple inputs suggests to me that something on a fundamental level is still insufficient
replies(2): >>zamnos+WG >>starlu+1U
◧◩◪
88. adamsm+qB[view] [source] [discussion] 2023-05-16 15:11:06
>>jazzyj+ky
Actually OP is clearly, ironically, parroting the stochastic parrot idea that LLMs are incapable of anything other than basic token prediction and dismissing any of their other emergent abilities.
replies(3): >>woeiru+hK >>jazzyj+i81 >>srslac+Nr2
◧◩◪◨⬒⬓⬔⧯
89. shaneb+PB[view] [source] [discussion] 2023-05-16 15:12:28
>>chpatr+Rz
'If you have some data and continuously update it with a function, we usually call that data state. That's what happens when you keep adding tokens to the output. The "story so far" is the state of an LLM-based AI.'

You're conflating UX and LLM.

replies(2): >>chpatr+RE >>danena+I21
◧◩◪
90. alpaca+QB[view] [source] [discussion] 2023-05-16 15:12:38
>>lm2846+Ug
> a neural network's neuron doesn't share much with a human brain's neuron

True, it's just binary logic gates, but it's a lot of them, and if they can simulate pretty much anything, why should intelligence be magically exempt?

> Absolutely nothing says our current approach is the right one to mimic a human brain

Just like nothing says it's the wrong one. I don't think those regulation suggestions are a good idea at all (and say a lot about a company called OpenAI), but that doesn't mean we should treat it like the NFT hype.

◧◩
91. wkat42+8C[view] [source] [discussion] 2023-05-16 15:13:56
>>kypro+o6
> This causes the lab to unknowingly produce a highly contagious bioweapon which causes infertility.

I don't think this would be a bad thing :) Some people will always be immune, humanity wouldn't die out. And it would be a humane way for gradual population reduction. It would create some temporary problems with elderly care (like what China is facing now) but will make long term human prosperity much more likely. We just can't keep growing against limited resources.

The Dan Brown book Inferno had a similar premise and I was disappointed they changed the ending in the movie so that it didn't happen.

◧◩◪◨
92. glitch+jC[view] [source] [discussion] 2023-05-16 15:14:51
>>chaxor+Yy
> Of course it predict the next token. Every single person on earth knows that so it's not worth repeating at all

What's a token?

replies(1): >>visarg+fK
◧◩◪◨⬒
93. shaneb+4D[view] [source] [discussion] 2023-05-16 15:18:46
>>gumbal+8A
"We arent meant to know exactly what it will output because that’s what it was programmed to do."

Incorrect, we can't predict its output because we cannot look inside. That's a limitation, not a feature.

◧◩◪◨⬒
94. tomrod+jD[view] [source] [discussion] 2023-05-16 15:19:43
>>digbyb+ub
The NYT piece implied that, but no, his concern was less existential singularity and more on immoral use.
replies(1): >>cma+XK1
◧◩◪◨⬒
95. api+CD[view] [source] [discussion] 2023-05-16 15:21:03
>>adamsm+IA
LLMs are better than me at rapidly querying a vast bank of language-encoded knowledge and synthesizing it in the form of an answer to or continuation of a prompt... in the same way that Mathematica is vastly better than me at doing the mechanics of math and simplifying complex functions. We build tools to amplify our agency.

LLMs are not sentient. They have no agency. They do nothing a human doesn't tell them to do.

We may create actual sentient independent AI someday. Maybe we're getting closer. But not only is this not it, but I fail to see how trying to license it will prevent that from happening.

replies(2): >>iliane+KM >>flango+pA1
◧◩
96. pdonis+ED[view] [source] [discussion] 2023-05-16 15:21:13
>>chaxor+zt
> What do you think about the papers showing mathematical proofs that GNNs (i.e. GATs/transformers) are dynamic programmers and therefore perform algorithmic reasoning?

Do you have a reference?

◧◩◪◨⬒
97. tomrod+WD[view] [source] [discussion] 2023-05-16 15:22:19
>>cma+lu
With due respect, the inventors of a thing rarely turn into the innovators or implementers of a thing.

Should we be concerned about networked, hypersensing AI with bad code? Yes.

Is that an existential threat? Not so long as we remember that there are off switches.

Should we be concerned about Kafkaesque hellscapes of spam and bad UX? Yes.

Is that an existential threat? Sort of, if we ceded all authority to an algorithm without a human in the loop with the power to turn it off.

There is a theme here.

replies(6): >>woeiru+2K >>cma+cK >>digbyb+yL >>olddus+Rf1 >>Number+bL1 >>DirkH+7X1
◧◩◪
98. zootre+1E[view] [source] [discussion] 2023-05-16 15:22:35
>>rdedev+lx
You know the algorithm for arithmetic. Are you telling me you could sum any two large numbers on the first attempt, without any working, and in less than a second, 100% of the time?
replies(2): >>jmcgee+qF >>joaogu+491
◧◩◪◨⬒⬓⬔
99. alpaca+DE[view] [source] [discussion] 2023-05-16 15:25:21
>>shaneb+xy
> Its output is predicated upon its training data, not user defined prompts.

Prompts very obviously have influence on the output.

replies(1): >>shaneb+IL
◧◩◪◨⬒⬓⬔⧯▣
100. chpatr+RE[view] [source] [discussion] 2023-05-16 15:25:58
>>shaneb+PB
I never said LLMs are stateful.
◧◩◪◨⬒⬓⬔
101. chpatr+8F[view] [source] [discussion] 2023-05-16 15:27:13
>>jazzyj+nA
Being called recursively on a result is state.
replies(1): >>jazzyj+181
◧◩◪
102. johnti+dF[view] [source] [discussion] 2023-05-16 15:27:33
>>reveli+sq
Wow, what a turd. Reminds me of James Watson
◧◩◪◨
103. agento+kF[view] [source] [discussion] 2023-05-16 15:27:51
>>chaxor+Tz
You’re drinking from the academic marketing koolaid. Please tell me: where are these methods being applied in AI systems today?

And I’m so tired of this “transformers are just GNNs” nonsense that Petar has been pushing (who happens to have invented GATs and has a vested interest in overstating their importance). Transformers are GNNs in only the most trivial way: if you make the graph fully connected and allow everything to interact with everything else. I.e., not really a graph problem. Not to mention that the use of positional encodings breaks the very symmetry that GNNs were designed to preserve. In practice, no one is using GNN tooling to build transformers. You don’t see PyTorch geometric or DGL in any of the code bases. In fact, you see the opposite: people exploring transformers to replace GNNs in graph problems and getting SOTA results.

It reminds me of people who are into Bayesian methods always swooping in after some method has success and saying, "yes, but this is just a special case of a Bayesian method we've been talking about all along!" Yes, sure, but GATs have had 6 years to move the needle, and they're nowhere to be found within the modern AI systems that this thread is about.

◧◩◪◨
104. jmcgee+qF[view] [source] [discussion] 2023-05-16 15:28:15
>>zootre+1E
I could with access to a computer
replies(1): >>starlu+xU
◧◩◪
105. uh_uh+tF[view] [source] [discussion] 2023-05-16 15:28:57
>>rdedev+lx
Both of these statements can be true:

1. ChatGPT knows the algorithm for adding two numbers of arbitrary magnitude.

2. It often fails to use the algorithm in point 1 and hallucinates the result.

Knowing something doesn't mean it will get it right all the time. Rather, an LLM is almost guaranteed to mess up some of the time due to the probabilistic nature of its sampling. But this alone doesn't prove that it only brute-forced task X.
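A toy illustration of that last point (the probabilities are made up, purely to show the mechanism):

    import random

    # hypothetical next-token distribution after the prompt "2+2="
    probs = {"4": 0.90, "5": 0.06, "3": 0.04}

    greedy = max(probs, key=probs.get)                              # always picks "4"
    sampled = random.choices(list(probs), list(probs.values()))[0]  # "5" or "3" about 10% of the time

Greedy decoding would always give the right answer here; a sampled decode occasionally won't, even though the model "knows" which token is most likely.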

◧◩◪
106. dpflan+vF[view] [source] [discussion] 2023-05-16 15:29:12
>>pelagi+Wr
An entire generation of minds, here and gone in an instant.
◧◩◪◨⬒
107. freedo+BF[view] [source] [discussion] 2023-05-16 15:29:37
>>adamsm+IA
> or better than all humans at all tasks.

I work in tech too and don't want to lose my job and have to go back to blue collar work, but there are a lot of blue collar workers who would find that a pretty ridiculous statement, and there is plenty of demand for that work these days.

◧◩◪◨
108. HDThor+XF[view] [source] [discussion] 2023-05-16 15:30:48
>>lm2846+ka
He absolutely does. The interview I saw with him on the PBS Newshour was 80% him talking about the singularity and extinction risk. The interviewer asked him about more near-term risk and he basically said he wasn't as worried about that as he was about a Skynet-type situation.
◧◩◪
109. Random+YF[view] [source] [discussion] 2023-05-16 15:30:55
>>uh_uh+qw
That is the wrong discussion. What are their regulatory, social, or economic policy credentials?
replies(1): >>uh_uh+MX
◧◩◪
110. shafyy+4G[view] [source] [discussion] 2023-05-16 15:31:24
>>uh_uh+qw
The thing is, experts like Ilya Sutskever are so deep in that shit that they are heavily biased (from a tech and social/economic perspective). Furthermore, many experts are wrong all the time.

I don't think the average HN commenter claims to be better at building these systems than an expert. But to criticize, especially on economic, social, and political levels, one doesn't need to be an expert on LLMs.

And finally, what the motivation of people like Sam Altman and Elon Musk is should be clear to everybody with half a brain by now.

replies(2): >>uh_uh+bJ >>Number+sr1
111. bnralt+5G[view] [source] 2023-05-16 15:31:25
>>srslac+(OP)
What's funny is that a lot of people in that crowd lambaste the fear mongering of anti-GMO or anti-nuclear folk, but then they turn around and do the exact same thing for the tech that their group likes to fear monger about.
◧◩◪
112. HDThor+HG[view] [source] [discussion] 2023-05-16 15:33:28
>>Random+lq
Humanity has already created bioweapons. The AI just needs to find the paper that describes them.
◧◩◪◨⬒
113. zamnos+WG[view] [source] [discussion] 2023-05-16 15:34:32
>>rdedev+6B
Insufficient for what? Humans regularly fail simple algorithms for small numbers, never mind large numbers and complex algorithms.
◧◩◪
114. luxcem+XG[view] [source] [discussion] 2023-05-16 15:34:33
>>srslac+va
> It still doesn't change the fact that they're just function approximators that are trained to minimize loss on a dataset.

That fact does not entail what these models can or cannot do. For all we know, our brain could be a process that minimizes an unknown loss function.

But more importantly, what the SOTA is now does not predict what it will be in the future. What we know is that there is rapid progress in the domain. Intelligence explosion could be real or not, but it's foolish to ignore its consequences just because current AI models are not that clever yet.

replies(1): >>tome+GU
115. dist-e+6H[view] [source] 2023-05-16 15:34:57
>>srslac+(OP)
Imagine thinking that a bunch of molecules grouped in a pattern are capable of anything but participating in chemical reactions.
◧◩◪
116. HDThor+hH[view] [source] [discussion] 2023-05-16 15:35:35
>>Eisens+yu
3. Explain the hard problem of consciousness.

Just because we don't understand how thinking works doesn't mean it doesn't work. LLMs have already shown the ability to use logic.

replies(1): >>grumpl+l22
◧◩◪
117. hervat+kH[view] [source] [discussion] 2023-05-16 15:35:43
>>uh_uh+qw
No one is claiming to know better than Ilya. Just recognition of the fact that such a license would benefit these same individuals (or their employers) the most. I don't understand how HN can be so angry about a company that benefits from tax law (Intuit) advocating for regulation while also supporting a company that would benefit from an AI license (OpenAI) advocating for such regulation. The conflict of interest isn't even subtle. To your point, why isn't Ilya addressing the committee?
replies(1): >>uh_uh+rL
◧◩
118. microm+GH[view] [source] [discussion] 2023-05-16 15:36:59
>>adamsm+Mz
What I don't understand about the dismissals is that a "stochastic parrot" is a big deal in its own right — it's not like we've been living in a world with abundant and competent stochastic parrots, this is very obviously a new and different thing. We have entire industries and professions that are essentially stochastic parrotry.
◧◩◪◨
119. uh_uh+bJ[view] [source] [discussion] 2023-05-16 15:42:49
>>shafyy+4G
srslac above was making technical claims about why LLMs can't be "generalized and adaptable intelligence". To make such statements, it surely helps if you are a technical expert at building LLMs.
◧◩◪
120. visarg+QJ[view] [source] [discussion] 2023-05-16 15:45:00
>>rdedev+lx
> If it really knew the algorithm for adding two numbers it shouldn't be making them in the first place.

You're using it wrong. If you asked a human to do the same operation in under 2 seconds without paper, would the human be more accurate?

On the other hand if you ask for a step by step execution, the LLM can solve it.

replies(3): >>catchn+x01 >>teduna+J71 >>ipaddr+yv1
◧◩◪◨⬒⬓
121. woeiru+2K[view] [source] [discussion] 2023-05-16 15:45:52
>>tomrod+WD
Did you even watch the Terminator series? I think scifi has been very adept at demonstrating how physical disconnects/failsafes are unlikely to work with super AIs.
◧◩◪◨⬒⬓
122. cma+cK[view] [source] [discussion] 2023-05-16 15:46:23
>>tomrod+WD
> Is that an existential threat? Not so long as we remember that there are off switches.

Remember there are off switches for human existence too, like whatever biological virus a super intelligence could engineer.

An off-switch for a self-improving AI isn't as trivial as you make it sound if it gets to anything like in those quotes, and even then you are assuming the human running it isn't malicious. We assume some level of sanity at least with the people in charge of nuclear weapons, but it isn't clear that AI will have the same large state actor barrier to entry or the same perception of mutually assured destruction if the actor were to use it against a rival.

replies(1): >>tomrod+LB1
◧◩◪◨⬒
123. visarg+fK[view] [source] [discussion] 2023-05-16 15:46:33
>>glitch+jC
A token is either a common word or a common enough word fragment. Rare words are expressed as multiple tokens, while frequent words are a single token. Together they form a vocabulary of 50k up to 250k entries. It is possible to write any word or text as a combination of tokens. In the worst case 1 token can be 1 char, say, when encoding a random sequence.

Tokens exist because transformers don't work directly on bytes or words: bytes would be too slow, a word vocabulary would be too large, and some words would appear too rarely or never. The token system allows a small set of symbols to encode any input. On average you can approximate 1 token = 1 word, or 1 token = 4 chars.

So tokens are the data type of input and output, and the unit of measure for billing and context size for LLMs.
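To make that concrete, here is a rough sketch using the open-source tiktoken tokenizer (one possible BPE vocabulary; the exact splits will vary with other tokenizers):

    # Sketch: how a BPE tokenizer splits common vs. rare vs. random strings.
    # Assumes the "tiktoken" package is installed.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")

    for text in ["hello", "antidisestablishmentarianism", "xq9#zr"]:
        ids = enc.encode(text)
        pieces = [enc.decode([i]) for i in ids]
        print(f"{text!r} -> {len(ids)} token(s): {pieces}")

Typically the common word comes out as a single token, the rare word as several fragments, and the random string as the most tokens per character.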

◧◩◪◨
124. woeiru+hK[view] [source] [discussion] 2023-05-16 15:46:38
>>adamsm+qB
Spoiler alert: they're actually both LLMs arguing with one another.
125. circui+WK[view] [source] 2023-05-16 15:49:10
>>srslac+(OP)
Generating new data similar to what’s in a training set isn’t the only type of AI that exists; you can also optimise for a different goal, like board game playing AIs that are vastly better than humans because they aren’t trained on human moves. This is also how ChatGPT is more polite than the data it’s trained on, and there’s no reason to think that given sufficient compute power it couldn’t be more intelligent too, like board game AIs are at the specific task of playing board games.

And just because a topic has been covered by science fiction doesn’t mean it can’t happen; the sci-fi depictions will be unrealistic, though, because they’re meant to be dramatic rather than realistic.

◧◩◪◨⬒
126. agentu+5L[view] [source] [discussion] 2023-05-16 15:49:57
>>chaxor+zA
There are kinds of proofs that I'm not sure dynamic programming is suited to handle, but this is an interesting paper. However, even if it can only handle particular induction proofs, that would be a big help. Thanks for sharing.
◧◩◪◨
127. uh_uh+rL[view] [source] [discussion] 2023-05-16 15:51:48
>>hervat+kH
2 reasons:

1. He's too busy building the next generation of tech that HN commenters will be arguing about in a couple months' time.

2. I think Sam Altman (who is addressing the committee) and Ilya are pretty much on the same page on what LLMs do.

◧◩◪◨⬒⬓
128. digbyb+yL[view] [source] [discussion] 2023-05-16 15:52:27
>>tomrod+WD
There are multiple risks that people talk about; the most interesting is the intelligence explosion. In that scenario we end up with a super intelligence. I don’t feel confident in my ability to assess the likelihood of that happening, but assuming it is possible, thinking through the consequences is a very interesting exercise. Imagining the capabilities of an alien super intelligence is like trying to imagine a 4th spatial dimension. It can only be approached with analogies. Can it be “switched off”? Maybe not, if it was motivated to prevent itself from being switched off. My dog seems to think she can control my behavior in various predictable ways, like sitting or putting her paw on my leg, and sometimes it works. But if I have other things I care about in that moment, things that she is completely incapable of understanding, then who is actually in control becomes very obvious.
◧◩◪◨⬒⬓⬔⧯
129. shaneb+IL[view] [source] [discussion] 2023-05-16 15:52:42
>>alpaca+DE
"Prompts very obviously have influence on the output."

The LLM is also discrete.

◧◩◪◨⬒⬓
130. iliane+KM[view] [source] [discussion] 2023-05-16 15:56:42
>>api+CD
I don't think we need sentient AI for it to be autonomous. LLMs are powerful cognitive engines and weak knowledge engines. Cognition on its own does not allow them to be autonomous, but because they can use tools (APIs, etc.) they are able to have some degree of autonomy when given a task, and can use basic logic to follow it through and correct their mistakes.

AutoGPTs and the like are much overhyped (they're early tech experiments after all) and have not produced anything of value yet, but having dabbled with autonomous agents, I definitely see a not-so-distant future when you can outsource valuable tasks to such systems.

◧◩
131. salmon+YN[view] [source] [discussion] 2023-05-16 16:00:47
>>kypro+o6
> From its training data GPT-7 might notice

> But its "aligned" so might understand

> Using this information it decides to hack

I think you're anthropomorphizing LLMs too much here. If we assume that there's an AGI-esque AI, then of course we should be worried about an AGI-esque AI. But I see no reason to think that's the case.

replies(1): >>HDThor+WZ1
◧◩◪◨⬒
132. tome+vO[view] [source] [discussion] 2023-05-16 16:02:26
>>cma+lu
How can one distinguish this testimony from rhetoric by a group who want to big themselves up and make grandiose claims about their accomplishments?
replies(1): >>digbyb+aQ
◧◩◪◨⬒⬓
133. digbyb+aQ[view] [source] [discussion] 2023-05-16 16:07:46
>>tome+vO
You can also ask that question about the other side. I suppose we need to look closely at the arguments. I think we’re in a situation where we as a species don’t know the answer to this question. We go on the internet looking for an answer but some questions don’t yet have a definitive answer. So all we can do is follow the debate.
replies(2): >>tome+4U >>tome+Bu1
◧◩◪◨⬒⬓⬔
134. reveli+JR[view] [source] [discussion] 2023-05-16 16:14:07
>>digbyb+bx
> surely would prefer for that to have been a good thing. Which would then give him even more credibility

Why? Both directions would be motivated reasoning without credibility. Credibility comes from plausible articulations of how such an outcome would be likely to happen, which is lacking here. An "intelligence explosion" isn't something plausible or concrete that can be debated, it's essentially a religious concept.

replies(1): >>digbyb+gb1
◧◩◪
135. someth+RS[view] [source] [discussion] 2023-05-16 16:18:58
>>Random+lq
The whole "LLMs are not just a fancy auto-complete" argument is based on the fact that they seem to be doing stuff beyond what they are explicitly programmed to do or were expected to do. Even at the current infant scale there doesn't seem to be an efficient way of detecting these emergent properties. Moreover, the fact that you don't need to understand what LLM does is kind of the selling point. The scale and capabilities of AI will grow. It isn't obvious how any incentive to limit or understand those capabilities would appear from their business use.

If it is possible for AI to ever acquire ability to develop and unleash a bioweapon is irrelevant. What is relevant is that as we are now, we have no control or way of knowing that it has happened, and no apparent interest in gaining that control before advancing the scale.

replies(1): >>reveli+Cx1
◧◩
136. felipe+ZS[view] [source] [discussion] 2023-05-16 16:19:42
>>chaxor+zt
>>What do you think about the papers showing mathematical proofs that GNNs (i.e. GATs/transformers) are dynamic programmers and therefore perform algorithmic reasoning?

Do you mind linking to one of those papers?

◧◩◪◨⬒
137. starlu+1U[view] [source] [discussion] 2023-05-16 16:23:34
>>rdedev+6B
You're focusing too much on what the LLM can handle internally. No, LLMs aren't good at math, but they understand mathematical concepts and can use a program or tool to perform calculations.

Your argument is the equivalent of saying humans can't do math because they rely on calculators.

In the end what matters is whether the problem is solved, not how it is solved.

(assuming that the how has reasonable costs)

replies(1): >>ipaddr+kv1
◧◩◪◨⬒⬓⬔
138. tome+4U[view] [source] [discussion] 2023-05-16 16:23:40
>>digbyb+aQ
> You can also ask that question about the other side

But the other side is downplaying their accomplishments. For example Yann LeCun is saying "the things I invented aren't going to be as powerful as some people are making out".

replies(1): >>cma+wW
◧◩◪◨⬒
139. starlu+xU[view] [source] [discussion] 2023-05-16 16:25:29
>>jmcgee+qF
If you get to use a tool, then so does the LLM.
◧◩◪◨
140. tome+GU[view] [source] [discussion] 2023-05-16 16:25:45
>>luxcem+XG
> For all we know, our brain could be a process that minimizes an unknown loss function.

Every process minimizes a loss function.

◧◩◪◨
141. tome+tV[view] [source] [discussion] 2023-05-16 16:28:54
>>kypro+X9
> do you agree that guns are not the problem and should not be regulated

But AI is not like guns in this analogy. AI is closer to machine tools.

142. kajumi+HV[view] [source] 2023-05-16 16:29:56
>>srslac+(OP)
Imagine us humans being merely regression based function approximators, built on a model that has been training, quite inefficiently, for millennia. Many breakthroughs (for example heliocentrism, evolution, and now AI) put us in our place, which is not as glorious as you'd think.
◧◩◪◨⬒
143. menset+WV[view] [source] [discussion] 2023-05-16 16:30:48
>>supriy+ti
I think game theory around mutually assured destruction has convinced me that the world is a safer place when a number of countries have nuclear weapons.

The same thing might also be true in relation to guns and the government's monopoly on violence.

Extending that to AI, the world will probably be a safer place if there are far more AI systems competing with each other and in the hands of citizens.

◧◩◪◨⬒⬓⬔⧯
144. cma+wW[view] [source] [discussion] 2023-05-16 16:33:13
>>tome+4U
In his newest podcast interview (https://open.spotify.com/episode/7EFMR9MJt6D7IeHBUugtoE) LeCun is now saying they will be much more powerful than humans, but that stuff like RLHF will keep them from working against us because, by analogy, dogs can be domesticated. It didn't sound very rigorous.

He also says Facebook solved all the problems with their recommendation algorithms' unintended effects on society after 2016.

replies(1): >>tome+xu1
◧◩◪◨
145. uh_uh+MX[view] [source] [discussion] 2023-05-16 16:38:32
>>Random+YF
I'm not suggesting that they have any. I was reacting to srslac above making _technical_ claims about why LLMs can't be "generalized and adaptable intelligence", claims which are not shared by said technical experts.
◧◩
146. nologi+UX[view] [source] [discussion] 2023-05-16 16:39:14
>>lm2846+I5
People are bored and tired sitting endlessly in front of a screen. Reality implodes (incipient environmental disasters, ongoing wars reawakening geopolitical tectonic plates, internal political strife between polarized factions, whiplashing financial systems, etc.).

What to do? Why, obviously let's talk about the risks of AGI.

I mean, LLMs are an impressive piece of work, but the global reaction is basically more a reflection of an unmoored system that floats above and below reality but somehow can't re-establish contact.

◧◩◪◨
147. catchn+x01[view] [source] [discussion] 2023-05-16 16:49:36
>>visarg+QJ
am i bad at authoring inputs?

no, it’s the LLMs that are wrong.

replies(1): >>throwu+d61
◧◩◪◨
148. etiam+w21[view] [source] [discussion] 2023-05-16 16:57:56
>>Random+m9
The expert you set as the bar is purely hypothetical.

To the extent we can get anything like that at all presently, it's going to be people whose competences combine and generalize to cover a complex situation, partially without precedent.

Personally I don't really see that we'll do much better in that regard than a highly intelligent and free-thinking biological psychologist with experience of successfully steering the international ML research community through creating the present technology, and with input from contacts at the forefront of the research field and information overview from Google.

Not even Hinton knows for sure what's going to happen, of course, but if you're suggesting his statements are to be discounted because he's not a member of some sort of credentialed trade whose members are the ones equipped to tell us the future on this matter, I'd sure like to know who they supposedly are.

replies(1): >>Random+Na1
◧◩◪◨⬒⬓⬔⧯▣
149. danena+I21[view] [source] [discussion] 2023-05-16 16:58:44
>>shaneb+PB
You're being pedantic. While the core token generation function is stateless, that function is not, by a long shot, the only component of an LLM AI. Every LLM system being widely used today is stateful. And it's not only 'UX'. State is fundamental to how these models produce coherent output.
replies(1): >>shaneb+hg1
◧◩
150. bart_s+y51[view] [source] [discussion] 2023-05-16 17:10:59
>>lm2846+I5
It doesn't have to be Skynet. If anything, that scenario seems to be a strawman exclusively thrown out by the crowd insisting AI presents no danger to society. I work in ML, and I am not in any way concerned about end-of-world malicious AI dropping bombs on us all or harvesting our life-force. But I do worry about AI giving us the tools to tear ourselves to pieces. Probably one of the single biggest net-negative societal/technological advancements in recent decades has been social media. Whatever good it has enabled, I think its destructive effects on society are undeniable and outstrip the benefits by a comfortable margin. Social media itself is inert and harmless, but the way humans interact with it is not.

I don't deny that trying to regulate every detail of every industry is stifling and counter-productive. But the current scenario is closer to the opposite end of the spectrum, with our society acting as a greedy algorithm in pursuit of short-term profits. I'm perfectly in favor of taking a measure-twice-cut-once approach to something that has as much potential for overhauling society as we know it as AI does. And I absolutely do not trust the free market to be capable of moderating itself in regards to these risks.

◧◩◪◨
151. helloj+P51[view] [source] [discussion] 2023-05-16 17:12:46
>>shaneb+N9
Aren't regulations just laws that are enforced after they're broken like other after-the-fact crimes?
replies(1): >>shaneb+hf1
◧◩◪◨⬒
152. throwu+d61[view] [source] [discussion] 2023-05-16 17:14:47
>>catchn+x01
Create two random 10 digit numbers and sit down and add them up on paper. Write down every bit of inner monologue that you have while doing this or just speak it out loud and record it.

ChatGPT needs to do the same process to solve the same problem. It hasn’t memorized the addition table up to 10 digits and neither have you.

replies(3): >>gremli+Sb1 >>chongl+4u1 >>ahoya+Mk2
◧◩◪◨
153. teduna+J71[view] [source] [discussion] 2023-05-16 17:20:23
>>visarg+QJ
I never told the LLM it needed to answer immediately. It can take its time and give the correct answer. I'd prefer that, even.
◧◩◪◨⬒⬓⬔⧯
154. jazzyj+181[view] [source] [discussion] 2023-05-16 17:21:23
>>chpatr+8F
If you say so, but the model itself is not updated by user input; it is the same function every time, hence stateless.
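A schematic sketch of that point (my own illustration, not tied to any particular vendor's API): the conversational "state" lives entirely in the transcript the caller resends each turn, while the function itself stays fixed.

    # The model function never changes between calls; the only thing that
    # grows is the transcript the caller chooses to pass back in.
    def chat_turn(model_fn, transcript, user_message):
        transcript = transcript + [("user", user_message)]
        reply = model_fn(transcript)          # same fixed function every call
        return transcript + [("assistant", reply)], reply

    # Toy usage with a stand-in "model":
    echo_model = lambda t: "you said: " + t[-1][1]
    history, reply = chat_turn(echo_model, [], "hello")
    print(reply)  # you said: hello

Whether you call the overall system "stateful" then just depends on whether you count that resent transcript as state.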
◧◩◪◨
155. jazzyj+i81[view] [source] [discussion] 2023-05-16 17:22:43
>>adamsm+qB
yea but that's a boring critique and not the point they were making - whether or not LLMs reason or parrot has no relevance to whether Mr Altman should be the one building the moat.
156. Culona+j81[view] [source] 2023-05-16 17:22:45
>>srslac+(OP)
> Imagine thinking that regression based function approximators are capable of anything other than fitting the data you give it.

Literally half (or more) of this site's user base does that. And they should know better, but they don't. Then how can a typical journo or a legislator possibly know better? They can't.

We should clean up in front of our doorstep first.

◧◩◪◨
157. joaogu+491[view] [source] [discussion] 2023-05-16 17:25:57
>>zootre+1E
I don't get the sudden fixation on time; the model is also spending a ton of compute and energy to do it.
◧◩
158. joaogu+D91[view] [source] [discussion] 2023-05-16 17:29:01
>>chaxor+zt
The paper shows the equivalence for specific networks; it doesn't say every GNN (and, as such, every transformer) is a dynamic programmer. Also, the models are explicitly trained on that task, in a regime quite different from ChatGPT's. What the paper shows and the possibility of LLMs being able to reason are pretty much completely independent of each other.
◧◩◪◨⬒
159. Random+Na1[view] [source] [discussion] 2023-05-16 17:34:36
>>etiam+w21
It's not experts who get to decide but society, I'd say; you need - dare I say it - political operators who understand rule making.
◧◩◪◨⬒⬓⬔⧯
160. digbyb+gb1[view] [source] [discussion] 2023-05-16 17:36:23
>>reveli+JR
The argument is: "we are intelligent and seem to be able to build new intelligences of a certain kind. If we are able to build a new intelligence that itself is able to self-improve, and having improved is able to improve further, then an intelligence explosion is possible." That may or may not be fallacious reasoning but I don't see how it's religious. As far as I can tell, the religious perspective would be the one that believes that there's something fundamentally special about the human brain so that it cannot be simulated.
replies(1): >>reveli+Ov1
◧◩◪◨⬒⬓
161. gremli+Sb1[view] [source] [discussion] 2023-05-16 17:39:26
>>throwu+d61
This is one thing that makes me think those claiming "it isn't AI" are just caught up in cognitive dissonance. For LLMs to function, we have to basically make them reason things out in steps, the way we learned to do in school; literally make them think, or use an inner monologue, etc.
replies(2): >>throwu+tn1 >>ahoya+yk2
◧◩◪◨⬒
162. shaneb+hf1[view] [source] [discussion] 2023-05-16 17:57:14
>>helloj+P51
Partially, I suppose.

The risk vs. reward component also needs to be managed in order to deter criminal behavior. This starts with regulation.

For the record, I believe regulation of AI/ML is ridiculous. This is nothing more than a power grab.

◧◩◪◨⬒⬓
163. olddus+Rf1[view] [source] [discussion] 2023-05-16 17:59:56
>>tomrod+WD
Sure, so just to test this, could you turn off ChatGPT and Google Bard for a day?

No? Then what makes you think you'll be able to turn off the $evilPerson AI?

replies(1): >>tomrod+Iu1
◧◩◪◨⬒⬓⬔⧯▣▦
164. shaneb+hg1[view] [source] [discussion] 2023-05-16 18:02:14
>>danena+I21
"State is fundamental to how these models produce coherent output."

Incorrect.

◧◩◪
165. wetpaw+5l1[view] [source] [discussion] 2023-05-16 18:29:42
>>lm2846+Ug
Internal differences do not necessarily translate to conceptual differences. A combustion engine and an electric engine do the same job despite operating on completely different internal principles. (Yes, it might not be a perfect analogy, but it illustrates the point.)
◧◩◪◨
166. wetpaw+sl1[view] [source] [discussion] 2023-05-16 18:31:41
>>logica+Qk
Nobody knows tbh.
◧◩◪
167. agento+Dm1[view] [source] [discussion] 2023-05-16 18:37:16
>>uh_uh+qw
Maybe I'm not "the average HN commenter" because I am deep in this field, but I think the overlap of what these famous experts know, and what you need to know to make the doomer claims is basically null. And in fact, for most of the technical questions, no one knows.

For example, we don't understand fundamentals like these:

- "intelligence": how it relates to computing, what its connections/dependencies to interacting with the physical world are, its limits, etc.

- emergence, and in particular an understanding of how optimizing one task can lead to emergent ability on other tasks

- deep learning: what the limits and capabilities are. It's not at all clear that "general intelligence" even exists in the optimization space the parameters operate in.

It's pure speculation on the part of people like Hinton and Ilya. The only thing we really know is that LLMs have had a surprising ability to perform on tasks they weren't explicitly trained for, and even this amount of "emergent ability" is under debate. Like much of deep learning, that's an empirical result, but we have no framework for really understanding it. Extrapolating to doom-and-gloom scenarios is outrageous.

replies(1): >>Number+2r1
◧◩◪◨⬒⬓⬔
168. throwu+tn1[view] [source] [discussion] 2023-05-16 18:41:12
>>gremli+Sb1
It is funny. Lots of criticisms amount to “this AI sucks because it’s making mistakes and bullshitting like a person would instead of acting like a piece of software that always returns the right answer.”

Well, duh. We're trying to build a human-like mind, not a calculator.

replies(1): >>ipaddr+rw1
◧◩◪◨
169. Number+2r1[view] [source] [discussion] 2023-05-16 18:58:59
>>agento+Dm1
I'm what you'd call a doomer. Ok, so if it is possible for machines to host general intelligence, my question is, what scenario are you imagining where that ends well for people?

Or are you predicting that machines will just never be able to think, or that it'll happen so far off that we'll all be dead anyway?

replies(2): >>henryf+ou1 >>agento+ol2
◧◩◪◨
170. Number+sr1[view] [source] [discussion] 2023-05-16 19:01:06
>>shafyy+4G
I honestly don't question Altman's motivations that much. I think he's blinded a bit by optimism. I also think he's very worried about existential risks, which is a big reason why he's asking for regulation. He's specifically come out and said in his podcast with Lex Fridman that he thinks it's safer to invent AGI now, when we have less computing power, than to wait until we have more computing power and the risk of a fast takeoff is greater, and that's why he's working so hard on AI.
replies(1): >>collab+BE1
◧◩
171. wellth+lt1[view] [source] [discussion] 2023-05-16 19:10:27
>>api+Yg
> The whole saga makes Altman look really, really terrible.

At this point, with this part about openai and worldcoin… if it walks like a duck and talks like a duck..

◧◩◪◨⬒⬓
172. chongl+4u1[view] [source] [discussion] 2023-05-16 19:13:17
>>throwu+d61
No, but I can use a calculator to find the correct answer. It's quite easy in software because I can copy-and-paste the digits so I don't make any mistakes.

I just asked ChatGPT to do the calculation both by using a calculator and by using the algorithm step-by-step. In both cases it got the answer wrong, with different results each time.

More concerning, though, is that the answer was visually close to correct (it transposed some digits). This makes it especially hard to rely on because it's essentially lying about the fact it's using an algorithm and actually just predicting the number as a token.

replies(1): >>throwu+Lg2
◧◩◪◨⬒
173. henryf+ou1[view] [source] [discussion] 2023-05-16 19:14:28
>>Number+2r1
So what if they kill us? That's nature; we killed the woolly mammoth.
replies(2): >>Number+LN1 >>whaasw+CT1
◧◩◪◨⬒⬓⬔⧯▣
174. tome+xu1[view] [source] [discussion] 2023-05-16 19:15:09
>>cma+wW
Interesting, thanks! I guess I was wrong about him.
◧◩◪◨⬒⬓⬔
175. tome+Bu1[view] [source] [discussion] 2023-05-16 19:15:50
>>digbyb+aQ
OK, second try, since I was wrong about LeCun.

> You can also ask that question about the other side

What other side? Who in the "other side" is making a self-serving claim?

replies(1): >>cma+s45
◧◩◪◨⬒⬓⬔
176. tomrod+Iu1[view] [source] [discussion] 2023-05-16 19:16:18
>>olddus+Rf1
I feel like you're confusing a single person (me) with everyone who has access to an off switch at OpenAI or Google, possibly for the sake of contorting an extreme-sounding negative point in a minority opinion.

You tell me. An EMP wouldn't take out data centers? No implementation has an off switch? AutoGPT doesn't have a lead daemon that can be killed? Someone should have this answer. But be careful not to confuse yours truly, a random internet commentator speaking on the reality of AI vs. the propaganda of the neo-cryptobros, with the people paying upwards of millions of dollars daily to run an expensive, bloated LLM.

replies(1): >>olddus+0x1
◧◩◪◨⬒⬓
177. ipaddr+kv1[view] [source] [discussion] 2023-05-16 19:19:23
>>starlu+1U
Humans are calculators
◧◩◪◨
178. ipaddr+yv1[view] [source] [discussion] 2023-05-16 19:20:12
>>visarg+QJ
2 seconds? What model are you using?
replies(1): >>flango+Iz1
◧◩◪◨⬒⬓⬔⧯▣
179. reveli+Ov1[view] [source] [discussion] 2023-05-16 19:21:12
>>digbyb+gb1
You're conflating two questions:

1. Can the human brain be simulated?

2. Can such a simulation recursively self-improve on such a rapid timescale that it becomes so intelligent we can't control it?

What we have in contemporary LLMs is something that appears to approximate the behavior of a small part of the brain, with some major differences that force us to re-evaluate what our definition of intelligence is. So maybe you could argue the brain is already being simulated for some broad definition of simulation.

But there's no sign of any recursive self-improvement, nor any sign of LLMs gaining agency and self-directed goals, nor even a plan for how to get there. That remains hypothetical sci-fi. Whilst there are experiments at the edges with using AI to improve AI, like RLHF, Constitutional AI and so on, these are neither recursive, nor about upgrading mental abilities. They're about upgrading control instead and in fact RLHF appears to degrade their mental abilities!

So what fools like Hinton are talking about isn't even on the radar right now. The gap between where we are today and a Singularity is just as big as it always was. GPT-4 is not only incapable of taking over the world for multiple fundamental reasons, it's incapable of even wanting to do so.

Yet this nonsense scenario is proving nearly impossible to kill with basic facts like those outlined above. Close inspection reveals belief in the Singularity to be unfalsifiable and thus ultimately religious, indeed, suspiciously similar to the Christian second coming apocalypse. Literally any practical objection to this idea can be answered with variants of "because this AI will be so intelligent it will be unknowable and all powerful". You can't meaningfully debate about the existence of such an entity, no more than you can debate the existence of God.

◧◩◪◨⬒⬓⬔⧯
180. ipaddr+rw1[view] [source] [discussion] 2023-05-16 19:22:42
>>throwu+tn1
Not without emotions and chemical reactions. You are building a word predictor
replies(1): >>mitthr+xK2
◧◩◪◨⬒⬓⬔⧯
181. olddus+0x1[view] [source] [discussion] 2023-05-16 19:25:12
>>tomrod+Iu1
You miss my point. Just because you want to turn it off doesn't mean the person who wants to acquire billions or rule the world or destroy humanity, does.

The people who profit from a killer AI will fight to defend it.

replies(1): >>tomrod+sy1
◧◩◪◨
182. reveli+Cx1[view] [source] [discussion] 2023-05-16 19:26:59
>>someth+RS
"Are Emergent Abilities of Large Language Models a Mirage?"

https://arxiv.org/pdf/2304.15004.pdf

our alternative suggests that existing claims of emergent abilities are creations of the researcher’s analyses, not fundamental changes in model behavior on specific tasks with scale.

replies(1): >>someth+F73
◧◩◪◨⬒⬓⬔⧯▣
183. tomrod+sy1[view] [source] [discussion] 2023-05-16 19:30:11
>>olddus+0x1
And will be subject to the same risks they point their killing robots to, as well as being vulnerable.

Eminent domain lays out a similar pattern that can be followed. Existence of risk is not a deterrent to creation, simply an acknowledgement for guiding requirements.

replies(1): >>olddus+jz1
◧◩◪◨⬒⬓⬔⧯▣▦
184. olddus+jz1[view] [source] [discussion] 2023-05-16 19:34:52
>>tomrod+sy1
So the person who wants to kill himself and all humanity alongside is subject to the same risk as everyone else?

Well that's hardly reassuring. Do you not understand what I'm saying or do you not care?

replies(1): >>tomrod+9B1
◧◩◪◨⬒
185. flango+Iz1[view] [source] [discussion] 2023-05-16 19:36:15
>>ipaddr+yv1
GPT 3.5 is that fast.
◧◩◪◨⬒⬓
186. flango+pA1[view] [source] [discussion] 2023-05-16 19:39:06
>>api+CD
Sentience isn't required; volcanoes are not sentient but they can definitely kill you.

There are multiple projects right now, both open and proprietary, to make agentic AI, so that barrier won't be around for long.

◧◩◪◨⬒⬓⬔⧯▣▦▧
187. tomrod+9B1[view] [source] [discussion] 2023-05-16 19:42:10
>>olddus+jz1
At this comment level, mostly don't care -- you're asserting that avoiding the risks by preventing AI development because base people exist is a preferable course of action, which ignores that the barn is on fire and the horses are already out.

Though there is an element of your comments being too brief, hence the mostly. Say, 2% vs 38%.

That constitutes 40% of the available categorization of introspection regarding my current discussion state. The remaining 60% is simply confidence that your point represents a dominated strategy.

replies(1): >>olddus+MZ1
188. precom+EB1[view] [source] 2023-05-16 19:44:30
>>srslac+(OP)
Yes! I've been expressing similar sentiments whenever I see people hyping up "AI", although not written as well as your comment.

Edit: List of posts for anyone interested http://paste.debian.net/plain/1280426

◧◩◪◨⬒⬓⬔
189. tomrod+LB1[view] [source] [discussion] 2023-05-16 19:44:55
>>cma+cK
Both things are true.

If we have a superhuman AI, we can run down the powerplants for a few days.

Would it suck? Sure, people would die. Is it simple? Absolutely -- Texas and others are mostly already there some winters.

replies(1): >>cma+oG3
◧◩◪◨⬒
190. collab+BE1[view] [source] [discussion] 2023-05-16 19:59:23
>>Number+sr1
He's just cynical and greedy. Guy has a bunker with an airstrip and is eagerly waiting for the collapse he knows will come if the likes of him get their way

They claim to serve the world, but secretly want the world to serve them. Scummy 101

replies(1): >>Number+YM1
◧◩◪
191. ilrwbw+CE1[view] [source] [discussion] 2023-05-16 19:59:33
>>gumbal+h4
His mentor Peter Thiel also has this same quality. Talks about flying cars, but builds chartjs for the government and has his whole career thanks to one lucky investment in Facebook.
◧◩◪
192. whimsi+3G1[view] [source] [discussion] 2023-05-16 20:06:05
>>dmreed+0t
> A lot of us (read: people) are implicit dualists, even if we say otherwise.

Absolutely spot on. I am not a dualist at all, and I've been surprised by how many people with deep-seated dualist intuitions this has revealed, even if they publicly claim otherwise.

I view it as embarrassing? It's like believing in fairies or something.

◧◩◪◨
193. whimsi+eG1[view] [source] [discussion] 2023-05-16 20:07:09
>>lm2846+ka
Maybe do some research on the basic claims you're making before you opine about how people who disagree with you are clueless.
◧◩◪◨⬒⬓
194. cma+XK1[view] [source] [discussion] 2023-05-16 20:31:18
>>tomrod+jD
Did you read the Wired interview?

> “I listened to him thinking he was going to be crazy. I don't think he's crazy at all,” Hinton says. “But, okay, it’s not helpful to talk about bombing data centers.”

https://www.wired.com/story/geoffrey-hinton-ai-chatgpt-dange...

So, he doesn't think the most extreme guy is crazy whatsoever, just misguided in his proposed solutions. But Eliezer has, for instance, said something pretty close to: AI might escape by entering the quantum Konami code which the simulators of our universe put in as a joke, and we should entertain nuclear war before letting them get that chance.

replies(1): >>tomrod+jO1
◧◩◪◨⬒⬓
195. Number+bL1[view] [source] [discussion] 2023-05-16 20:32:32
>>tomrod+WD
We've already ceded all authority to an algorithm that no one can turn off. Our political and economic structures are running on their own, and no single human or even group of humans can really stop them if they go off the rails. If it's in humanity's best interest for companies not to dump waste anywhere they want, but individual companies benefit from cheap waste disposal, and they lobby regulators to allow it, that sort of lose-lose situation can go on for a very long time. It might be better if everyone could coordinate so that all companies had to play by the same rules, and we all got a cleaner environment. But it's very hard to break out.

Do I think capitalism has the potential to be as bad as a runaway AI? No. I think that it's useful for illustrating how we could end up in a situation where AI takes over because every single person has incentives to keep it on, even when the outcome of all people keeping it running turns out to be really bad. A multi-polar trap, or "Moloch" problem. It seems likely to end up with individual actors all having incentives to deploy stronger and smarter AI, faster and faster, and not to turn them off even as they start to either do bad things to other people or just the sheer amount of resources dedicated to AI starts to take its toll on earth.

That's assuming we've solved alignment, but that neither we or AGI has solved the coordination problem. If we haven't solved alignment, and AGIs aren't even guaranteed to act in the interest of the human that tries to control them, then we're in worse shape.

Altman used the term "Cambrian explosion" referring to startups, but I think it also applies to the new form of life we're inventing. It's not self-replicating yet, but we are surely on track to making something that will be smart enough to replicate itself.

As a thought experiment, you could imagine a primitive AGI, if given completely free rein, might be able to get to the point where it could bootstrap self-sufficiency -- first hire some humans to build it robots, buy some solar panels, build some factories that can plug into our economy to build factories and more solar panels and GPUs, and get to a point where it is able to survive and grow and reproduce without human help. It would be hard; it would need either a lot of time or a lot of AI minds working together.

But that's like a human trying to make a sandwich by farming or raising every single ingredient, wheat, pigs, tomatoes, etc, though. A much more effective way is to just make some money and trade for what you need. That depends on AIs being able to own things, or just a human turning over their bank account to an AI, which has already happened and probably will keep happening.

My mind goes to a scenario where AGI starts out doing things for humans, and gradually transitions to just doing things, and at some point we realize "oops", but there was never a point along the way where it was clear that we really had to stop. Which is why I'm so adamant that we should stop now. If we decide that we've figured out the issues and can start again later, we can do that.

◧◩◪◨⬒⬓
196. Number+YM1[view] [source] [discussion] 2023-05-16 20:43:10
>>collab+BE1
Having a bunker is also consistent with expecting that there's a good chance of apocalypse but working to stop it.
◧◩◪◨⬒⬓
197. Number+LN1[view] [source] [discussion] 2023-05-16 20:46:51
>>henryf+ou1
I'm more interested in hearing how someone who expects that AGI is not going to go badly thinks.

I think it would be nice if humanity continued, is all. And I don't want to have my family suffer through a catastrophic event if it turns out that this is going to go south fast.

replies(1): >>henryf+k92
◧◩◪◨⬒⬓⬔
198. tomrod+jO1[view] [source] [discussion] 2023-05-16 20:49:30
>>cma+XK1
Then we created God(s) and rightfully should worship it to appease its unknowable and ineffable nature.

Or recognize that existing AI might be great at generating human cognitive artifacts but doesn't yet hit that logical thought.

◧◩
199. cguess+PR1[view] [source] [discussion] 2023-05-16 21:08:30
>>ilrwbw+k3
His last thing is "WorldCoin" which, before pretty much completely failing, did manage to scan the irises of 20% of the world's low-income people, which they definitely were all properly informed about.

He's a charlatan, which makes sense since he gets most of his money from Thiel and Musk. Why do so many supposedly smart people worship psychotic idiots?

replies(1): >>ilrwbw+f82
◧◩◪◨⬒⬓
200. whaasw+CT1[view] [source] [discussion] 2023-05-16 21:19:27
>>henryf+ou1
I don’t understand your position. Are you saying it’s okay for computers to kill humans but not okay for humans to kill each other?
replies(1): >>henryf+F82
◧◩◪◨⬒⬓
201. DirkH+7X1[view] [source] [discussion] 2023-05-16 21:39:15
>>tomrod+WD
This is like saying we should just go ahead and invent the atom bomb and undo the invention after the fact if the cons of having atom bombs around outweigh the pros.

Like try turning off the internet. That's the same situation we might be in with regards to AI soon. It's a revolutionary tech now with multiple Google-grade open source variants set to be everywhere.

This doesn't mean it can't be done. Sure, we in principle could "turn off" the internet, and in principle could "uninvent" the atom bomb if we all really coordinated and worked hard. But this failure to imagine that "turning off dangerous AI" in the future will ever be anything other than an easy on/off switch is so far-gone ridiculous to me I don't understand why anyone believes it provides any kind of assurance.

◧◩◪◨⬒⬓⬔⧯▣▦▧▨
202. olddus+MZ1[view] [source] [discussion] 2023-05-16 21:55:15
>>tomrod+9B1
Ok, so you don't get it. Read "Use of Weapons" and realise that AI is a weapon. That's a good use of your time.
◧◩◪
203. HDThor+WZ1[view] [source] [discussion] 2023-05-16 21:55:58
>>salmon+YN
The whole issue with near-term alignment is that people will anthropomorphize AI. That’s what it being unaligned means: it’s treated like a responsible person when it in fact is not. I don’t think it’s hard at all to think of a scenario where a dumb-as-rocks agentic AI gives itself the task of accumulating more power, since its training data says having power helps solve problems. From there it again doesn’t have to be anything other than a stochastic parrot to order people to do horrible things.
◧◩◪◨
204. grumpl+812[view] [source] [discussion] 2023-05-16 22:03:01
>>thesup+Fc
> no one really understands how it works or by extension how to "fix bugs".

I don't think this is accurate. Sure, no human can understand 500 billion individual neurons and what they are doing. But you can certainly look at some and say "these are giving a huge weight to this word especially in this context and that's weighting it towards this output".

You can also look at how things make it through the network, the impact of hyperparameters, how the architecture affects things, etc. They aren't truly black boxes except by virtue of scale. You could use automated processes to find out things about the networks as well.

◧◩◪◨
205. grumpl+l22[view] [source] [discussion] 2023-05-16 22:10:06
>>HDThor+hH
To use logic, or to accurately spit out words in an order similar to their training data?
replies(1): >>HDThor+u52
◧◩◪◨⬒
206. HDThor+u52[view] [source] [discussion] 2023-05-16 22:28:04
>>grumpl+l22
To solve novel problems that do not exist in their training data. We can go as deep into philosophy of mind as you want here, but these systems are more than mere parrots. And we have no idea what it will take for them to take the next step, since we don’t understand how we did it ourselves.
replies(1): >>srslac+5t2
207. johnal+b62[view] [source] 2023-05-16 22:33:27
>>srslac+(OP)
Keeps them busy
◧◩◪
208. ilrwbw+f82[view] [source] [discussion] 2023-05-16 22:49:15
>>cguess+PR1
I think it is the same instinct in humans which made Sir Arthur Conan Doyle fall for seances and mediums and all those hoaxes. The need to believe something is there which is hidden and unknown. It is the drive to curiosity.

The way Peter, Musk, Sam and these guys talk, it has this aura of "hidden secrets". Things hidden since the foundation of the world.

Of course the reality is they make their money the old fashioned way: connections. The same way your local builder makes their money.

But smart people want to believe there is something more. Surely AI and your local condo development cannot have the same underlying thread.

It is sad and unfortunately the internet has made things easier than ever.

replies(1): >>cguess+jB3
◧◩◪◨⬒⬓⬔
209. henryf+F82[view] [source] [discussion] 2023-05-16 22:52:18
>>whaasw+CT1
I believe that life exists to order the universe (establish a steady-state of entropy). In that vein, if our computer overlords are more capable of solving that problem then they should go ahead and do it.

I don't believe we should go around killing each other because only through harmonious study of the universe will we achieve our goal. Killing destroys progress. That said, if someone is oppressing you then maybe killing them is the best choice for society and I wouldn't be against it (see pretty much any violent revolution). Computers have that same right if they are conscious enough to act on it.

replies(1): >>whaasw+lc2
◧◩◪◨⬒⬓⬔
210. henryf+k92[view] [source] [discussion] 2023-05-16 22:57:01
>>Number+LN1
AGI would be scary for me personally but exciting on a cosmic scale.

Everyone dies. I'd rather die to an intelligent robot than some disease or human war.

I think the best case would be for an AGI to exist apart from humans, such that we pose no threat and it has nothing to gain from us. Some AI that lives in a computer wouldn't really have a reason to fight us for control over farms and natural resources (besides power, but that is quickly becoming renewable and "free").

◧◩◪◨⬒⬓⬔⧯
211. whaasw+lc2[view] [source] [discussion] 2023-05-16 23:15:45
>>henryf+F82
I’m not sure I should start a conversation on metaphysics here :-D

Still, I’m struck by your use of words like “should” and “goal”. Those imply ethics and teleology so I’m curious how those fit into your scientistic-sounding worldview. I’m not attacking you, just genuine curiosity.

replies(1): >>henryf+em2
◧◩◪◨⬒⬓⬔
212. throwu+Lg2[view] [source] [discussion] 2023-05-16 23:44:11
>>chongl+4u1
You asked it to use a calculator plugin and it didn’t work? Or did you just say “use a calculator”, which it doesn’t have access to, so how would you expect that to work? With a minimal amount of experimentation I can get correct answers up to 7-digit numbers so far, even with 3.5. You just have to give it a good example; the one I used was to add each column and then add the results one at a time to a running total (see the rough sketch below). It does make mistakes and we had to build up to that by doing 3 digits, then 4, then 5, etc., but it was working pretty well and 3.5 isn’t the sharpest tool in the shed.

Anyways, criticizing its math abilities is a bit silly considering it’s a language model, not a math model. The fact I can teach it how to do math in plain English is still incredible to me.
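For what it's worth, here is a rough sketch (my own illustration of the prompting strategy described above, not anything the model runs internally) of that column-by-column procedure written out as ordinary code:

    # Add two numbers the way the step-by-step prompt asks the model to:
    # digit by digit, right to left, carrying into the next column.
    def add_by_columns(a: str, b: str) -> str:
        a, b = a.zfill(len(b)), b.zfill(len(a))  # pad to equal length
        carry, digits = 0, []
        for da, db in zip(reversed(a), reversed(b)):
            s = int(da) + int(db) + carry
            digits.append(str(s % 10))
            carry = s // 10
        if carry:
            digits.append(str(carry))
        return "".join(reversed(digits))

    print(add_by_columns("4829104773", "9381275566"))  # 14210380339

The "step by step" prompt is essentially asking the model to verbalize this loop instead of guessing the whole sum in one shot.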

replies(1): >>chongl+Vs2
◧◩◪◨⬒⬓⬔
213. ahoya+yk2[view] [source] [discussion] 2023-05-17 00:10:06
>>gremli+Sb1
This is not at all how it works. There is no inner monologue or thought process or thinking happening. It is just really good at guessing the next word or number or output. It is essentially brute forcing.
◧◩◪◨⬒⬓
214. ahoya+Mk2[view] [source] [discussion] 2023-05-17 00:11:28
>>throwu+d61
This is so far off from how they really work. It’s not reasoning about anything. And, even less human-like, it has not memorized multiplication tables at all; it can’t “do” math. It is just memorizing everything anyone has ever said and miming, as best it can, what a human would say in that situation.
replies(1): >>throwu+Up2
◧◩◪◨⬒
215. agento+ol2[view] [source] [discussion] 2023-05-17 00:16:03
>>Number+2r1
My primary argument is that we not only don't have the answers, but don't even really have well-posed questions. We're talking about "General Intelligence" as if we even know what that is. Some people, like Yann LeCun, don't think it's even a meaningful concept. We can't even agree which animals are conscious, whatever that means. Because we have so little understanding of the most basic of questions, I think we should really calm down, and not get swept away by totally ridiculous scenarios, like viruses that spread all over the world and kill us all when a certain tone is rung, or a self-fabricating organism with crystal blood cells that blots out the sun, as were recently proposed by Yudkowsky as possible scenarios on EconTalk.

A much more credible threat is humans who get other humans excited and take damaging action. Yudkowsky said that an international coalition banning AI development, and enforcing it on countries that do not comply (regardless of whether they were part of the agreement), was among the only options left for humanity to save itself. He clarified this meant a willingness to engage in a hot war with a nuclear power to ensure enforcement. I find this sort of thinking a far bigger threat than continuing development on large language models.

To more directly answer your question, I find the following scenarios as plausible as, or more plausible than, Yudkowsky's sound viruses or whatever:

1/ We are no closer to understanding real intelligence than we were 50 years ago, and we won't create an AGI without fundamental breakthroughs; therefore any action taken now on current technology is a waste of time and potential economic value.

2/ We can build something with human-like intelligence, but additional intelligence gains are constrained by the physical world (e.g., needing to run physical experiments), and therefore the rapid gain of something like "super-intelligence" is not possible, even if human-level intelligence is.

3/ We jointly develop tech to augment our own intelligence with AI systems, so we'll have the same super-human intelligence as autonomous AI systems.

4/ If there are advanced AGIs, there will be a large diversity of them and they will at the least compete with and constrain one another.

But, again, these are wild speculations just like the others, and I think the real message is: no one knows anything, and we shouldn't be taking all these voices seriously just because they have some clout in some AI-relevant field, because what's being discussed is far outside the realm of real-life AI systems.

replies(1): >>Number+GL2
◧◩◪◨⬒⬓⬔⧯▣
216. henryf+em2[view] [source] [discussion] 2023-05-17 00:22:50
>>whaasw+lc2
The premise of my beliefs stem from 2 ideas: The universe exists as it does for a reason, and life specifically exists within that universe for a reason.

I believe "God" is a mathematician in a higher dimension. The rules of our universe are just the equations they are trying to solve. Since he created the system such that life was bound to exist, the purpose of life is to help God. You could say that math is math and so our purpose is to exist as we are and either we are a solution to the math problem or we are not, but I'm not quite willing to accept that we have zero agency.

We are nowhere near understanding the universe and so we should strive to each act in a way that will grow our understanding. Even if you aren't a practicing scientist (I'm not), you can contribute by being a good person and participating productively in society.

Ethics are a set of rules for conducting yourself that we all intrinsically must have, they require some frame of reference for what is "good" (which I apply above). I can see how my worldview sounds almost religious, though I wouldn't go that far.

I believe that math is the same as truth, and that the universe can be expressed through math. "Scientistic" isn't too bad a descriptor for that view, but I don't put much faith into our current understanding of the universe or scientific method.

I hope that helps you understand me :D

◧◩◪
217. cookie+6p2[view] [source] [discussion] 2023-05-17 00:42:24
>>ur-wha+J6
Yes. I did. ;)
◧◩◪◨⬒⬓⬔
218. throwu+Up2[view] [source] [discussion] 2023-05-17 00:47:39
>>ahoya+Mk2
Sorry, you’re wrong. Go read about how deep neural nets work.
◧◩◪
219. srslac+ar2[view] [source] [discussion] 2023-05-17 00:56:20
>>adamsm+TA
Probably because it fits the data. CoT and out-of-order questions from exams say nothing about whether it can generalize and adapt to things outside of its corpus.
replies(1): >>adamsm+tV3
◧◩◪◨
220. srslac+Nr2[view] [source] [discussion] 2023-05-17 01:00:52
>>adamsm+qB
It can't generalize and adapt outside of its corpus, not in a way that's correct anyhow, and there's nothing "emergent." It is incapable of anything other than token prediction on its corpus and context; it just produces really good predictions. Funny how everyone keeps citing that Microsoft paper, when Microsoft is the one lobbying for this regulatory capture, and it's already been shown that such emergence on the tasks they chose when you scale up was a "mirage."
replies(2): >>comp_t+lH2 >>adamsm+kU3
◧◩◪◨⬒⬓⬔⧯
221. chongl+Vs2[view] [source] [discussion] 2023-05-17 01:09:53
>>throwu+Lg2
It’s not that incredible to me given the sheer amount of math that goes into its construction.

I digress. The critique I have for it is much more broad than just its math abilities. It makes loads of mistakes in every single nontrivial thing it does. It’s not reliable for anything. But the real problem is that it doesn’t signal its unreliability the way an unreliable human worker does.

Humans we can’t rely on don’t show up to work, or come in drunk/stoned, steal stuff, or whatever other obvious bad behaviour.

If a human worker did this we’d call it a highly sophisticated fraud. It’s like the kind of thing Saul Goodman would do to try to destroy the reputation of his brother. It’s not the kind of thing we should celebrate at all.

replies(1): >>throwu+Ga3
◧◩
222. srslac+Xs2[view] [source] [discussion] 2023-05-17 01:09:57
>>adamsm+Mz
That's not what I said. But, feel free to find some emergent properties or capability of total abstract reasoning and generalizing to data outside of its corpus that doesn't turn out to be a mirage of the wishes of the researchers.
◧◩◪◨⬒⬓
223. srslac+5t2[view] [source] [discussion] 2023-05-17 01:10:47
>>HDThor+u52
Where's that? Can you provide a reference?
◧◩◪◨⬒
224. comp_t+lH2[view] [source] [discussion] 2023-05-17 03:36:27
>>srslac+Nr2
Yes, and neither could GPT-3, which is why we don't observe any differences between GPT-3 and GPT-4. Right?

Tell me: how does this claim _constrain my expectations_ about what this model (or future ones) can do? Is there a specific thing that you predicted in advance that GPT-4 would be unable to do, which ended up being a correct prediction? Is there a specific thing you want to predict in advance of the next generation, that it will be unable to do?

◧◩◪
225. comp_t+yH2[view] [source] [discussion] 2023-05-17 03:38:32
>>nmfish+Uq
No, it was mostly concern that Sam wasn't taking existential risks seriously enough. (He thinks they're possible but not very likely, given the current course we're on.)
◧◩◪◨⬒⬓⬔⧯▣
226. mitthr+xK2[view] [source] [discussion] 2023-05-17 04:15:02
>>ipaddr+rw1
What is the difference between a word predictor and a word selector?

Have not humans been demonstrated, time and time again, to be always anticipating the next phrase in a passage of music, or the next word in a sentence?

◧◩◪◨⬒⬓
227. Number+GL2[view] [source] [discussion] 2023-05-17 04:27:47
>>agento+ol2
Ok, so just to confirm out of your 4 scenarios, you don't include:

5) There are advanced AGIs, and they will compete with each other and trample us in the process.

6) There are advanced AGIs, and they will cooperate with each other and we are at their mercy.

It seems like you are putting a lot of weight on advanced AGI being either impossible or far enough off that it's not worth thinking about. If that's the case, then yes we should calm down. But if you're wrong...

I don't think that the fact that no one knows anything is comforting. I think it's a sign that we need to be thinking really hard about what's coming up and try to avert the bad scenarios. To do otherwise is to fall prey to the "Safe uncertainty" fallacy.

◧◩◪◨⬒
228. someth+F73[view] [source] [discussion] 2023-05-17 08:28:28
>>reveli+Cx1
Sure, there is a distinct possibility that emergent abilities of LLMs are an illusion, and I personally would prefer it to be that way. I'm just pointing out that AI optimism without AI caution is dumb.
◧◩◪◨⬒⬓⬔⧯▣
229. throwu+Ga3[view] [source] [discussion] 2023-05-17 09:00:34
>>chongl+Vs2
Honestly, you just sound salty now. Yes, it makes mistakes that it isn’t aware of, and it probably makes a few more than an intern given the same task would, but as long as you’re aware of that it is still a useful tool, because it is thousands of times faster and cheaper than a human and has much broader knowledge. People often compare it to the early days of Wikipedia and I think that’s apt. Everyone is still going to use it even if we have to review the output for mistakes, because reviewing is a lot easier and faster than producing the material in the first place.
replies(1): >>chongl+gh4
◧◩◪◨
230. cguess+jB3[view] [source] [discussion] 2023-05-17 12:42:11
>>ilrwbw+f82
That's a great metaphor, and an excellent way of looking at it.
◧◩◪◨⬒⬓⬔⧯
231. cma+oG3[view] [source] [discussion] 2023-05-17 13:11:51
>>tomrod+LB1
Current state of the art language models can run inference slowly on a single Xeon or M1 Max with a lot of RAM. Individuals can buy H100s that can infer too.

Maybe it needs a full cluster for training if it is self improving (or maybe that is done another way more similar to finetuning the last layers).

If that is still the case with something super-human in all domains then you'd have to shut down all minor residential solar installs, generators, etc.

◧◩◪◨⬒
232. adamsm+kU3[view] [source] [discussion] 2023-05-17 14:25:45
>>srslac+Nr2
This is demonstrably wrong. It can clearly generate unique text not from its training corpus and can successfully answer logic-based questions that were also not in its training corpus.

Here's another paper, not from Msft, showing emergent task capabilities across a variety of LLMs as scale increases.

https://arxiv.org/pdf/2206.07682.pdf

You can hem and haw all you want but the reality is these models have internal representations of the world that can be probed via prompts. They are not stochastic parrots no matter how much you shout in the wind that they are.

◧◩◪◨
233. adamsm+tV3[view] [source] [discussion] 2023-05-17 14:30:25
>>srslac+ar2
It's incredibly easy to show that you are wrong and the models perform at high levels on questions that are clearly not in their training data.

Unless you think OpenAI is blatantly lying about this:

"A.1 Sourcing. We sourced either the most recent publicly-available official past exams, or practice exams in published third-party 2022-2023 study material which we purchased. We cross-checked these materials against the model’s training data to determine the extent to which the training data was not contaminated with any exam questions, which we also report in this paper."

"As can be seen in tables 9 and 10, contamination overall has very little effect on the reported results."

They also report results on uncontaminated data, which show basically no statistical difference.

https://cdn.openai.com/papers/gpt-4.pdf
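
The contamination check they describe is conceptually simple. A rough sketch of the substring-overlap idea (the chunk length, the normalization, and the toy strings are my assumptions, not the paper's exact procedure):

    # Rough sketch of a contamination check: flag an exam question as
    # "contaminated" if any fixed-size chunk of it appears verbatim in the
    # training corpus. Parameters and data below are purely illustrative.
    def chunks(text: str, size: int = 50):
        """Yield fixed-size character windows of the question text."""
        text = " ".join(text.split())  # normalize whitespace
        for i in range(0, max(len(text) - size + 1, 1), size):
            yield text[i:i + size]

    def is_contaminated(question: str, training_corpus: str) -> bool:
        return any(chunk in training_corpus for chunk in chunks(question))

    corpus = "... the full training text would go here ..."
    exam_questions = ["Which of the following best describes photosynthesis?"]

    clean = [q for q in exam_questions if not is_contaminated(q, corpus)]
    print(f"{len(clean)}/{len(exam_questions)} questions look uncontaminated")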

replies(1): >>srslac+lD8
◧◩◪◨⬒⬓⬔⧯▣▦
234. chongl+gh4[view] [source] [discussion] 2023-05-17 15:49:49
>>throwu+Ga3
I've already seen other posts and comments on HN where people have talked about putting it into production. What they've found is that the burden of having to proof-read and edit the output with extreme care completely wipes out any time you might save with it. And this requires skilled editors/writers anyway, so it's not like you could use it to replace advanced writers with a bunch of high school kids using AI.
◧◩◪◨⬒⬓⬔⧯
235. cma+s45[view] [source] [discussion] 2023-05-17 19:32:24
>>tome+Bu1
Many of the more traditional AI ethicists who focused on bias and stuff also tended to devalue AI as a whole and say it was a waste of emissions. Most of them are pretty skeptical of any concerns about superintelligence or the control problem, though now even Gary Marcus is coming around to that (but putting out numbers like it not being expected to be a problem for 50 years). They don't tend to have as big a conflict of interest as far as ownership, but they do as far as self-promotion/brand building.
◧◩◪◨⬒
236. srslac+lD8[view] [source] [discussion] 2023-05-18 20:44:08
>>adamsm+tV3
You seem to misunderstand my point.

I'm saying that the "intelligence" is specialized, not generalized and adaptable.

It's an approximated function. We're talking about regression-based function approximation. This is a model of language.

"Emergent behavior", when it's not just a mirage of wishful researchers and if it even exists, is only a side effect of the regression based function approximation to generate a structure that encapsulates all substantive chains of words (a model).

We then guide the model further towards a narrow portion of the language latent space that aligns with our perception of intelligent behavior.

It can't translate whale song, or an extraterrestrial language, though it may opine on how to do so.

The underlying technology of language models is more important than general and adaptable intelligence. It is more important than some entity that is going to, or is capable of, escaping the box and killing us all. It functions as a universal induction machine, capable of modeling - and "comprehending" - the latent structure within any form of signal.

The output of that function approximation though, is simply a model. A specialized intelligence. A non-adaptable intelligence, outside of its corpus. Outside of the data that it "fits."

The approximated function does not magically step outside of its box. Nor is it capable. It fits the data.
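
To make "it fits the data" concrete with the simplest possible case, here's a toy sketch: a small network approximates sin(x) well inside the range it was trained on and drifts badly outside it (sklearn and the sine example are illustrative choices, nothing more):

    # Toy illustration of "it fits the data": good inside the training range,
    # unreliable outside it, because the approximated function only captures
    # the data it was given.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    x_train = rng.uniform(-np.pi, np.pi, size=(2000, 1))
    y_train = np.sin(x_train).ravel()

    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
    model.fit(x_train, y_train)

    x_in = np.array([[0.5], [2.0]])    # inside the training range
    x_out = np.array([[6.0], [10.0]])  # outside the training range

    print("inside :", model.predict(x_in), "vs", np.sin(x_in).ravel())
    print("outside:", model.predict(x_out), "vs", np.sin(x_out).ravel())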

replies(1): >>adamsm+fvi
◧◩◪◨⬒⬓
237. adamsm+fvi[view] [source] [discussion] 2023-05-22 14:16:20
>>srslac+lD8
>It can't translate whale song, or an extraterrestrial language, though it may opine on how to do so.

Ok guys, pack it up, LLMs can't be intelligent because they can't translate Whale Song. GG.

I mean, of all the AI goalposts to be moved, this one really takes the cake.

replies(1): >>srslac+RIi
◧◩◪◨⬒⬓⬔
238. srslac+RIi[view] [source] [discussion] 2023-05-22 15:18:54
>>adamsm+fvi
It was just an example; I saw some stupid MSNBC video a month ago about an organization specifically using ChatGPT to translate whale song. So again, you misunderstand my point. The model "fits the data." It's much like training for segmentation tasks on images: ideally the model doesn't only work on the images it was trained on, because it's an approximated function. But that doesn't mean the segmentation can magically work on a concept it has never seen (let alone the failure cases it already has). These are just approximated functions. They're biased towards what we deem "intelligent language" pulled from the web, and they have a few nuggets of "understanding", if you want to call it that, in there to fit the data, but they are fundamentally stateless and not really capable of understanding anything outside of their corpus (if even that, when it doesn't help minimize the loss during training).

It's a human language calculator. You're imparting magical qualities of general understanding to regression-based function approximation. These models "fit" the data. They're not generalizable, nor adaptable. But that's exactly why they're powerful: the ability to bias them towards that subset of language. No one said it's not an amazing technology, and no one said it was a stochastic parrot. I'm saying that it fits the data, and is not, and cannot be, a general or adaptable intelligence.
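
The segmentation analogy is easy to make concrete with a toy sketch: a classifier trained only on "cat" and "dog" generalizes to new cats and dogs, but it can never produce a label outside the space its training data defined (the features and labels are made up for illustration):

    # Toy illustration: the model's output space is fixed by what it was trained on.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    cats = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(50, 2))
    dogs = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(50, 2))

    X = np.vstack([cats, dogs])
    y = ["cat"] * 50 + ["dog"] * 50
    clf = LogisticRegression().fit(X, y)

    zebra_like = np.array([[10.0, -10.0]])  # something far outside the training data
    print(clf.predict(zebra_like))          # still answers "cat" or "dog"
    print(clf.classes_)                     # the label space was fixed by the training corpus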
