zlacker

[parent] [thread] 16 comments
1. Chicag+(OP)[view] [source] 2023-05-16 12:32:56
Why is it so hard to hear this perspective? Like, genuinely curious. This is the first time I've heard someone cogently put this thought out there, but it seems rather painfully obvious -- perhaps incorrect, but certainly a perspective that is very easy to comprehend and one that merits a lot of discussion. Why is it almost nonexistent? I remember that even in the heyday of crypto fever you'd still have a LOT of folks providing counterarguments/differing perspectives, but with AI those voices seem extremely muted.
replies(5): >>bombca+Y >>srslac+R4 >>iliane+Ea >>dmreed+Wn >>adamsm+ov
2. bombca+Y[view] [source] 2023-05-16 12:37:31
>>Chicag+(OP)
Crypto had more direct ways to scam people so others would speak against it.

Those nonplussed by this wave of AI are just yawning.

3. srslac+R4[view] [source] 2023-05-16 12:58:56
>>Chicag+(OP)
I'm not against machine learning, I'm against regulatory capture of it. It's an amazing technology. It still doesn't change the fact that they're just function approximators that are trained to minimize loss on a dataset.
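To make "function approximator trained to minimize loss" concrete, here's a minimal sketch in plain Python (a one-parameter toy, nothing to do with any real LLM training stack): a model fit to data by gradient descent on a loss function.

```python
# Minimal "function approximator": fit y = w*x to a dataset by
# gradient descent on mean squared error (the loss).
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # underlying rule: y = 2x

w = 0.0    # the single learnable parameter
lr = 0.05  # learning rate
for _ in range(200):
    # gradient of MSE w.r.t. w: mean of 2 * (w*x - y) * x
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # step downhill on the loss

print(round(w, 3))  # -> 2.0
```

An LLM is the same idea scaled up to billions of parameters, with the "dataset" being next-token prediction over text.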
replies(1): >>luxcem+jB
4. iliane+Ea[view] [source] 2023-05-16 13:29:16
>>Chicag+(OP)
> Why is it so hard to hear this perspective? Like, genuinely curious.

Because people have different definitions of what intelligence is. Recreating the human brain in a computer would definitely be neat and interesting, but you don't need that, nor AGI, for this to be revolutionary.

LLMs, as perfect Chinese Rooms, lack a mind or human intelligence but demonstrate increasingly sophisticated behavior. If they can perform tasks better than humans, does their lack of "understanding" and "thinking" matter?

The goal is to create a different form of intelligence, superior in ways that benefit us. Planes (or rockets!) don't "fly" like birds do, but for our human needs they are effectively much better at flying than birds ever could be.

replies(2): >>srslac+Xd >>api+nf
◧◩
5. srslac+Xd[view] [source] [discussion] 2023-05-16 13:44:57
>>iliane+Ea
That changes nothing about the hyping of the science-fiction "risk" of those intelligences "escaping the box" and killing us all.

The argument for regulation in that case would rest on the socio-economic risk of taking people's jobs, essentially.

So, again: pure regulatory capture.

replies(1): >>iliane+Wi
◧◩
6. api+nf[view] [source] [discussion] 2023-05-16 13:51:12
>>iliane+Ea
I have a chain saw that can cut better than me, a car that can go faster, a computer that can do math better, etc.

We've been doing this forever with everything. Building tools is what makes us unique. Why is building what amounts to a calculator/spreadsheet/CAD program for language somehow a Rubicon that cannot be crossed? Did people freak out this much about computers replacing humans when they were shown to be good at math?

replies(2): >>iliane+vl >>adamsm+4v
◧◩◪
7. iliane+Wi[view] [source] [discussion] 2023-05-16 14:10:13
>>srslac+Xd
There's no denying this is regulatory capture by OpenAI to secure their (gigantic) bag, and that the "AI will kill us all" meme is not based in reality: it plays on the fact that the majority of people do not understand LLMs.

I was simply explaining why I believe your perspective is not represented in the discussions in the media, etc. If these models were not getting incredibly good at mimicking intelligence, it would not be possible to play on people's fears of it.

◧◩◪
8. iliane+vl[view] [source] [discussion] 2023-05-16 14:21:24
>>api+nf
> Why is building what amounts to a calculator/spreadsheet/CAD program for language somehow a Rubicon that cannot be crossed?

We've already crossed it and I believe we should go full steam ahead, tech is cool and we should be doing cool things.

> Did people freak out this much about computers replacing humans when they were shown to be good at math?

Too young, but I'm sure they did freak out a little! Computers have changed the world, and people have internalized that computers are much better/faster at math, but exhibiting creativity, language proficiency, and thinking is not something people thought computers were supposed to do.

9. dmreed+Wn[view] [source] 2023-05-16 14:33:08
>>Chicag+(OP)
Because it reads as relatively naive, and as a pretty old horse in the debate over sentience.

I'm all for villainizing the figureheads of the current generation of this movement. The politics of this sea-change are fascinating and worthy of discussion.

But out-of-hand dismissal of what has been accomplished smacks more to me of lack of awareness of the history of the study of the brain, cognition, language, and computers, than it does of a sound debate position.

◧◩◪
10. adamsm+4v[view] [source] [discussion] 2023-05-16 15:07:52
>>api+nf
You've never had a tool that is potentially better than you, or better than all humans, at all tasks. If you can't see why that is different, then idk what to say.
replies(2): >>api+Yx >>freedo+Xz
11. adamsm+ov[view] [source] 2023-05-16 15:09:40
>>Chicag+(OP)
>Why is it so hard to hear this perspective?

Because it's wrong and smart people know that.

◧◩◪◨
12. api+Yx[view] [source] [discussion] 2023-05-16 15:21:03
>>adamsm+4v
LLMs are better than me at rapidly querying a vast bank of language-encoded knowledge and synthesizing it in the form of an answer to or continuation of a prompt... in the same way that Mathematica is vastly better than me at doing the mechanics of math and simplifying complex functions. We build tools to amplify our agency.

LLMs are not sentient. They have no agency. They do nothing a human doesn't tell them to do.

We may create actually sentient, independent AI someday. Maybe we're getting closer. But not only is this not it; I also fail to see how trying to license it will prevent that from happening.

replies(2): >>iliane+6H >>flango+Lu1
◧◩◪◨
13. freedo+Xz[view] [source] [discussion] 2023-05-16 15:29:37
>>adamsm+4v
> or better than all humans at all tasks.

I work in tech too and don't want to lose my job and go back to blue-collar work, but plenty of blue-collar workers would find that a pretty ridiculous statement, and there is plenty of demand for that work these days.

◧◩
14. luxcem+jB[view] [source] [discussion] 2023-05-16 15:34:33
>>srslac+R4
> It still doesn't change the fact that they're just function approximators that are trained to minimize loss on a dataset.

That fact does not entail what these models can or cannot do. For all we know, our brain could be a process that minimizes an unknown loss function.

But more importantly, what SOTA is now does not predict what it will be in the future. What we know is that there is rapid progress in that domain. Intelligence explosion could be real or not, but it's foolish to ignore its consequences because current AI models are not that clever yet.

replies(1): >>tome+2P
◧◩◪◨⬒
15. iliane+6H[view] [source] [discussion] 2023-05-16 15:56:42
>>api+Yx
I don't think we need sentient AI for it to be autonomous. LLMs are powerful cognitive engines but weak knowledge engines. Cognition on its own does not make them autonomous, but because they can use tools (APIs, etc.) they have some degree of autonomy when given a task, and can use basic logic to follow it through and correct their mistakes.

AutoGPT and the like are much overhyped (they're early tech experiments, after all) and have not produced anything of value yet, but having dabbled with autonomous agents, I definitely see a not-so-distant future where you can outsource valuable tasks to such systems.
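For anyone who hasn't dabbled: the core of these agent systems is a surprisingly small loop. Here's a toy sketch in plain Python -- `fake_model` stands in for the LLM and the tool set is hypothetical, not any real agent framework's API:

```python
# Toy "tool-using agent" loop: the model proposes an action, the
# harness executes the matching tool, and the result is fed back
# into the transcript until the model emits a final answer.
def calculator(expr: str) -> str:
    # toy only; never eval untrusted input in real code
    return str(eval(expr, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def fake_model(history):
    """Stand-in for an LLM: picks the next action from the transcript."""
    if not any(step[0] == "result" for step in history):
        return ("call", "calculator", "6 * 7")  # decide to use a tool
    return ("answer", history[-1][1])           # use the tool's result

def agent(task: str) -> str:
    history = [("task", task)]
    while True:
        step = fake_model(history)
        if step[0] == "answer":
            return step[1]
        _, tool, arg = step
        history.append(("result", TOOLS[tool](arg)))

print(agent("what is 6 * 7?"))  # -> 42
```

The real systems swap `fake_model` for an actual LLM and add many more tools, but the execute-observe-decide loop is the same.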

◧◩◪
16. tome+2P[view] [source] [discussion] 2023-05-16 16:25:45
>>luxcem+jB
> For what we know our brain could be a process that minimize an unknown loss function.

Every process minimizes a loss function.

◧◩◪◨⬒
17. flango+Lu1[view] [source] [discussion] 2023-05-16 19:39:06
>>api+Yx
Sentience isn't required: volcanoes are not sentient, but they can definitely kill you.

There are multiple projects right now, both open and proprietary, to build agentic AI, so that barrier won't be around for long.

[go to top]