zlacker

[parent] [thread] 9 comments
1. iliane+(OP)[view] [source] 2023-05-16 13:29:16
> Why is it so hard to hear this perspective? Like, genuinely curious.

Because people have different definitions of what intelligence is. Recreating the human brain in a computer would definitely be neat and interesting, but you need neither that nor AGI to be revolutionary.

LLMs, as perfect Chinese Rooms, lack a mind or human intelligence but demonstrate increasingly sophisticated behavior. If they can perform tasks better than humans, does their lack of "understanding" and "thinking" matter?

The goal is to create a different form of intelligence, superior in ways that benefit us. Planes (or rockets!) don't "fly" like birds do, but for our human needs they are effectively much better at flying than birds ever could be.

replies(2): >>srslac+j3 >>api+J4
2. srslac+j3[view] [source] 2023-05-16 13:44:57
>>iliane+(OP)
That changes nothing about the hyping of the science-fiction "risk" of those intelligences "escaping the box" and killing us all.

The argument for regulation in that case would be because of the socio-economic risk of taking people's jobs, essentially.

So, again: pure regulatory capture.

replies(1): >>iliane+i8
3. api+J4[view] [source] 2023-05-16 13:51:12
>>iliane+(OP)
I have a chain saw that can cut better than me, a car that can go faster, a computer that can do math better, etc.

We've been doing this forever with everything. Building tools is what makes us unique. Why is building what amounts to a calculator/spreadsheet/CAD program for language somehow a Rubicon that cannot be crossed? Did people freak out this much about computers replacing humans when they were shown to be good at math?

replies(2): >>iliane+Ra >>adamsm+qk
4. iliane+i8[view] [source] [discussion] 2023-05-16 14:10:13
>>srslac+j3
There's no denying this is regulatory capture by OpenAI to secure their (gigantic) bag and that the "AI will kill us all" meme is not based in reality and plays on the fact that the majority of people do not understand LLMs.

I was simply explaining why I believe your perspective is not represented in the discussions in the media, etc. If these models were not getting incredibly good at mimicking intelligence, it would not be possible to play on people's fears of it.

5. iliane+Ra[view] [source] [discussion] 2023-05-16 14:21:24
>>api+J4
> Why is building what amounts to a calculator/spreadsheet/CAD program for language somehow a Rubicon that cannot be crossed?

We've already crossed it and I believe we should go full steam ahead, tech is cool and we should be doing cool things.

> Did people freak out this much about computers replacing humans when they were shown to be good at math?

I'm too young to remember, but I'm sure they did freak out a little! Computers have changed the world, and people have internalized that computers are much better/faster at math, but exhibiting creativity, language proficiency and thinking is not something people thought computers were supposed to do.

6. adamsm+qk[view] [source] [discussion] 2023-05-16 15:07:52
>>api+J4
You've never had a tool that is potentially better than you, or better than all humans, at all tasks. If you can't see why that is different then idk what to say.
replies(2): >>api+kn >>freedo+jp
7. api+kn[view] [source] [discussion] 2023-05-16 15:21:03
>>adamsm+qk
LLMs are better than me at rapidly querying a vast bank of language-encoded knowledge and synthesizing it in the form of an answer to or continuation of a prompt... in the same way that Mathematica is vastly better than me at doing the mechanics of math and simplifying complex functions. We build tools to amplify our agency.

LLMs are not sentient. They have no agency. They do nothing a human doesn't tell them to do.

We may create actual sentient, independent AI someday. Maybe we're getting closer. But not only is this not it, I also fail to see how trying to license it will prevent that from happening.

replies(2): >>iliane+sw >>flango+7k1
8. freedo+jp[view] [source] [discussion] 2023-05-16 15:29:37
>>adamsm+qk
> or better than all humans at all tasks.

I work in tech too and don't want to lose my job and have to go back to blue-collar work, but there are a lot of blue-collar workers who would find that a pretty ridiculous statement, and there is plenty of demand for that work these days.

9. iliane+sw[view] [source] [discussion] 2023-05-16 15:56:42
>>api+kn
I don't think we need sentient AI for it to be autonomous. LLMs are powerful cognitive engines and weak knowledge engines. Cognition on its own does not allow them to be autonomous, but because they can use tools (APIs, etc.) they are able to have some degree of autonomy when given tasks, and can use basic logic to follow them through and correct their mistakes.

AutoGPTs and the like are much overhyped (they're early tech experiments, after all) and have not produced anything of value yet, but having dabbled with autonomous agents, I definitely see a not-so-distant future where you can outsource valuable tasks to such systems.
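The "tool use plus basic self-correction" loop described here can be sketched in a few lines. This is a toy illustration, not any real agent framework: the `plan` function stands in for an LLM call (which would need an API key), and the `search` tool is a hypothetical API wrapper.

```python
def plan(task, observation):
    """Stand-in for an LLM: decide the next tool call from the task state."""
    if observation is None:
        return ("search", task)      # first step: gather information
    if "error" in observation:
        return ("search", task)      # basic self-correction: retry the tool
    return ("finish", observation)   # good result: stop and return it

# Hypothetical tool registry the agent can call into
TOOLS = {
    "search": lambda query: f"facts about {query}",
}

def run_agent(task, max_steps=5):
    """Run the plan -> act -> observe loop until finished or out of steps."""
    observation = None
    for _ in range(max_steps):
        action, arg = plan(task, observation)
        if action == "finish":
            return arg
        observation = TOOLS[action](arg)
    return observation  # step budget exhausted

print(run_agent("zinc prices"))  # -> facts about zinc prices
```

A real system swaps the rule-based `plan` for a model call and adds richer tools, but the control flow (and the failure modes) are essentially this loop.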

10. flango+7k1[view] [source] [discussion] 2023-05-16 19:39:06
>>api+kn
Sentience isn't required; volcanoes are not sentient, but they can definitely kill you.

There are multiple projects right now, both open and proprietary, to make agentic AI, so that barrier won't be around for long.
