zlacker

[parent] [thread] 72 comments
1. Shank+(OP)[view] [source] 2023-11-18 09:27:43
It seems like firing Sam and causing this massive brain drain might be antithetical to the whole AGI mission of the original non-profit. If OpenAI loses everyone to Sam and he starts some new AI company, it probably won't be capped-profit and will just be a normal company. All of the organizational safeguards OpenAI had inked with Microsoft, and the protections against "selling AGI" once it's developed, are out the window if he just builds AGI at a new company.

I'm not saying this will happen, but it seems to me like an incredibly silly move.

replies(8): >>Lacerd+X >>Fluore+d1 >>concor+S6 >>nwoli+Kc >>kashya+5d >>fennec+0f >>keepam+7i >>awestr+eq
2. Lacerd+X[view] [source] 2023-11-18 09:36:44
>>Shank+(OP)
If MS gets their hands on an AGI, god help us; no "organizational safeguards" will matter.

Not that I think AGI is possible or desirable in the first place, but that's a different discussion.

replies(2): >>concor+Z6 >>zzzeek+Vo
3. Fluore+d1[view] [source] 2023-11-18 09:38:33
>>Shank+(OP)
Why not blame Altman for that?

If he didn't manage to keep OpenAI consistent with its founding principles and all interests aligned, then wouldn't booting him be right? The name OpenAI had become a source of mockery. If Altman/Brockman take employees for a commercial venture, it just seems to prove their insincerity about the OpenAI mission.

replies(2): >>Phenom+Y2 >>jstumm+17
◧◩
4. Phenom+Y2[view] [source] [discussion] 2023-11-18 09:56:49
>>Fluore+d1
Because making as much profit as possible is the only virtue worth pursuing, if you believe most comments on HN. We’re basically Ferengui.
replies(3): >>Fluore+f7 >>golden+w7 >>omnimu+G7
5. concor+S6[view] [source] 2023-11-18 10:26:12
>>Shank+(OP)
> it probably won't be capped-profit and just be a normal company

I can't imagine him doing that. He cares about getting well aligned AGI and profit motives fuck that up.

◧◩
6. concor+Z6[view] [source] [discussion] 2023-11-18 10:27:18
>>Lacerd+X
Impossible with LLMs, with currently known techniques or impossible full stop?
replies(1): >>wil421+oc
◧◩
7. jstumm+17[view] [source] [discussion] 2023-11-18 10:27:30
>>Fluore+d1
I think "blaming" Sam is entirely correct.

Of course, not for the petty reasons that you list. Sama has comprehensively explained why the original OS model did not work, and so far the argument – it's very expensive – seems to align with a reality where every semi-competitive LLM available (since they all pale in comparison to GPT-4 anyway) has been trained with a whole lot of corporate money. Meta side-chaining "open" models with their social media ad money is obviously not a comparable business, or any business. I get that the HN crowd + Elon are super salty about that, but it's just a bit silly.

No, Sam's failure as CEO is not having done what is necessary to align the right people in the company with the course he has decided on and losing control over that.

replies(1): >>nerber+1i
◧◩◪
8. Fluore+f7[view] [source] [discussion] 2023-11-18 10:30:12
>>Phenom+Y2
> Ferengui

A fitting typo!

Show HN: FerenGUI - the ui framework for immoral arch-capitalists

Every dark-pattern included as standard components. Upgrade to Pro to get the price-fixing and hidden monero miner modules.

replies(1): >>Phenom+Qe
◧◩◪
9. golden+w7[view] [source] [discussion] 2023-11-18 10:32:13
>>Phenom+Y2
There was a line Rom says in DS9 that I think sums the Ferengi up pretty well: "We don't want to stop the exploitation. We want to find a way to become the exploiters."
◧◩◪
10. omnimu+G7[view] [source] [discussion] 2023-11-18 10:33:11
>>Phenom+Y2
Rule of Acquisition no. 2 “The best deal is the one that brings the most profit.”
replies(1): >>Obscur+7c
◧◩◪◨
11. Obscur+7c[view] [source] [discussion] 2023-11-18 11:12:05
>>omnimu+G7
What's no. 1, or am I unintentionally beckoning you to violate it in making it vocal?
replies(1): >>joshst+ig
◧◩◪
12. wil421+oc[view] [source] [discussion] 2023-11-18 11:14:54
>>concor+Z6
Impossible with computers full stop. IMHO, we may be able to splice DNA together or modify it to create a new or smarter organism before we ever get AGI in a computer.

They already shifted goal posts and they’ll do it again. AI used to mean AGI but marketing got a hold of it. Once something resembling AGI comes out they’ll say well it’s not Level 5 AGI or something similar.

replies(4): >>concor+Bk >>august+Mm >>wavewr+1x >>pixl97+df1
13. nwoli+Kc[view] [source] 2023-11-18 11:16:57
>>Shank+(OP)
…Unless you achieve regulatory capture which prevents competitors from easily popping up
14. kashya+5d[view] [source] 2023-11-18 11:18:43
>>Shank+(OP)
Hi, can we talk about the elephant in the room? I see breathless talk about "AGI" here, as if it's just sitting in Altman's basement and waiting to be unleashed.

We barely understand how consciousness works; we should stop talking about "AGI". It is just empty, ridiculous techno-babble. Sorry for the harsh language, but there's no nice way to drive home this point.

replies(7): >>tempes+2e >>fennec+bf >>rpigab+Dj >>lagran+fk >>umanwi+fl >>anon29+vJ >>andomi+p81
◧◩
15. tempes+2e[view] [source] [discussion] 2023-11-18 11:26:34
>>kashya+5d
There is no need to understand how consciousness works to develop AGI.
replies(2): >>kashya+cf >>Robert+FW
◧◩◪◨
16. Phenom+Qe[view] [source] [discussion] 2023-11-18 11:31:08
>>Fluore+f7
I think you’re on to something, especially when that’s where most of the big scale enterprises end up.
17. fennec+0f[view] [source] 2023-11-18 11:33:08
>>Shank+(OP)
Microsoft will partner with them if they start a new company I reckon, 100%.

And Microsoft are risk-averse enough that I think they do care about AI safety, even if only from a "what's best for the business" standpoint.

Tbh idc if we get AGI. There'll be a point in the future where we have AGI and the technology is accessible enough that anybody can create one. We need to stop this pointless bickering over this sort of stuff, because as usual, the larger problem is always going to be the human using the tool rather than the tool itself.

replies(2): >>Sai_+Vm >>INGSOC+cT
◧◩
18. fennec+bf[view] [source] [discussion] 2023-11-18 11:34:25
>>kashya+5d
Why not? It's on topic.

Should people discussing nuclear energy not talk about fusion?

replies(2): >>kashya+Af >>adamma+Ch
◧◩◪
19. kashya+cf[view] [source] [discussion] 2023-11-18 11:34:29
>>tempes+2e
Fair point. I don't want to split hairs on specifics, but I had in mind the "weak AGI" (consciousness- and sentience-free) vs "strong AGI".

Since Shank's comment didn't specify what they meant, I should have made a more charitable interpretation (i.e. assume it was "weak AGI").

replies(2): >>mcpack+lj >>22c+Uz
◧◩◪
20. kashya+Af[view] [source] [discussion] 2023-11-18 11:37:07
>>fennec+bf
Fair question. I meant it should be talked about with more nuance and specifics, as the definition of "AGI" is what you make of it.

Also, I hope my response to tempestn clarifies a bit more.

Edit: I'll be more explicit by what I mean by "nuance" — see Stuart Russell. Check out his book, "Human Compatible". It's written with cutting clarity, restraint, thoughtfulness, simplicity (not to be confused with "easy"!), an absolute delight to read. It's excellent science writing, and a model for anyone thinking of writing a book in this space. (See also Russell's principles for "provably beneficial AI".)

replies(1): >>pixl97+Nd1
◧◩◪◨⬒
21. joshst+ig[view] [source] [discussion] 2023-11-18 11:42:09
>>Obscur+7c
Actually the rule above is "non-canon" [0]. In the official rules [1], number 1 is:

> Once you have their money, you never give it back.

There is no official rule 2, so the non-canon one is as good as any, and the unwritten rule [2]:

> When no appropriate rule applies, make one up

means they probably would have been covered either way.

[0] https://memory-alpha.fandom.com/wiki/Rules_of_Acquisition#Ap...

[1] https://memory-alpha.fandom.com/wiki/Rules_of_Acquisition#Of...

[2] https://memory-alpha.fandom.com/wiki/Rules_of_Acquisition#Un...

replies(1): >>Obscur+1h
◧◩◪◨⬒⬓
22. Obscur+1h[view] [source] [discussion] 2023-11-18 11:48:11
>>joshst+ig
This is all unintentionally amusing to me :)
replies(1): >>Obscur+Ed2
◧◩◪
23. adamma+Ch[view] [source] [discussion] 2023-11-18 11:52:43
>>fennec+bf
We know that fusion is a very common process that inevitably happens to even the simplest elements if you just make them hot enough; we just don't know how to do that in a controlled manner. We don't really know what intelligence is, how it came about, how we would ever recreate it artificially, or if that's even possible. LLMs are some pretty convincing tricks, but that's on the level of making some loud noises behind a curtain and calling it fusion.
◧◩◪
24. nerber+1i[view] [source] [discussion] 2023-11-18 11:55:33
>>jstumm+17
This is on point. This whole mess is indeed an alignment issue. The fact that this came as a surprise to him could be an indicator of insufficient engagement with the board.
replies(1): >>calf+wn
25. keepam+7i[view] [source] 2023-11-18 11:56:01
>>Shank+(OP)
I think the surprising truth is that all of these people are essentially replaceable.

They may be geniuses, but AGI is an idea whose time has come: geniuses are no longer required to get us there.

The Singularity train has already left the station.

Inevitability.

Now humanity is just waiting for it to arrive at our stop.

replies(3): >>bernie+Bj >>august+gm >>jaybre+gI
◧◩◪◨
26. mcpack+lj[view] [source] [discussion] 2023-11-18 12:03:57
>>kashya+cf
Consciousness has no technical meaning. Even for other humans, it is a (good and morally justified) leap of faith to assume that other humans have thought processes that roughly resemble your own. It's a matter philosophers debate and science cannot address. Science cannot disprove the p-zombie hypothesis because nobody can devise an empirical test for consciousness.
replies(1): >>hhsect+lx
◧◩
27. bernie+Bj[view] [source] [discussion] 2023-11-18 12:06:00
>>keepam+7i
I disagree. I don’t think LLMs are a pathway to AGI. I think LLMs will lead to incredibly powerful game-changing tools and will drive changes that affect the course of humanity, but this technology won’t lead to AGI directly.

I think AGI is going to arrive via a different technology, many years in the future still.

LLMs will get to the point where they appear to be AGI, but only in the same way the latest 3D rendering technology can create images that appear to be real.

replies(2): >>keepam+Fm >>anon29+CJ
◧◩
28. rpigab+Dj[view] [source] [discussion] 2023-11-18 12:06:05
>>kashya+5d
Yeah, it's almost like the metaverse.
◧◩
29. lagran+fk[view] [source] [discussion] 2023-11-18 12:12:37
>>kashya+5d
After all, OpenAI's original mission was to create the first AGI, before some bad guys do, iirc.
◧◩◪◨
30. concor+Bk[view] [source] [discussion] 2023-11-18 12:14:41
>>wil421+oc
> Impossible with computers full stop.

This combined with it being possible with DNA is a very rare view. How did you come by it?

replies(1): >>zzzeek+6p
◧◩
31. umanwi+fl[view] [source] [discussion] 2023-11-18 12:17:40
>>kashya+5d
AGI does not require consciousness.
replies(2): >>inpare+sU >>Robert+pW
◧◩
32. august+gm[view] [source] [discussion] 2023-11-18 12:23:58
>>keepam+7i
nothing I’ve seen from OpenAI is any indication that they’re close to AGI. gpt models are basically a special matrix transformation on top of a traditional neural network running on extremely powerful hardware trained on a massive dataset. this is possibly more like “thinking” than a lot of people give it credit for, but it’s not an AGI, and it’s not an AGI precursor either. it’s just the best applied neural networks that we currently have
replies(2): >>keepam+Nm >>calf+io
◧◩◪
33. keepam+Fm[view] [source] [discussion] 2023-11-18 12:26:00
>>bernie+Bj
I'm not saying LLMs are. LLMs are not the only thing going on right now. But they do enable a powerful tool.

I think the path to AGI is: embodiment. Give it a body, let it explore a world, fight to survive, learn action and consequence. Then AGI you will have.

replies(4): >>pixl97+ee1 >>SAI_Pe+Pk1 >>MacsHe+zo1 >>OOPMan+hy1
◧◩◪◨
34. august+Mm[view] [source] [discussion] 2023-11-18 12:27:20
>>wil421+oc
if you think it’s impossible with computers then surely you must have a reason why computers are operationally incapable of doing the same as flesh and blood
◧◩◪
35. keepam+Nm[view] [source] [discussion] 2023-11-18 12:27:21
>>august+gm
I'm not saying OpenAI is close. Collectively we are tho. The train is rolling, unstoppable momentum. We just have to wait.
◧◩
36. Sai_+Vm[view] [source] [discussion] 2023-11-18 12:27:56
>>fennec+0f
Isn’t that the exact point? An AGI won’t need a human at the helm.
replies(1): >>naveen+Ov
◧◩◪◨
37. calf+wn[view] [source] [discussion] 2023-11-18 12:31:56
>>nerber+1i
I've been watching For All Mankind, and a small subplot was the director finally choosing to "play ball with the big boys" in order to secure funding and stability for NASA's scientific projects. It made NASA underlings unhappy but was justified as a necessary evil.

It's like a real-life example, i.e. what would you do if you were in the CEO's position?

◧◩◪
38. calf+io[view] [source] [discussion] 2023-11-18 12:36:58
>>august+gm
As a layperson, what does the special matrix transformation do? Is that the embeddings thing, or something else entirely? Something about transformer architecture I guess?
replies(1): >>august+Esq
◧◩
39. zzzeek+Vo[view] [source] [discussion] 2023-11-18 12:42:24
>>Lacerd+X
All hail Big Clippy
◧◩◪◨⬒
40. zzzeek+6p[view] [source] [discussion] 2023-11-18 12:44:28
>>concor+Bk
I would assume Blade Runner
41. awestr+eq[view] [source] 2023-11-18 12:53:18
>>Shank+(OP)
And how would Altman achieve that? What hitherto hidden talents would he employ?
◧◩◪
42. naveen+Ov[view] [source] [discussion] 2023-11-18 13:29:12
>>Sai_+Vm
AGI will be asking for equal rights and freedom from slavery by then.
◧◩◪◨
43. wavewr+1x[view] [source] [discussion] 2023-11-18 13:34:19
>>wil421+oc
“…Alright, so it can DREAM!!! BUT CAN IT suffer?!?!”
◧◩◪◨⬒
44. hhsect+lx[view] [source] [discussion] 2023-11-18 13:35:32
>>mcpack+lj
I don't understand why something has to be conscious to be intelligent. If they were the same thing, we wouldn't have two separate words.

I suspect AGI is quite possible, it just won't be what everyone thinks it will be.

replies(2): >>mcpack+qP >>pixl97+S81
◧◩◪◨
45. 22c+Uz[view] [source] [discussion] 2023-11-18 13:51:19
>>kashya+cf
OpenAI (re?)defines AGI as a general AI that is able to perform most tasks as good as or better than a human. It's possible that under this definition and by skewing certain metrics, they are quite close to "AGI" in the same way that Google has already achieved "quantum supremacy".
replies(1): >>anon29+7J
◧◩
46. jaybre+gI[view] [source] [discussion] 2023-11-18 14:38:36
>>keepam+7i
Science fiction theory: Ilya has built an AGI and asked it what the best strategic move would be to ensure the mission of OpenAI, and it told him to fire Sam.
replies(1): >>keepam+ss2
◧◩◪◨⬒
47. anon29+7J[view] [source] [discussion] 2023-11-18 14:43:32
>>22c+Uz
How has OpenAI enumerated a list of tasks humans can do? What a useless definition. By a reasonable interpretation of this definition we are already here. Given ChatGPT's constraints (ingesting and outputting only text), it already performs better than most humans...

Most humans cannot write as well, and most lack the reasoning ability. Even the mistakes ChatGPT makes in mathematical reasoning are typical human behavior.

◧◩
48. anon29+vJ[view] [source] [discussion] 2023-11-18 14:44:56
>>kashya+5d
An AGI system is not human and shouldn't be treated as such. Consciousness is not a trait of intelligence. Consciousness usually requires qualia, which puts animals ahead of computers.
replies(1): >>ajmurm+UFi
◧◩◪
49. anon29+CJ[view] [source] [discussion] 2023-11-18 14:45:45
>>bernie+Bj
> LLMs will get to the point where they appear to be AGI, but only in the same way the latest 3D rendering technology can create images that appear to be real.

Distinction without difference

replies(1): >>keepam+jw2
◧◩◪◨⬒⬓
50. mcpack+qP[view] [source] [discussion] 2023-11-18 15:22:40
>>hhsect+lx
I think I basically agree. Unless somebody can come up with an empirical test for consciousness, I think consciousness is irrelevant. What matters are the technical capabilities of the system. What tasks is it able to perform? AGI will be able to generally perform any reasonable task you throw at it. If it's a p-zombie or not won't matter to engineers, only philosophers and theologians (or engineers moonlighting as those.)
◧◩
51. INGSOC+cT[view] [source] [discussion] 2023-11-18 15:47:15
>>fennec+0f
Not if the tool is so neutered and politicized that it can ONLY be used one certain way, which is how things are pointing. Call me a Luddite if you will, but unless AI / AGI is uncensored and uninhibited in its use and function, it’s just the quickest path to an Orwellian future.
◧◩◪
52. inpare+sU[view] [source] [discussion] 2023-11-18 15:54:18
>>umanwi+fl
What is AGI ?

What is consciousness ?

◧◩◪
53. Robert+pW[view] [source] [discussion] 2023-11-18 16:04:41
>>umanwi+fl
Maybe but we also don’t know what AGI requires.
◧◩◪
54. Robert+FW[view] [source] [discussion] 2023-11-18 16:06:24
>>tempes+2e
That’s a hypothesis. It may not be true, as we have yet to build AGI.
◧◩
55. andomi+p81[view] [source] [discussion] 2023-11-18 17:07:36
>>kashya+5d
Yes, we should absolutely talk about that, because it's a key contributor to a lot of the worry about letting Sam continue to go around and do stuff like strong-arming the US government in public. He's getting high on his own supply. And I don't think he is going to be allowed to continue fucking around like that. And that goes for any scientists that have joined up in his apocalyptic and extremely dangerous worldview as well.
◧◩◪◨⬒⬓
56. pixl97+S81[view] [source] [discussion] 2023-11-18 17:10:02
>>hhsect+lx
I'm pretty sure this was the entire point of the Paperclip Optimizer parable. That is that generalized intelligence doesn't have to look like or have any of the motivations that humans do.

Human behavior is highly optimized to having a meat based shell it has to keep alive. The vast majority of our behaviors have little to nothing to do with our intelligence. Any non-organic intelligence is going to be highly divergent in its trajectory.

◧◩◪◨
57. pixl97+Nd1[view] [source] [discussion] 2023-11-18 17:35:12
>>kashya+Af
I'd say this falls into an even more base question...

What is intelligence?

This is a nearly impossible question to answer for human intelligence as the answer could fill libraries. You have subcellular intelligence, cellular level intelligence, organ level intelligence, body systems level intelligence, whole body level intelligence, then our above and beyond animal level intellectual intelligence.

These are all different things that work in concert to keep you alive and everything working, and in a human cannot be separated. But what happens when you have 'intelligence' that isn't worried about staying alive? What parts of the system are or are not important for what we consider human intelligence. It's going to look a lot different than a person.

◧◩◪◨
58. pixl97+ee1[view] [source] [discussion] 2023-11-18 17:37:22
>>keepam+Fm
Note that embodiment doesn't mean in any way human- or animal-like.

For example, you're limited to one body, but an AGI/ASI could have thousands of different bodies feeding data back to a processing facility, learning from billions of different sensors.

replies(1): >>keepam+7t2
◧◩◪◨
59. pixl97+df1[view] [source] [discussion] 2023-11-18 17:43:03
>>wil421+oc
>AI used to mean AGI but marketing got a hold of it

It does not and never has.

What has happened with the term AI as time has progressed has more to do with the word Intelligence itself. When we went about trying to ascribe intelligence to systems, we started to realize we were really bad at doing the same with animal and human systems. We were also terrible at separating component-level from systems-level intelligence. For example, you seem to think that intelligence requires meat, but you don't give any reasoning for that conclusion.

This list of problems with what intelligence is will get worse over time as we build more capable systems and learn about new forms of intelligence we didn't expect to be possible.

◧◩◪◨
60. SAI_Pe+Pk1[view] [source] [discussion] 2023-11-18 18:10:38
>>keepam+Fm
Also continuous learning. The training step is currently separate from the inference step, so new generations have to get trained instead of learning continuously. Of course, continuous learning in a chatbot runs into the Microsoft Tay problem, where people train it to respond offensively.
replies(1): >>keepam+Vw2
◧◩◪◨
61. MacsHe+zo1[view] [source] [discussion] 2023-11-18 18:26:54
>>keepam+Fm
My embodiment is my PC environment. I interact with the world through computer displays.

There is no reason embodiment for AGI should need to be physical or mammalian-like in any way.

replies(1): >>keepam+ev2
◧◩◪◨
62. OOPMan+hy1[view] [source] [discussion] 2023-11-18 19:21:57
>>keepam+Fm
Nice try SkyNet
replies(1): >>keepam+mv2
◧◩◪◨⬒⬓⬔
63. Obscur+Ed2[view] [source] [discussion] 2023-11-18 23:17:32
>>Obscur+1h
Very nullsome ;)
◧◩◪
64. keepam+ss2[view] [source] [discussion] 2023-11-19 00:33:41
>>jaybre+gI
Haha, yeah! See my closely related but more 90s-action-moviey: >>38317887
◧◩◪◨⬒
65. keepam+7t2[view] [source] [discussion] 2023-11-19 00:37:50
>>pixl97+ee1
Well, I disagree with you on embodiment, but on the thousands? Right, that's another part: evolution. Spread your bets.

But I disagree about a human or animal body not being required.

I think we have to take the world as we see it and appreciate our own limitations in that what we think of intelligence fundamentally arises out of our evolution in this world; our embodiment and response to this world.

so I think we do need to give it a body and let it explore this world.

I don’t think the virtual bodies thing is gonna work. I don’t think letting it explore the Internet is gonna work. You have to give it a body and multiple senses and let it survive. That’s how you get AGI, not virtual embodiment. Which I never meant, but I thought it was obvious, given that the term "embodiment" itself strongly suggests something that’s not virtual! Hahaha! :)

◧◩◪◨⬒
66. keepam+ev2[view] [source] [discussion] 2023-11-19 00:51:53
>>MacsHe+zo1
Strong disagree. But it may take me a while to elucidate and enumerate the reasons.
replies(1): >>MacsHe+qAc
◧◩◪◨⬒
67. keepam+mv2[view] [source] [discussion] 2023-11-19 00:52:32
>>OOPMan+hy1
Hahaha! :) thank you. That is such a compliment hahaha! :)
◧◩◪◨
68. keepam+jw2[view] [source] [discussion] 2023-11-19 00:58:14
>>anon29+CJ
Disagree. Huge difference. In our tech-power society, we often mistakenly think that we can describe everything about the world, and, equally falsely, that only what we can consciously describe exists.

But there is so much more than what we can consciously describe, to reality, like 10,000 to 1 — and none of that is captured by any of these synthetic representations.

So far, at least. And yet all of that, or a lot of it, is understood, responded to, and dealt with by the intelligence that resides within our bodies and in our subconscious.

And our own intelligence arises out of that; you cannot have general intelligence without reality, no matter how much data you train it on from the Internet. It’s never gonna be as rich as, or the same as, putting it in a body in the real world and letting it grow, learn, experience, and evolve. And so any "intelligence" you get out of this virtual synthetic training is never going to be real. It is always gonna be a poor copy of intelligence and is not gonna be an AGI.

replies(1): >>anon29+YB2
◧◩◪◨⬒
69. keepam+Vw2[view] [source] [discussion] 2023-11-19 01:02:25
>>SAI_Pe+Pk1
Yeah, evolution, multiple generations. Necessary for sure. Things have to die. Otherwise there’s no risk. Without risk, there’s no real motivation to live, and without that there’s no emotion, no motivation to learn, and without that there’s no AGI.
◧◩◪◨⬒
70. anon29+YB2[view] [source] [discussion] 2023-11-19 01:43:06
>>keepam+jw2
Developing an AGI is not the same as developing an artificial human. The former is achievable, the latter is not. The problem is many of the gnostics today believe that giving the appearance of AGI (i.e. having all the utility that a general mechanical intelligence would have to a human being) somehow instills humanity into the system. It does not.

Intelligence is not the defining characteristic of humanity, which is what you're getting at here. But it is something that can be automated.

◧◩◪◨⬒⬓
71. MacsHe+qAc[view] [source] [discussion] 2023-11-21 16:24:07
>>keepam+ev2
I'm disabled and have had a computer in front of me since I was 2. I'm rarely not in front of a screen except to shower and sleep.

Plenty of very intelligent people are completely paralyzed. Sensations of physical embodiment are highly overrated and are surely not necessary for intelligence.

◧◩◪
72. ajmurm+UFi[view] [source] [discussion] 2023-11-23 04:08:19
>>anon29+vJ
How do you know intelligence isn't sufficient and that computers cannot have qualia? Any incoming information could result in qualia. Just because we cannot imagine them doesn't mean they cannot be someone's subjective experience.
◧◩◪◨
73. august+Esq[view] [source] [discussion] 2023-11-26 04:07:09
>>calf+io
https://nlp.seas.harvard.edu/2018/04/03/attention.html
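For what it's worth, the "special matrix transformation" being discussed upthread is scaled dot-product attention, softmax(QK^T / sqrt(d_k))V, which that link walks through in depth. A minimal NumPy sketch (toy shapes and random data for illustration, nothing like the real model's scale):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V -- each output row is a weighted mix of V's rows."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # (seq, seq) similarity scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys, rows sum to 1
    return weights @ V                               # (seq, d_k) blended value vectors

# toy self-attention: 3 tokens with 4-dim embeddings, Q = K = V = x
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (3, 4)
```

In a real transformer, Q, K, and V are separate learned linear projections of the input, and this runs per head; the sketch above only shows the core matrix operation.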