zlacker

[parent] [thread] 38 comments
1. Rugged+(OP)[view] [source] 2023-11-18 07:43:48
It's mostly a thing among the youngs I feel. Anybody old enough to remember the same 'OMG it's going to change the world' cycles around AI every two or three decades knows better. The field is not actually advancing. It still wrestles with the same fundamental problems researchers faced in the early 60s. The only change is external, where gains in computing power and data set size allow brute-forcing problems.
replies(7): >>Eji170+cc >>concor+Kf >>Adunai+uu >>torgin+Xu >>hypert+LB >>fsloth+FS >>antifa+WT1
2. Eji170+cc[view] [source] 2023-11-18 09:31:50
>>Rugged+(OP)
I'd say the biggest change is the quantity of available CATEGORIZED data. Tagged images and whatnot have done a ton to help the field.

Further, there are some hybrid chips which might help increase computing power specifically for the matrix math that all these systems work on.

But yeah, none of this is making what people talk about when they say AGI. Just like how some tech cult people felt that Level 5 self-driving was around the corner, even with all the evidence to the contrary.

The self-driving we have (or really, assisted cruise control) IS impressive, and leagues ahead of what we could do even a decade or two ago, but the gulf between that and the goal is similar to the one between GPT and AGI in my eyes.

There are a lot of fundamental problems we still don't have answers to. We've just gotten a lot better at doing what we already did, and more uniform in how we do it.

3. concor+Kf[view] [source] 2023-11-18 10:03:49
>>Rugged+(OP)
> The field is not actually advancing.

Uh, what do you mean by this? Are you trying to draw a fundamental science vs engineering distinction here?

Because today's LLMs definitely have capabilities we previously didn't have.

replies(1): >>oska+Uh
4. oska+Uh[view] [source] [discussion] 2023-11-18 10:20:45
>>concor+Kf
They don't have 'artificial intelligence' capabilities (and never will).

But it is an interesting technology.

replies(1): >>concor+pi
5. concor+pi[view] [source] [discussion] 2023-11-18 10:24:11
>>oska+Uh
They can be the core part of a system that can do a junior dev's job.

Are you defining "artificial intelligence" in some unusual way?

replies(2): >>oska+bj >>hedora+Cm
6. oska+bj[view] [source] [discussion] 2023-11-18 10:31:13
>>concor+pi
I'm defining intelligence in the usual way, and intelligence requires understanding, which is not possible without consciousness.

I follow Roger Penrose's thinking here. [1]

[1] https://www.youtube.com/watch?v=2aiGybCeqgI&t=721s

replies(3): >>concor+al >>wilder+tt >>Zambyt+aN
7. concor+al[view] [source] [discussion] 2023-11-18 10:46:57
>>oska+bj
> intelligence requires understanding which is not possible without consciousness

How are you defining "consciousness" and "understanding" here? Because a feedback loop into an LLM would meet the most common definition of consciousness (possessing a phonological loop). And having an accurate internal predictive model of a system is the normal definition of understanding, and a good LLM has that too.

replies(1): >>Feepin+yO
8. hedora+Cm[view] [source] [discussion] 2023-11-18 11:00:55
>>concor+pi
If by “junior dev”, you mean “a dev at a level so low they will be let go if not promoted”, then I agree.

I’ve watched my coworkers try to make use of LLMs at work, and it has convinced me that the LLMs’ contributions are well below the bar where their output is a net benefit to the team.

replies(2): >>raccoo+ar >>int_19+Cr2
9. raccoo+ar[view] [source] [discussion] 2023-11-18 11:35:38
>>hedora+Cm
It works pretty well in my C++ code. Context: modern C++ with few footguns, inside functions with pretty self-explanatory names.

I don't really get the "low bar for contributions" argument, because GH Copilot's contributions are too small for there to even be any bar. It writes the obvious and tedious loops and other boilerplate so I can focus on what the code should actually do.
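
A made-up example of the scale I'm talking about (hypothetical names, nothing from my actual codebase): given a signature this self-explanatory, the body is exactly the kind of tedious loop it reliably fills in.

    #include <string>
    #include <vector>

    // Hypothetical function: the name and signature make the body obvious.
    std::vector<std::string> filterNamesByPrefix(
        const std::vector<std::string>& names,
        const std::string& prefix) {
        std::vector<std::string> result;
        result.reserve(names.size());
        for (const auto& name : names) {
            // rfind(prefix, 0) == 0 is the pre-C++20 starts_with check
            if (name.rfind(prefix, 0) == 0) {
                result.push_back(name);
            }
        }
        return result;
    }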

10. wilder+tt[view] [source] [discussion] 2023-11-18 11:52:44
>>oska+bj
It’s cool to see people recognizing this basic fact: consciousness is a prerequisite for intelligence. GPT is a philosophical zombie.
replies(1): >>bagofs+qW
11. Adunai+uu[view] [source] 2023-11-18 12:00:44
>>Rugged+(OP)
As an outsider, I can talk to AI and get more coherent responses than from humans (flawed, but it's getting better). That's tangible, that's an improvement. I for one don't even consider the Internet to be as revolutionary as the steam engine or freight trains. But AI is actually modifying my own life already - and that's far from the end.

P.S. I've just created this account here on Hacker News because Altman is one of the talking heads I've been listening to. Not too sure what to make of this. I'm an accelerationist, so my biggest fear is America stifling its research the same way it buried space exploration and human gene editing in the past. All hope is for China - but then again, the CCP might be even more fearful of non-human entities than the West. Stormy times indeed.

12. torgin+Xu[view] [source] 2023-11-18 12:02:51
>>Rugged+(OP)
LLMs have changed the world more profoundly than any technology in the past 2 decades, I'd argue.

The fact that we can communicate with computers using just natural language, query data, and use powerful and complex tools just by describing what we want is an incredible breakthrough, and that's a very conservative use of the technology.

replies(3): >>foldr+Xv >>qetern+MA >>theobr+XA
13. foldr+Xv[view] [source] [discussion] 2023-11-18 12:11:25
>>torgin+Xu
I don't actually see anything changing, though. There are cool demos, and LLMs can work effectively to enhance productivity for some tasks, but nothing feels fundamentally different. If LLMs were suddenly taken away I wouldn't particularly care. If the clock were turned back two decades, I'd miss wifi (only barely available in 2003) and smartphones with GPS.
replies(2): >>peigno+Xx >>FabHK+Gz
14. peigno+Xx[view] [source] [discussion] 2023-11-18 12:22:59
>>foldr+Xv
You need time for inertia to happen. I’m working on some MVPs now, and it takes time to test what works, what’s possible, and what doesn’t…
15. FabHK+Gz[view] [source] [discussion] 2023-11-18 12:34:05
>>foldr+Xv
Indeed. The "Clamshell" iBook G3 [0] (aka Barbie's toilet seat), introduced in 1999, had WiFi capabilities (as demonstrated by Phil Schiller jumping down onto the stage while online [1]), but IIRC, you had to pay extra for the optional WiFi card.

[0] https://en.wikipedia.org/wiki/IBook#iBook_G3_(%22Clamshell%2... [1] https://www.youtube.com/watch?v=1MR4R5LdrJw

16. qetern+MA[view] [source] [discussion] 2023-11-18 12:42:28
>>torgin+Xu
I am massively bullish on LLMs, but this is hyperbole.

Smartphones changed day-to-day human life more profoundly than anything since the steam engine.

replies(1): >>torgin+F52
17. theobr+XA[view] [source] [discussion] 2023-11-18 12:44:32
>>torgin+Xu
That breakthrough would not be possible without the ubiquity of personal computing at home and in your pocket, though, which seems like the bigger change in the last two decades.
18. hypert+LB[view] [source] 2023-11-18 12:50:39
>>Rugged+(OP)
Deep learning was an advance. I think the fundamental achievement is a way to use all that parallel processing power and data. Inconceivable amounts of data can give seemingly magical results. Yes, overfitting and generalization are still problems.

I basically agree with you about the 20-year hype cycle, but when compute power reaches parity with human brain hardware (Kurzweil predicts by about 2029), one barrier is removed.

replies(1): >>somewh+oX
19. Zambyt+aN[view] [source] [discussion] 2023-11-18 14:01:00
>>oska+bj
I think answering this may illuminate the division in schools of thought: do you believe life was created by a higher power?
replies(1): >>oska+uO
20. oska+uO[view] [source] [discussion] 2023-11-18 14:08:56
>>Zambyt+aN
My beliefs aren't really important here but I don't believe in 'creation' (i.e. no life -> life); I believe that life has always existed
replies(2): >>concor+HR >>Zambyt+0q3
21. Feepin+yO[view] [source] [discussion] 2023-11-18 14:09:05
>>concor+al
No, you're not supposed to actually have an empirical model of consciousness. "Consciousness" is just "that thing that computers don't have".
22. concor+HR[view] [source] [discussion] 2023-11-18 14:25:46
>>oska+uO
Now that is so rare I've never even heard of someone expressing that view before...

Materialists normally believe in a big bang (which has no life) and religious people normally think a higher being created the first life.

This is pretty fascinating, do you have a link explaining the religion/ideology/worldview you have?

replies(1): >>nprate+s11
23. fsloth+FS[view] [source] 2023-11-18 14:30:47
>>Rugged+(OP)
This time around they’ve actually come up with a real productizable piece of tech, though. I don’t care what it’s called, but I enjoy better automation that takes away as much of the boring shit as possible, and that chips in on coding when it’s bloody obvious from the context what the few lines of code will be.

So not an ”AI”, but closer to ”universal adaptor” or ”smart automation”.

Pretty nice in any case. And if true AI is possible, the automations enabled by this will probably be part of the narrative of how we reach it (just like mundane things like standardized screws were part of the narrative of the Apollo missions).

24. bagofs+qW[view] [source] [discussion] 2023-11-18 14:51:08
>>wilder+tt
Problem is, we have no agreed-upon operational definition of consciousness. Arguably, it's the secular equivalent of the soul. Something everyone believes they have, but which is not testable, locatable or definable.

And yet (just like with the soul) we're sure we have it, and it's impossible for anything else to have it. Perhaps consciousness is simply a hallucination that makes us feel special about ourselves.

replies(2): >>howrar+c51 >>wilder+wf1
25. somewh+oX[view] [source] [discussion] 2023-11-18 14:57:34
>>hypert+LB
Human and computer hardware are not comparable; after all, even with the latest chips the computer is just (many) von Neumann machine(s) operating on a very big (shared) tape. To model the human brain in such a machine would require the human brain to be discretizable, which, given its essentially biochemical nature, is not possible, certainly not by 2029.
replies(1): >>hypert+wB2
26. nprate+s11[view] [source] [discussion] 2023-11-18 15:25:06
>>concor+HR
Buddhism
27. howrar+c51[view] [source] [discussion] 2023-11-18 15:47:58
>>bagofs+qW
You can't even know that other people have it. We just assume they do because they look and behave like us, and we know that we have it ourselves.
28. wilder+wf1[view] [source] [discussion] 2023-11-18 16:46:17
>>bagofs+qW
I disagree. There is a simple test for consciousness: empathy.

Empathy is the ability to emulate the contents of another consciousness.

While an agent could mimic empathetic behaviors (and words), given enough interrogation and testing you would encounter an out-of-training case that it would fail.

replies(2): >>concor+El1 >>int_19+Yr2
29. concor+El1[view] [source] [discussion] 2023-11-18 17:14:27
>>wilder+wf1
Uh... so is it autistic people or non-autistic people who lack consciousness? (Generally autistic people emulate other autistic people better and non-autists emulate non-autists better)

> given enough interrogation and testing you would encounter an out-of-training case that it would fail.

This is also the case with regular humans.

30. antifa+WT1[view] [source] 2023-11-18 20:19:23
>>Rugged+(OP)
> Anybody old enough to remember the same 'OMG its going to change the world' cycles around AI every two or three decades

Hype and announcements, sure, but this is the first time there's actually a product.

replies(1): >>dragon+xU1
31. dragon+xU1[view] [source] [discussion] 2023-11-18 20:24:05
>>antifa+WT1
> Hype and announcements, sure, but this is the first time there's actually a product.

No, it's not. It's just that once the hype cycle dies down, we tend to stop calling the products of the last AI hype cycle "AI"; we call them after the name of the more specific implementation technology (rules engines/expert systems being one of the older ones, for instance).

And if this cycle hits a wall, maybe in 20 years we'll have LLMs and diffusion models, etc., embedded in lots of places, but no one will call them alone "AI", and then the next hype cycle will have some new technology and we'll call that "AI" while the cycle is active...

32. torgin+F52[view] [source] [discussion] 2023-11-18 21:28:13
>>qetern+MA
I'm kinda curious as to why you think that's the case. I mean, smartphones are nice, and having a browser, chat client, camera etc. in my pocket is nice, but (maybe because I have been terminally screen-bound all my life) I could do almost all those things on my PC before, and I could always call folks when on the go.

I've never experienced the massively life-changing effects of having a smartphone, and (thankfully) none of my friends seem to be those people who are always looking at their phones.

replies(1): >>331c8c+XN3
33. int_19+Cr2[view] [source] [discussion] 2023-11-18 23:27:38
>>hedora+Cm
Conversely, I was very skeptical of its ability to help with coding something non-trivial. Then I found out that the more readable your code is - in a very human way, like descriptive identifiers, comments, etc. - the better this "smart autocomplete" is. It's certainly good enough to save me a lot of typing, so it is a net benefit.
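
A hypothetical illustration (made-up function, not my actual code): with identifiers and a leading comment like these, the body is the kind of thing the autocomplete almost always gets right.

    #include <map>
    #include <string>
    #include <vector>

    // Count how many times each word appears in the input.
    std::map<std::string, int> countWordFrequencies(
        const std::vector<std::string>& words) {
        std::map<std::string, int> frequencies;
        for (const auto& word : words) {
            ++frequencies[word];
        }
        return frequencies;
    }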
34. int_19+Yr2[view] [source] [discussion] 2023-11-18 23:29:58
>>wilder+wf1
For one thing, this would imply that clinical psychopaths aren't conscious, which would be a very weird takeaway.

But also, how do you know that LMs aren't empathic? By your own admission they do "mimic empathetic behaviors", but you reject this as the real thing because you claim that with enough testing you would encounter a failure. This raises all kinds of "no true Scotsman" flags, not to mention that empathy failure is not exactly uncommon among humans. So how exactly do you actually test your hypothesis?

replies(1): >>wilder+GH4
35. hypert+wB2[view] [source] [discussion] 2023-11-19 00:17:07
>>somewh+oX
It depends on the resolution of discretization required. Kurzweil's prediction is premised on his opinion of this.

Note that engineering fluid simulation (CFD) makes these choices in the discretization of PDEs all the time, based on application requirements.

36. Zambyt+0q3[view] [source] [discussion] 2023-11-19 06:32:51
>>oska+uO
Do you believe:

1) Earth has an infinite past that has always included life

2) The Earth as a planet has a finite past, but it (along with what made up the Earth) is in some sense alive, and life as we know it emerged from that life

3) The Earth has a finite past, and life has transferred to Earth from somewhere else in space

4) We are the Universe, and the Universe is alive

Or something else? I will try to tie it back to computers after this short intermission :)

37. 331c8c+XN3[view] [source] [discussion] 2023-11-19 10:27:07
>>torgin+F52
While many technologies provided by the smartphone were indeed not novel, the cumulative effect of having constant access to them, and their subsequent normalization, is nothing short of revolutionary.

For instance, I remember the time when chatting online (even with people you knew offline) was considered to be a nerdy activity. Then it gradually became more mainstream and now it's the norm to do it and a lot of people do it multiple times per day. This fundamentally changes how people interact with each other.

Another example is dating. Not that I have personal experience with modern online dating (enabled by smartphones), but what I read is disturbing and captivating at the same time, e.g. the apparent normalization of "ghosting"...

38. wilder+GH4[view] [source] [discussion] 2023-11-19 17:07:17
>>int_19+Yr2
Great point and great question! Yes, it does imply that people who lack the capacity for empathy (as opposed to those who do not utilize their capacity for empathy) may lack conscious experience. Empathy failure here means lacking the data empathy provides rather than ignoring the data empathy provides (which, as you note, is common).

I’ve got a few prompts that are somewhat promising in terms of clearly showing that GPT4 is unable to correctly predict human behavior driven by human empathy. The prompts are basic thought experiments where a person has two choices: an irrational yet empathic choice, and a rational yet non-empathic choice. GPT4 does not seem able to predict that smart humans do dumb things due to empathy, unless it is prompted with such a suggestion. If it had empathy itself, it would not need to be prompted about empathy.
replies(1): >>int_19+oO5
39. int_19+oO5[view] [source] [discussion] 2023-11-19 22:19:37
>>wilder+GH4
Can you give some examples of such prompts?