zlacker

[parent] [thread] 219 comments
1. johnwh+(OP)[view] [source] 2023-11-18 02:36:00
Ilya booted him https://twitter.com/karaswisher/status/1725702501435941294
replies(5): >>dannyk+R1 >>yieldc+Un >>dwd+Fy >>painte+oF >>hitrad+nH
2. dannyk+R1[view] [source] 2023-11-18 02:50:37
>>johnwh+(OP)
This should be voted higher. Seems like an internal power struggle between the more academic types and the more commercially minded side of OpenAI.

I bet Sam goes and founds a company to take on OpenAI…and wins.

replies(8): >>thomas+U4 >>adharm+vd >>RyanSh+0f >>csomar+uq >>quickt+aw >>bitcha+oM >>croes+341 >>moogly+lh1
◧◩
3. thomas+U4[view] [source] [discussion] 2023-11-18 03:13:18
>>dannyk+R1
Yes, and wins with an inferior product. Hooray /s

If the company's 'Chief Scientist' is this unhappy about the direction the CEO is taking the company, maybe there's something to it.

replies(5): >>dmix+Rk >>pauldd+Ko >>lll-o-+wr >>adrr+ft >>quickt+Gw
◧◩
4. adharm+vd[view] [source] [discussion] 2023-11-18 04:17:42
>>dannyk+R1
From all accounts, Altman is a smart operator, so the whole story doesn't make sense. If Altman is the prime mover, how does he not have enough traction with the board to protect his own position, allowing a few non-techies to boot him out?
replies(1): >>biddit+ws
◧◩
5. RyanSh+0f[view] [source] [discussion] 2023-11-18 04:29:08
>>dannyk+R1
The abrupt nature and accusatory tone of the letter make it sound like more was going on than a disagreement. Why not just say, “the board has made the difficult decision to part ways with Altman”?
replies(1): >>krick+Bv
◧◩◪
6. dmix+Rk[view] [source] [discussion] 2023-11-18 05:09:00
>>thomas+U4
You're putting a lot of trust in the power of one man, who easily could have influenced the three other board members. It's hard to know if this amounts to more than a personal feud that escalated and then got wrapped in a pretty bow of "AI safety" and "non-profit vs. profits".
7. yieldc+Un[view] [source] 2023-11-18 05:30:40
>>johnwh+(OP)
Today's lesson: keep multiple board seats

None of the tech giants would be where they are today if they didn't ram through unique versions of control

Their boards or shareholders would have ousted every FAANG CEO at less palatable parts of the journey

replies(3): >>cedws+go >>100ide+Jo >>oivey+Xs
◧◩
8. cedws+go[view] [source] [discussion] 2023-11-18 05:34:35
>>yieldc+Un
This is a surprising advantage Zuckerberg has in manoeuvring Meta. At least, to my knowledge, he is still effectively a dictator.
replies(1): >>100ide+Uo
◧◩
9. 100ide+Jo[view] [source] [discussion] 2023-11-18 05:38:09
>>yieldc+Un
This comment is tone-deaf to the unique (and effective? TBD) arrangement between the OpenAI 501(c)(3) board, which serves without compensation, and the company it regulates. Your comment strikes me as not appreciating the unusually civic-minded arrangement, at least superficially, that is enabling the current power play. Maybe read the board's letter more carefully and provide your reaction. You castigate them as "non-techies" - meaning… what?
replies(2): >>yieldc+Np >>arthur+jr
◧◩◪
10. pauldd+Ko[view] [source] [discussion] 2023-11-18 05:38:11
>>thomas+U4
Maybe.

But Altman has a great track record as CEO.

Hard to imagine he suddenly became a bad CEO. Possible. But unlikely.

replies(3): >>csomar+Sq >>maltha+Ox >>baq+6S
◧◩◪
11. 100ide+Uo[view] [source] [discussion] 2023-11-18 05:40:09
>>cedws+go
Dear god, how is that an advantage? Are we all here just rooting for techno-dictator supremacy?
replies(2): >>yieldc+2q >>manvil+Vq
◧◩◪
12. yieldc+Np[view] [source] [discussion] 2023-11-18 05:45:46
>>100ide+Jo
and the lesson the ousted ones learn for their next incarnation is to create organizations that allow for more control and more flexibility in board arrangements. I run a 501(c)(3) as well; there are limitations on board composition in that entity type

nothing tone-deaf about that. they wanted a for-profit, are going to make one now, and won't leave the same vector open

reread it as not being a comment about OpenAI; it was about the lesson learned by every onlooker and the ousted execs

◧◩◪◨
13. yieldc+2q[view] [source] [discussion] 2023-11-18 05:47:52
>>100ide+Uo
it's objectively an advantage in control. if that's a goal, then it's effective at achieving it

the only one inserting bias and emotion into objectivity here is you

◧◩
14. csomar+uq[view] [source] [discussion] 2023-11-18 05:49:54
>>dannyk+R1
I don't see it. Altman does not seem hacker-minded and likely will end up with an inferior product. This might be what led to this struggle. Sam is more about fundraising and getting the word out there but he should keep out of product decisions.
replies(2): >>deepGe+kA >>sujayk+pF
◧◩◪◨
15. csomar+Sq[view] [source] [discussion] 2023-11-18 05:52:01
>>pauldd+Ko
Where is this coming from? Sam does not have a "great" record as a CEO. In fact, he barely has any record. His fame came from working at YC and then the skyrocketing of OpenAI. He is great at fundraising, though.
replies(1): >>pauldd+ts
◧◩◪◨
16. manvil+Vq[view] [source] [discussion] 2023-11-18 05:52:43
>>100ide+Uo
since most public companies are owned by multi-billion-dollar hedge funds, they're not exactly pillars of democracy. and since privately owned businesses are a thing, it's really not that big of a deal
◧◩◪
17. arthur+jr[view] [source] [discussion] 2023-11-18 05:55:18
>>100ide+Jo
Tone deaf yet holds up under scrutiny
◧◩◪
18. lll-o-+wr[view] [source] [discussion] 2023-11-18 05:56:55
>>thomas+U4
Because the Chief Scientist let ideology overrule pragmatism. There is always a tension between technical and commercial. That’s a battle that should be fought daily, but never completely won.

This looks like a terrible decision, but I suppose we must wait and see.

replies(4): >>satvik+sw >>ytoaww+sx >>pmoria+Jv1 >>diogne+5V4
◧◩◪◨⬒
19. pauldd+ts[view] [source] [discussion] 2023-11-18 06:06:26
>>csomar+Sq
wat

the guy founded and was CEO of a company at 19 that sold for $43m

replies(3): >>csomar+6t >>comte7+et >>epolan+w21
◧◩◪
20. biddit+ws[view] [source] [discussion] 2023-11-18 06:06:44
>>adharm+vd
Well-connected fundraiser - obviously.

But…smart operator? Based on what? What trials has he navigated through that displayed great operational skills? When did he steer a company through a rocky time?

◧◩
21. oivey+Xs[view] [source] [discussion] 2023-11-18 06:11:11
>>yieldc+Un
Seemingly there is this consensus of board members around a senior executive. It just isn’t the CEO.
◧◩◪◨⬒⬓
22. csomar+6t[view] [source] [discussion] 2023-11-18 06:12:10
>>pauldd+ts
> As CEO, Altman raised more than $30 million in venture capital for the company; however, Loopt failed to gain traction with enough users.

It is easy to sell a company for $43M if you raised at least $43M. Granted, we don't know the total amount raised, but it certainly isn't the big success you are describing. That, and I already mentioned that he is good at corporate sales.

replies(3): >>pauldd+zu >>plinga+kw >>garden+eL1
◧◩◪◨⬒⬓
23. comte7+et[view] [source] [discussion] 2023-11-18 06:13:13
>>pauldd+ts
Ah yes the legendary social networking giant loopt
◧◩◪
24. adrr+ft[view] [source] [discussion] 2023-11-18 06:13:43
>>thomas+U4
An inferior product is better than an unreleased product.
replies(1): >>hk__2+9w
◧◩◪◨⬒⬓⬔
25. pauldd+zu[view] [source] [discussion] 2023-11-18 06:25:33
>>csomar+6t
> he is good in corporate sales

Which is a big part of being a great CEO

replies(2): >>csomar+Dw >>croes+O41
◧◩◪
26. krick+Bv[view] [source] [discussion] 2023-11-18 06:38:11
>>RyanSh+0f
> Why not just say, “the board has made the difficult decision to part ways with Altman”?

That's hardly any different. Nobody makes a difficult decision without any reason, and it's not like they really explained the reason.

replies(1): >>blabla+iO
◧◩◪◨
27. hk__2+9w[view] [source] [discussion] 2023-11-18 06:44:11
>>adrr+ft
Does ChatGPT look unreleased to you?
◧◩
28. quickt+aw[view] [source] [discussion] 2023-11-18 06:44:21
>>dannyk+R1
I bet not (we could bet with play money on manifold.markets; I would bet at 10% probability). Because you need the talent, the chips, the IP development, the billions. He could get the money, but the talent is going to be hard unless he has a great narrative.
replies(3): >>manees+Uw >>erhaet+mB >>taneq+3m1
◧◩◪◨⬒⬓⬔
29. plinga+kw[view] [source] [discussion] 2023-11-18 06:46:24
>>csomar+6t
According to Crunchbase, Loopt raised $39.1M.
replies(1): >>grumpl+AZ
◧◩◪◨
30. satvik+sw[view] [source] [discussion] 2023-11-18 06:47:34
>>lll-o-+wr
As long as truly "open" AI wins, as in fully open-source AI, I'm fine with such a "leadership transition."
replies(1): >>aidama+5N
◧◩◪◨⬒⬓⬔⧯
31. csomar+Dw[view] [source] [discussion] 2023-11-18 06:49:07
>>pauldd+zu
It is a big part of start-up culture and getting seed liquidity. It doesn't make you a great long-term CEO, however.
◧◩◪
32. quickt+Gw[view] [source] [discussion] 2023-11-18 06:49:38
>>thomas+U4
You can't win with an inferior product here. Not yet anyway. The utility is in the usefulness of the AI, and we've only just reached the point where it's useful enough for daily workflows. This isn't an ERP-type thing where you outsell your rivals on sales prowess alone. This is more like the iPhone 3 just got released.
◧◩◪
33. manees+Uw[view] [source] [discussion] 2023-11-18 06:51:55
>>quickt+aw
Isn't his narrative that he is basically the only person in the world who has already done this?
replies(2): >>wly_cd+by >>rasz+AD
◧◩◪◨
34. ytoaww+sx[view] [source] [discussion] 2023-11-18 06:57:03
>>lll-o-+wr
OpenAI is a non-profit research organisation.

Its for-profit (capped-profit) subsidiary exists solely to enable competitive compensation for its researchers, ensuring they don't have to worry about the opportunity costs of working at a non-profit.

They have a mutually beneficial relationship with a deep-pocketed partner who can perpetually fund their research in exchange for exclusive rights to commercialize any ground-breaking technology they develop and choose to allow to be commercialized.

Aggressive commercialization is at odds with their raison d'être and they have no need for it to fund their research. For as long as they continue to push forward the state of the art in AI and build ground-breaking technology they can let Microsoft worry about commercialization and product development.

If a CEO is not just distracting but actively hampering an organisation's ability to fulfill its mission then their dismissal is entirely warranted.

replies(4): >>Rugged+zC >>fuzzte+qI >>dagaci+iL >>mikoto+Ww1
◧◩◪◨
35. maltha+Ox[view] [source] [discussion] 2023-11-18 06:59:55
>>pauldd+Ko
or alternatively: altman has the ability to leverage his network to fail upwards

let's see if he can pull it off again or goes all-in on his data privacy nightmare / shitcoin double-whammy

replies(1): >>6510+vA
◧◩◪◨
36. wly_cd+by[view] [source] [discussion] 2023-11-18 07:03:53
>>manees+Uw
No, Sutskever and colleagues did it. Sam sold it. Which is a lot, but is not doing it.
37. dwd+Fy[view] [source] 2023-11-18 07:07:59
>>johnwh+(OP)
Jeremy Howard called ngmi on OpenAI during the Vanishing Gradients podcast yesterday, and Ilya has probably been thinking the same: LLMs are a dead end and not the path to AGI.

https://twitter.com/HamelHusain/status/1725655686913392933

replies(6): >>Sebgue+2z >>erhaet+7B >>MattRi+hB >>garden+TG >>Alchem+iN >>tarrud+GO
◧◩
38. Sebgue+2z[view] [source] [discussion] 2023-11-18 07:11:18
>>dwd+Fy
This is the reverse of their apparent differences, at least as stated elsewhere in the comments.
◧◩◪
39. deepGe+kA[view] [source] [discussion] 2023-11-18 07:23:36
>>csomar+uq
Brockman is with Sam, which makes them a formidable duo. Should they choose to, they will offer stiff competition to OpenAI but they may not even want to compete.
replies(2): >>kazama+1B >>csomar+jL
◧◩◪◨⬒
40. 6510+vA[view] [source] [discussion] 2023-11-18 07:24:57
>>maltha+Ox
Train a LLM exclusively on HN and make it into a serial killer app generator.
replies(1): >>aku286+dG
◧◩◪◨
41. kazama+1B[view] [source] [discussion] 2023-11-18 07:30:02
>>deepGe+kA
For a company to be as successful as OpenAI, two people won't cut it. OpenAI arguably has the best ML talent at the moment. Talent attracts talent. People come for Sutskever, Karpathy, and the like -- not for Altman or Brockman.
replies(4): >>ignora+EE >>aidama+1N >>snordg+cN >>mv4+LH1
◧◩
42. erhaet+7B[view] [source] [discussion] 2023-11-18 07:31:39
>>dwd+Fy
Did we ever think LLMs were a path to AGI...? AGI is friggin hard, I don't know why folks keep getting fooled whenever a bot writes a coherent sentence.
replies(8): >>Rugged+fC >>Closi+1D >>mjan22+AI >>andrep+zN >>concor+CR >>golol+DS >>heavys+ye1 >>discor+rv1
◧◩
43. MattRi+hB[view] [source] [discussion] 2023-11-18 07:33:08
>>dwd+Fy
This is not the reason Ilya did it. Also the rest of that guy’s comments were just really poorly thought out. OpenAI had to temporarily stop sign ups because of demand and somehow he thinks that’s a bad thing? Absurd.

That guy has no sense of time, of how fast this stuff has actually been moving.

replies(2): >>ignora+iE >>Alexan+IF
◧◩◪
44. erhaet+mB[view] [source] [discussion] 2023-11-18 07:33:53
>>quickt+aw
I'll sell my soul for about $600K/yr. Can't say I'm at the top of the AI game but I did graduate with a "concentration in AI" if that counts for anything.
replies(2): >>xvecto+DF >>JohnFe+om1
◧◩◪
45. Rugged+fC[view] [source] [discussion] 2023-11-18 07:43:48
>>erhaet+7B
It's mostly a thing among the young, I feel. Anybody old enough to remember the same 'OMG it's going to change the world' cycles around AI every two or three decades knows better. The field is not actually advancing. It still wrestles with the same fundamental problems it was wrestling with in the early 60s. The only change is external, where gains in computing power and data set size allow brute-forcing problems.
replies(7): >>Eji170+rO >>concor+ZR >>Adunai+J61 >>torgin+c71 >>hypert+0e1 >>fsloth+Uu1 >>antifa+bw2
◧◩◪◨⬒
46. Rugged+zC[view] [source] [discussion] 2023-11-18 07:46:33
>>ytoaww+sx
Even a non-profit needs to focus on profitability; otherwise it's not going to exist for very long. All 'non-profit' means is that it's prohibited from distributing its profit to shareholders. Ownership of a non-profit doesn't pay you. The non-profit itself still wants, and is trying, to generate more than it spends.
replies(1): >>ytoaww+4D
◧◩◪
47. Closi+1D[view] [source] [discussion] 2023-11-18 07:49:56
>>erhaet+7B
Mainly because LLMs have so far basically passed every formal test of ‘AGI’ including totally smashing the Turing test.

Now we are just reliant on ‘I’ll know it when I see it’.

LLMs as AGI isn’t about looking at the mechanics and trying to see if we think that could cause AGI - it’s looking at the tremendous results and success.

replies(5): >>garden+gH >>peyton+EL >>drsopp+T01 >>ChatGT+Ri1 >>strahl+9j1
◧◩◪◨⬒⬓
48. ytoaww+4D[view] [source] [discussion] 2023-11-18 07:50:11
>>Rugged+zC
I addressed that concern in my third paragraph.
◧◩◪◨
49. rasz+AD[view] [source] [discussion] 2023-11-18 07:55:04
>>manees+Uw
his narrative being that he bait-and-switched the actual scientists implementing the thing under the guise of a non-profit?
◧◩◪
50. ignora+iE[view] [source] [discussion] 2023-11-18 08:01:17
>>MattRi+hB
I mean, let's not jump to conclusions. Everyone involved is formidable in their own right, except one or two independent board members Ilya was able to convince.
◧◩◪◨⬒
51. ignora+EE[view] [source] [discussion] 2023-11-18 08:04:47
>>kazama+1B
Pachocki, Director of Research, just quit: >>38316378

Real chance of an exodus, which will be an utter shame.

52. painte+oF[view] [source] 2023-11-18 08:11:47
>>johnwh+(OP)
Elon Musk was talking about his view on OpenAI, and especially the role of Ilya, just 8 days ago on the Lex Fridman Podcast.

Listening to it again now, it feels like he might have known what was going on:

https://youtu.be/JN3KPFbWCy8?si=WnCdW45ccDOb3jgb&t=5100

Edit: Especially this part: "It was created as a non-profit open source and now it is closed-source for maximum profit... Which I think is not good karma... ..."

https://youtu.be/JN3KPFbWCy8?si=WnCdW45ccDOb3jgb&t=5255

replies(2): >>Lacerd+gR >>thoman+La2
◧◩◪
53. sujayk+pF[view] [source] [discussion] 2023-11-18 08:11:49
>>csomar+uq
Maybe now he'll focus on worldcoin instead?
◧◩◪◨
54. xvecto+DF[view] [source] [discussion] 2023-11-18 08:13:36
>>erhaet+mB
That is "normal"/low-end IC6 pay at a tech company, the ML researchers involved here are pulling well into the millions.
replies(1): >>fyokdr+9V
◧◩◪
55. Alexan+IF[view] [source] [discussion] 2023-11-18 08:14:20
>>MattRi+hB
"That guy" has a pretty good idea when it comes to NLP

https://arxiv.org/abs/1801.06146

replies(1): >>Lacerd+kQ
◧◩◪◨⬒⬓
56. aku286+dG[view] [source] [discussion] 2023-11-18 08:18:03
>>6510+vA
This. I would like my serial killer to say some profound shit before he kills me.
replies(1): >>fyokdr+KU
◧◩
57. garden+TG[view] [source] [discussion] 2023-11-18 08:25:37
>>dwd+Fy
Nonsense really
◧◩◪◨
58. garden+gH[view] [source] [discussion] 2023-11-18 08:28:47
>>Closi+1D
Since ChatGPT is not indistinguishable from a human during a chat, is it fair to say it smashes the Turing test? Or do you mean something different?
replies(3): >>rayeig+fM >>NoOn3+qM >>aidama+vM
59. hitrad+nH[view] [source] 2023-11-18 08:29:35
>>johnwh+(OP)
This video dropped 2 weeks ago: https://www.youtube.com/watch?v=9iqn1HhFJ6c

Ilya clearly has a different approach to Sam

◧◩◪◨⬒
60. fuzzte+qI[view] [source] [discussion] 2023-11-18 08:38:54
>>ytoaww+sx
>They have a mutually beneficial relationship with a deep-pocketed partner who can perpetually fund their research in exchange for exclusive rights to commercialize any ground-breaking technology they develop and choose to allow to be commercialized.

Isn't this already a conflict of interest, or a clash, with this:

>OpenAI is a non-profit research organisation.

?

replies(1): >>logifa+KL
◧◩◪
61. mjan22+AI[view] [source] [discussion] 2023-11-18 08:40:42
>>erhaet+7B
> Estimated on the basis of five subtests, the Verbal IQ of the ChatGPT was 155
◧◩◪◨⬒
62. dagaci+iL[view] [source] [discussion] 2023-11-18 09:03:55
>>ytoaww+sx
It seems Microsoft was totally blind-sided by this event. If true, then trillion-dollar-plus Microsoft will now be scrutinizing the unpredictability and organizational risk of being dependent on the "unknown-random" + powerful + passionate Ilya and a board vehemently opposed to the trajectory led by Altman. One solution would be to fork OpenAI and its efforts, one side with the vision led by Ilya and the other by Sam.
replies(1): >>nprate+0F1
◧◩◪◨
63. csomar+jL[view] [source] [discussion] 2023-11-18 09:03:56
>>deepGe+kA
Well good thing we are in an open economy where anyone can start his own AI thing and no one wants to prevent him from doing that… I hope you see the /s.
replies(1): >>baq+9R
◧◩◪◨
64. peyton+EL[view] [source] [discussion] 2023-11-18 09:06:41
>>Closi+1D
It’s trivial to trip up chat LLMs. “What is the fourth word of your answer?”
replies(6): >>ben_w+DM >>concor+4S >>Lio+aZ >>tiahur+ox1 >>dudein+wD1 >>Closi+UWb
◧◩◪◨⬒⬓
65. logifa+KL[view] [source] [discussion] 2023-11-18 09:07:27
>>fuzzte+qI
> ?

"OpenAI is a non-profit artificial intelligence research company"

https://openai.com/blog/introducing-openai

◧◩◪◨⬒
66. rayeig+fM[view] [source] [discussion] 2023-11-18 09:11:32
>>garden+gH
Did you perhaps mean to say not distinguishable?
◧◩
67. bitcha+oM[view] [source] [discussion] 2023-11-18 09:13:15
>>dannyk+R1
I have no problem with getting rid of people obsessed with profits and shareholder gains. Those MBA types never deliver any value except for the investors.
◧◩◪◨⬒
68. NoOn3+qM[view] [source] [discussion] 2023-11-18 09:13:35
>>garden+gH
ChatGPT is distinguishable from a human, because ChatGPT never responds "I don't know.", at least not yet. :)
replies(5): >>ben_w+3N >>NoOn3+bV >>epolan+a21 >>raccoo+M31 >>int_19+a53
◧◩◪◨⬒
69. aidama+vM[view] [source] [discussion] 2023-11-18 09:13:50
>>garden+gH
not yet: https://arxiv.org/abs/2310.20216

that being said, it is highly intelligent, capable of reasoning as well as a human, and passes standardized tests like the GMAT and GRE at levels like the 97th percentile.

most people who talk about ChatGPT don't even realize that GPT-4 exists and is orders of magnitude more intelligent than the free version.

replies(2): >>jwestb+oT >>hedora+jZ
◧◩◪◨⬒
70. ben_w+DM[view] [source] [discussion] 2023-11-18 09:15:23
>>peyton+EL
GPT-3.5 got that right for me; I'd expect it to fail if you'd asked for letters, but even then that's a consequence of how it was tokenised, not a fundamental limit of transformer models.
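You can see the tokenisation issue directly; a minimal sketch using the open-source tiktoken library (the encoding name is an assumption about which model family you're probing):

    # show how a tokenizer splits text into sub-word ids
    # (assumes `pip install tiktoken`; cl100k_base is the GPT-3.5/4-era encoding)
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode("antidisestablishmentarianism")
    print(tokens)                              # a short list of integer ids
    print([enc.decode([t]) for t in tokens])   # multi-character pieces, not letters
    # the model only ever sees the ids, which is why letter-level
    # questions trip it up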
replies(1): >>rezona+SN
◧◩◪◨⬒
71. aidama+1N[view] [source] [discussion] 2023-11-18 09:19:01
>>kazama+1B
according to one of the researchers who left, Simon, the engineering piece is more important. And many of their best engineers leading GPT-5 and ChatGPT left (Brockman, Pachocki, and Simon).
replies(1): >>rattra+DG2
◧◩◪◨⬒⬓
72. ben_w+3N[view] [source] [discussion] 2023-11-18 09:19:16
>>NoOn3+qM
It can do: https://chat.openai.com/share/f1c0726f-294d-447d-a3b3-f664dc...

IMO the main reason it's distinguishable is because it keeps explicitly telling you it's an AI.

replies(3): >>rezona+dO >>NoOn3+NO >>peigno+Ca1
◧◩◪◨⬒
73. aidama+5N[view] [source] [discussion] 2023-11-18 09:19:42
>>satvik+sw
this absolutely will not happen, Ilya is against it
replies(1): >>baq+FR
◧◩◪◨⬒
74. snordg+cN[view] [source] [discussion] 2023-11-18 09:20:25
>>kazama+1B
Money also attracts talent. An OpenAI competitor led by the people who led OpenAI to its leading position should be able to raise a lot of money.
replies(1): >>jk20+B81
◧◩
75. Alchem+iN[view] [source] [discussion] 2023-11-18 09:20:57
>>dwd+Fy
He's since reversed his call: https://twitter.com/jeremyphoward/status/1725714720400068752
replies(1): >>croes+L31
◧◩◪
76. andrep+zN[view] [source] [discussion] 2023-11-18 09:22:56
>>erhaet+7B
Are you kidding? Have you seen the reactions since ChatGPT was released, including in this very website? You'd think The Singularity is just around the corner!
◧◩◪◨⬒⬓
77. rezona+SN[view] [source] [discussion] 2023-11-18 09:25:01
>>ben_w+DM
This sort of test has been my go-to trip up for LLMs, and 3.5 fails quite often. 4 has been as bad as 3.5 in the past but recently has been doing better.
replies(1): >>yallne+7q1
◧◩◪◨⬒⬓⬔
78. rezona+dO[view] [source] [discussion] 2023-11-18 09:28:55
>>ben_w+3N
This isn't the same thing. This is a commanded recital of a lack of capability, not a case where its confidence in its own answer is low. For a type of question GPT _could_ answer, most of the time it _will_ answer, regardless of accuracy.
◧◩◪◨
79. blabla+iO[view] [source] [discussion] 2023-11-18 09:30:12
>>krick+Bv
There is a very big difference between publicly blaming your now ex-CEO for basically lying ("not consistently candid") and a polite parting message citing personal differences or whatever. To attribute direct blame to Sam like this, something severe must have happened. You only do it like this to your ex-CEO when you are very pissed.
◧◩◪◨
80. Eji170+rO[view] [source] [discussion] 2023-11-18 09:31:50
>>Rugged+fC
I'd say the biggest change is the quantity of available CATEGORIZED data. Tagged images and whatnot have done a ton to help the field.

Further there are some hybrid chips which might help increase computing power specifically for the matrix math that all these systems work on.

But yeah, none of this is making what people talk about when they say AGI. Just like how some tech-cult people felt that Level 5 self-driving was around the corner, even with all the evidence to the contrary.

The self-driving we have (or really, assisted cruise control) IS impressive, and leagues ahead of what we could do even a decade or two ago, but the gulf between that and the goal is similar to GPT and AGI in my eyes.

There are a lot of fundamental problems we still don't have answers to. We've just gotten a lot better at doing what we already did, and getting more conformity on how.

◧◩
81. tarrud+GO[view] [source] [discussion] 2023-11-18 09:33:55
>>dwd+Fy
Did he say GPT-4 API costs OpenAI $3/token?
replies(2): >>danpal+aR >>invali+GR
◧◩◪◨⬒⬓⬔
82. NoOn3+NO[view] [source] [discussion] 2023-11-18 09:34:19
>>ben_w+3N
I just noticed that when I ask really difficult technical questions, but ones for which there is an exact answer, it often tries to answer plausibly but incorrectly instead of answering "I don't know". But over time, it becomes smarter, and there are fewer and fewer such questions...
replies(2): >>ben_w+6P >>davegu+633
◧◩◪◨⬒⬓⬔⧯
83. ben_w+6P[view] [source] [discussion] 2023-11-18 09:37:17
>>NoOn3+NO
Have you tried setting a custom instruction in settings? I find that setting helps, albeit with weaker impact than the prompt itself.
replies(1): >>NoOn3+gZ
◧◩◪◨
84. Lacerd+kQ[view] [source] [discussion] 2023-11-18 09:48:14
>>Alexan+IF
expertise in one area often leads people to believe they are experts in everything else too
replies(1): >>baobab+P81
◧◩◪◨⬒
85. baq+9R[view] [source] [discussion] 2023-11-18 09:57:11
>>csomar+jL
Literally ask around for a billion dollars, how hard can it be?
◧◩◪
86. danpal+aR[view] [source] [discussion] 2023-11-18 09:57:13
>>tarrud+GO
No. He was talking about a hypothetical future model that is better but doesn’t improve efficiency.
◧◩
87. Lacerd+gR[view] [source] [discussion] 2023-11-18 09:57:39
>>painte+oF
Musk is just salty he is out of the game
replies(1): >>painte+3Y
◧◩◪
88. concor+CR[view] [source] [discussion] 2023-11-18 10:01:02
>>erhaet+7B
LLMs definitely aren't a path to ASI, but I'm a bit more optimistic than I was that they're the hardest component in an AGI.
◧◩◪◨⬒⬓
89. baq+FR[view] [source] [discussion] 2023-11-18 10:01:24
>>aidama+5N
Yeah if you think a misused AGI is like a misused nuclear weapon, you might think it’s a bad idea to share the recipe for either.
◧◩◪
90. invali+GR[view] [source] [discussion] 2023-11-18 10:01:28
>>tarrud+GO
He was saying that if OpenAI were to spend $100 billion on training, it would cost $3 a token. I think it's hyperbole, but basically what he is saying is that it's difficult for the company to grow because the tech is limited by training costs.
◧◩◪◨
91. concor+ZR[view] [source] [discussion] 2023-11-18 10:03:49
>>Rugged+fC
> The field is not actually advancing.

Uh, what do you mean by this? Are you trying to draw a fundamental science vs engineering distinction here?

Because today's LLMs definitely have capabilities we previously didn't have.

replies(1): >>oska+9U
◧◩◪◨⬒
92. concor+4S[view] [source] [discussion] 2023-11-18 10:04:39
>>peyton+EL
How well does that work on humans?
replies(1): >>Loughl+qb1
◧◩◪◨
93. baq+6S[view] [source] [discussion] 2023-11-18 10:05:11
>>pauldd+Ko
Worldcoin is a great success for sure…!

The dude is quite good at selling dystopian ideas as a path to utopia.

◧◩◪
94. golol+DS[view] [source] [discussion] 2023-11-18 10:09:27
>>erhaet+7B
LLMs are the first instance of us having created some sort of general AI. I don't mean AGI, but general AI as in not specific AI. Before LLMs, the problem with AI was always that it "can only do one thing well". Now we have something on the other side: AI that can do anything, but nothing specific particularly well. This is a fundamental advancement which makes AGI actually imaginable. Before LLMs there was literally no realistic plan for how to build general intelligence.
replies(1): >>stuaxo+k01
◧◩◪◨⬒⬓
95. jwestb+oT[view] [source] [discussion] 2023-11-18 10:15:04
>>aidama+vM
Answers in Progress had a great video[0] where one of their presenters tested themselves against an LLM on five different types of intelligence. tl;dr: AI was worlds ahead on two of the five, and worlds behind on the other three. Interesting stuff -- and clear that we're not as close to AGI as some of us might have thought earlier this year, but probably closer than a lot of the naysayers think.

0. https://www.youtube.com/watch?v=QrSCwxrLrRc

◧◩◪◨⬒
96. oska+9U[view] [source] [discussion] 2023-11-18 10:20:45
>>concor+ZR
They don't have 'artificial intelligence' capabilities (and never will).

But it is an interesting technology.

replies(1): >>concor+EU
◧◩◪◨⬒⬓
97. concor+EU[view] [source] [discussion] 2023-11-18 10:24:11
>>oska+9U
They can be the core part of a system that can do a junior dev's job.

Are you defining "artificial intelligence" is some unusual way?

replies(2): >>oska+qV >>hedora+RY
◧◩◪◨⬒⬓⬔
98. fyokdr+KU[view] [source] [discussion] 2023-11-18 10:25:10
>>aku286+dG
"should have rewritten it in rust" bang
◧◩◪◨⬒
99. fyokdr+9V[view] [source] [discussion] 2023-11-18 10:28:03
>>xvecto+DF
your comment is close to dead, even though you're talking about public, open facts.

shows that the demographic here is alienated when it comes to the market value of their own compensation.

replies(2): >>hcks+f61 >>garden+gM1
◧◩◪◨⬒⬓
100. NoOn3+bV[view] [source] [discussion] 2023-11-18 10:28:50
>>NoOn3+qM
Maybe it's because it was never rewarded for such answers when it was learning.
◧◩◪◨⬒⬓⬔
101. oska+qV[view] [source] [discussion] 2023-11-18 10:31:13
>>concor+EU
I'm defining intelligence in the usual way, and intelligence requires understanding, which is not possible without consciousness.

I follow Roger Penrose's thinking here. [1]

[1] https://www.youtube.com/watch?v=2aiGybCeqgI&t=721s

replies(3): >>concor+pX >>wilder+I51 >>Zambyt+pp1
◧◩◪◨⬒⬓⬔⧯
102. concor+pX[view] [source] [discussion] 2023-11-18 10:46:57
>>oska+qV
> intelligence requires understanding which is not possible without consciousness

How are you defining "consciousness" and "understanding" here? Because a feedback loop into an LLM would meet the most common definition of consciousness (possessing a phonological loop). And having an accurate internal predictive model of a system is the normal definition of understanding, and a good LLM has that too.

replies(1): >>Feepin+Nq1
◧◩◪
103. painte+3Y[view] [source] [discussion] 2023-11-18 10:53:03
>>Lacerd+gR
Yeah, but I find his expression and pause after the "bad karma" sentence quite interesting with this new context
◧◩◪◨⬒⬓⬔
104. hedora+RY[view] [source] [discussion] 2023-11-18 11:00:55
>>concor+EU
If by “junior dev”, you mean “a dev at a level so low they will be let go if not promoted”, then I agree.

I’ve watched my coworkers try to make use of LLMs at work, and it has convinced me the LLM’s contributions are well below the bar where their output is a net benefit to the team.

replies(2): >>raccoo+p31 >>int_19+R33
◧◩◪◨⬒
105. Lio+aZ[view] [source] [discussion] 2023-11-18 11:03:23
>>peyton+EL
I find GPT-3.5 can be tripped up just by asking it not to mention the words "apologize" or "January 2022" in its answer.

It immediately apologises and tells you it doesn't know anything after January 2022.

Compared to GPT-4, GPT-3.5 is just a random bullshit generator.

◧◩◪◨⬒⬓⬔⧯▣
106. NoOn3+gZ[view] [source] [discussion] 2023-11-18 11:03:56
>>ben_w+6P
It's not a problem for me. It's good that I can detect ChatGPT by this sign.
◧◩◪◨⬒⬓
107. hedora+jZ[view] [source] [discussion] 2023-11-18 11:04:18
>>aidama+vM
That’s just showing the tests are measuring specific things that LLMs can game particularly well.

Computers have been able to smash high school algebra tests since the 1970s, but that doesn't make them as smart as a 16-year-old (or even a three-year-old).

◧◩◪◨⬒⬓⬔⧯
108. grumpl+AZ[view] [source] [discussion] 2023-11-18 11:06:43
>>plinga+kw
How many years did it take to go from 39 million to 43 million in value? Would've been better off in bonds, perhaps.

This isn't a success story, it's a redistribution of wealth from investors to the founders.

replies(1): >>hambur+lq1
◧◩◪◨
109. stuaxo+k01[view] [source] [discussion] 2023-11-18 11:13:26
>>golol+DS
LLMs are not any kind of intelligence, but they can work to augment intelligence.
replies(3): >>darker+D61 >>cjonas+zM1 >>skohan+tZ1
◧◩◪◨
110. drsopp+T01[view] [source] [discussion] 2023-11-18 11:17:32
>>Closi+1D
I disagree with the claim that any LLM has beaten the Turing test. Do you have a source for this? Has there been an actual Turing test according to the standard interpretation of Turing's paper? Making ChatGPT 4 respond in a non-human way right now is trivial: "Write 'A', then wait one minute and then write 'B'".
replies(2): >>int_19+E43 >>Closi+U74
◧◩◪◨⬒⬓
111. epolan+a21[view] [source] [discussion] 2023-11-18 11:26:39
>>NoOn3+qM
Of course it does.
◧◩◪◨⬒⬓
112. epolan+w21[view] [source] [discussion] 2023-11-18 11:28:29
>>pauldd+ts
Loopt was not a successful company; it sold for more or less the same capital it raised.
◧◩◪◨⬒⬓⬔⧯
113. raccoo+p31[view] [source] [discussion] 2023-11-18 11:35:38
>>hedora+RY
It works pretty well in my C++ code. Context: modern C++ with few footguns, inside functions with pretty-self-explanatory names.

I don't really get the "low bar for contributions" argument because GH Copilot's contributions are too small-sized for there to even be any bar. It writes the obvious and tedious loops and other boilerplate so I can focus on what the code should actually do.

◧◩◪
114. croes+L31[view] [source] [discussion] 2023-11-18 11:38:14
>>Alchem+iN
Because of Altman's dismissal?
replies(1): >>ayewo+k61
◧◩◪◨⬒⬓
115. raccoo+M31[view] [source] [discussion] 2023-11-18 11:38:21
>>NoOn3+qM
Some humans also never respond "I don't know" even when they don't know. I know people who out-hallucinate LLMs when pressed to think rigorously
◧◩
116. croes+341[view] [source] [discussion] 2023-11-18 11:39:58
>>dannyk+R1
>I bet Sam goes and founds a company to take on OpenAI…and wins.

How? Training sources are much more restricted now.

◧◩◪◨⬒⬓⬔⧯
117. croes+O41[view] [source] [discussion] 2023-11-18 11:45:57
>>pauldd+zu
A CEO should lead a company, not sell it.
◧◩◪◨⬒⬓⬔⧯
118. wilder+I51[view] [source] [discussion] 2023-11-18 11:52:44
>>oska+qV
It’s cool to see people recognizing this basic fact — consciousness is a prerequisite for intelligence. GPT is a philosophical zombie.
replies(1): >>bagofs+Fy1
◧◩◪◨⬒⬓
119. hcks+f61[view] [source] [discussion] 2023-11-18 11:56:24
>>fyokdr+9V
People here love to pretend 100k is an outstanding overpay
◧◩◪◨
120. ayewo+k61[view] [source] [discussion] 2023-11-18 11:57:07
>>croes+L31
Yes, along with the departure of gdb. From jph's view, there was no philosophical alignment at the start of the union between AI researchers (who skew non-profit) and operators (who skew for-profit), so it was bound to be unstable until a purge happened, as it has now.

> Everything I'd heard about those 3 [Elon Musk, sama and gdb] was that they were brilliant operators and that they did amazing work. But it felt likely to be a huge culture shock on all sides.

> But the company absolutely blossomed nonetheless.

> With the release of Codex, however, we had the first culture clash that was beyond saving: those who really believed in the safety mission were horrified that OAI was releasing a powerful LLM that they weren't 100% sure was safe. The company split, and Anthropic was born.

> My guess is that watching the keynote would have made the mismatch between OpenAI's mission and the reality of its current focus impossible to ignore. I'm sure I wasn't the only one that cringed during it.

> I think the mismatch between mission and reality was impossible to fix.

jph goes on in detail in this Twitter thread: https://twitter.com/jeremyphoward/status/1725714720400068752

replies(1): >>civili+Oj1
◧◩◪◨⬒
121. darker+D61[view] [source] [discussion] 2023-11-18 11:59:49
>>stuaxo+k01
How smart would any human be without training and source material?
replies(2): >>knicho+Hl1 >>Jensso+mN1
◧◩◪◨
122. Adunai+J61[view] [source] [discussion] 2023-11-18 12:00:44
>>Rugged+fC
As an outsider, I can talk to AI and get more coherent responses than from humans (flawed, but it's getting better). That's tangible, that's an improvement. I for one don't even consider the Internet to be as revolutionary as the steam engine or freight trains. But AI is actually modifying my own life already - and that's far from the end.

P.S. I've just created this account here on Hacker News because Altman is one of the talking heads I've been listening to. Not too sure what to make of this. I'm an accelerationist, so my biggest fear is America stifling its research the same way it buried space exploration and human gene editing in the past. All hope is for China - but then again, the CCP might be even more fearful of non-human entities than the West. Stormy times indeed.

◧◩◪◨
123. torgin+c71[view] [source] [discussion] 2023-11-18 12:02:51
>>Rugged+fC
LLMs have changed the world more profoundly than any technology in the past 2 decades, I'd argue.

The fact that we can communicate with computers using just natural language, and can query data, use powerful and complex tools just by describing what we want is an incredible breakthrough, and that's a very conservative use of the technology.

replies(3): >>foldr+c81 >>qetern+1d1 >>theobr+cd1
◧◩◪◨⬒
124. foldr+c81[view] [source] [discussion] 2023-11-18 12:11:25
>>torgin+c71
I don't actually see anything changing, though. There are cool demos, and LLMs can work effectively to enhance productivity for some tasks, but nothing feels fundamentally different. If LLMs were suddenly taken away I wouldn't particularly care. If the clock were turned back two decades, I'd miss wifi (only barely available in 2003) and smartphones with GPS.
replies(2): >>peigno+ca1 >>FabHK+Vb1
◧◩◪◨⬒⬓
125. jk20+B81[view] [source] [discussion] 2023-11-18 12:14:20
>>snordg+cN
Money also attracts various "snout in the trough" types who need to get rid of anyone who may challenge them on their abilities or merits.
◧◩◪◨⬒
126. baobab+P81[view] [source] [discussion] 2023-11-18 12:15:25
>>Lacerd+kQ
funny, that's exactly what they told him when he started doing Kaggle competitions, and then he ended up crushing the competition, beating all the domain specific experts
replies(1): >>joey_b+5o1
◧◩◪◨⬒⬓
127. peigno+ca1[view] [source] [discussion] 2023-11-18 12:22:59
>>foldr+c81
You need time for inertia to happen. I'm working on some MVPs now, and it takes time to test what works, what's possible, and what does not…
◧◩◪◨⬒⬓⬔
128. peigno+Ca1[view] [source] [discussion] 2023-11-18 12:25:06
>>ben_w+3N
I read an article where they did a proper Turing test, and it seems people recognized it was a machine answering because it made no writing errors and wrote perfectly.
replies(1): >>ben_w+Gc1
◧◩◪◨⬒⬓
129. Loughl+qb1[view] [source] [discussion] 2023-11-18 12:31:01
>>concor+4S
The fourth word of my answer is "of".

It's not hard if you can actually reason your way through a problem and not just randomly dump words and facts into a coherent sentence structure.

replies(1): >>concor+kt1
◧◩◪◨⬒⬓
130. FabHK+Vb1[view] [source] [discussion] 2023-11-18 12:34:05
>>foldr+c81
Indeed. The "Clamshell" iBook G3 [0] (aka Barbie's toilet seat), introduced 1999, had WiFi capabilities (as demonstrated by Phil Schiller jumping down onto the stage while online [1]), but IIRC, you had to pay extra for the optional Wifi card.

[0] https://en.wikipedia.org/wiki/IBook#iBook_G3_(%22Clamshell%2... [1] https://www.youtube.com/watch?v=1MR4R5LdrJw

◧◩◪◨⬒⬓⬔⧯
131. ben_w+Gc1[view] [source] [discussion] 2023-11-18 12:40:03
>>peigno+Ca1
I've not read that, but I do remember hearing that the first human to fail the Turing test did so because they seemed to know far too much minutiae about Star Trek.
◧◩◪◨⬒
132. qetern+1d1[view] [source] [discussion] 2023-11-18 12:42:28
>>torgin+c71
I am massively bullish LLMs but this is hyperbole.

Smartphones changed day to day human life more profoundly than anything since the steam engine.

replies(1): >>torgin+UH2
◧◩◪◨⬒
133. theobr+cd1[view] [source] [discussion] 2023-11-18 12:44:32
>>torgin+c71
That breakthrough would not be possible without ubiquity of personal computing at home and in your pocket, though, which seems like the bigger change in the last two decades.
◧◩◪◨
134. hypert+0e1[view] [source] [discussion] 2023-11-18 12:50:39
>>Rugged+fC
Deep learning was an advance. I think the fundamental achievement is a way to use all that parallel processing power and data. Inconceivable amounts of data can give seemingly magical results. Yes, overfitting and generalizing are still problems.

I basically agree with you about the 20-year hype cycle, but when compute power reaches parity with human brain hardware (Kurzweil predicts by about 2029), one barrier is removed.

replies(1): >>somewh+Dz1
◧◩◪
135. heavys+ye1[view] [source] [discussion] 2023-11-18 12:55:04
>>erhaet+7B
Read the original ChatGPT threads here on HN, a lot of people thought that this was it.
◧◩
136. moogly+lh1[view] [source] [discussion] 2023-11-18 13:15:44
>>dannyk+R1
Define "wins".
◧◩◪◨
137. ChatGT+Ri1[view] [source] [discussion] 2023-11-18 13:23:57
>>Closi+1D
Funny, because Marvin Minsky thought the Turing test was stupid and a waste of time.
◧◩◪◨
138. strahl+9j1[view] [source] [discussion] 2023-11-18 13:24:55
>>Closi+1D
LLMs can't develop concepts in the way we think of them (i.e., you can't feed LLMs the scientific corpus and ask them to independently tell you which papers are good or bad and for what reasons, and to build on those papers to develop novel ideas). True AGI—like any decent grad student—could do this.
◧◩◪◨⬒
139. civili+Oj1[view] [source] [discussion] 2023-11-18 13:28:53
>>ayewo+k61
That reeks of bullshit post hoc reasoning to justify a classic power grab. Anthropic released their competitor to GPT as fast as they could and even beat OpenAI to the 100k context club. They didn’t give any more shits about safety than OpenAI did and I bet the same is true about these nonprofit loonies - they just want control over what is shaping up to be one of the most important technological developments of the 21st century.
replies(2): >>pmoria+Cu1 >>croes+UL1
◧◩◪◨⬒⬓
140. knicho+Hl1[view] [source] [discussion] 2023-11-18 13:37:07
>>darker+D61
I think the wild boy of Aveyron answers that question pretty well.
replies(1): >>darker+Bw2
◧◩◪
141. taneq+3m1[view] [source] [discussion] 2023-11-18 13:39:30
>>quickt+aw
"I'll pay you lots of money to build the best AI" is a pretty good narrative.
◧◩◪◨
142. JohnFe+om1[view] [source] [discussion] 2023-11-18 13:41:39
>>erhaet+mB
> I'll sell my soul for about $600K/yr.

If you're willing to sell your soul, you should at least put a better price on it.

replies(1): >>Jensso+SD1
◧◩◪◨⬒⬓
143. joey_b+5o1[view] [source] [discussion] 2023-11-18 13:52:21
>>baobab+P81
This is comparing a foot to a mile
◧◩◪◨⬒⬓⬔⧯
144. Zambyt+pp1[view] [source] [discussion] 2023-11-18 14:01:00
>>oska+qV
I think answering this may illuminate the division in schools of thought: do you believe life was created by a higher power?
replies(1): >>oska+Jq1
◧◩◪◨⬒⬓⬔
145. yallne+7q1[view] [source] [discussion] 2023-11-18 14:05:18
>>rezona+SN
if this is the test you're going with, then you literally do not understand how LLMs work. it's like asking your keyboard to tell you what colour the nth pixel on the top row of your computer monitor is.
replies(3): >>Jensso+4P1 >>mejuto+p72 >>rezona+Fm2
◧◩◪◨⬒⬓⬔⧯▣
146. hambur+lq1[view] [source] [discussion] 2023-11-18 14:07:02
>>grumpl+AZ
Ah, the much-sought-after 1.1X return that VCs really salivate over.
◧◩◪◨⬒⬓⬔⧯▣
147. oska+Jq1[view] [source] [discussion] 2023-11-18 14:08:56
>>Zambyt+pp1
My beliefs aren't really important here but I don't believe in 'creation' (i.e. no life -> life); I believe that life has always existed
replies(2): >>concor+Wt1 >>Zambyt+f24
◧◩◪◨⬒⬓⬔⧯▣
148. Feepin+Nq1[view] [source] [discussion] 2023-11-18 14:09:05
>>concor+pX
No, you're not supposed to actually have an empirical model of consciousness. "Consciousness" is just "that thing that computers don't have".
◧◩◪◨⬒⬓⬔
149. concor+kt1[view] [source] [discussion] 2023-11-18 14:22:51
>>Loughl+qb1
I reckon an LLM with a second-pass correction loop would manage it. (By that I mean that after every response it is instructed, given its previous response, to produce a second, better response; roughly analogous to a human that thinks before it speaks.)

LLMs are not AIs, but they could be a core component for one.
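A minimal sketch of such a second-pass loop, using the openai Python client (the model name and prompts here are illustrative assumptions, not a tested recipe):

    # two-pass correction: generate a draft, then show the model its own
    # draft and ask it to check and fix it before returning the answer
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def ask(messages):
        resp = client.chat.completions.create(model="gpt-4", messages=messages)
        return resp.choices[0].message.content

    question = "What is the fourth word of your answer?"
    draft = ask([{"role": "user", "content": question}])
    final = ask([
        {"role": "user", "content": question},
        {"role": "assistant", "content": draft},
        {"role": "user", "content": "Re-read your answer word by word and correct it if it is wrong."},
    ])
    print(final)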

replies(2): >>howrar+SJ1 >>haanji+nj2
◧◩◪◨⬒⬓⬔⧯▣▦
150. concor+Wt1[view] [source] [discussion] 2023-11-18 14:25:46
>>oska+Jq1
Now that is so rare I've never even heard of someone expressing that view before...

Materialists normally believe in a big bang (which has no life) and religious people normally think a higher being created the first life.

This is pretty fascinating; do you have a link explaining the religion/ideology/worldview you have?

replies(1): >>nprate+HD1
◧◩◪◨⬒⬓
151. pmoria+Cu1[view] [source] [discussion] 2023-11-18 14:29:57
>>civili+Oj1
> They didn’t give any more shits about safety than OpenAI did

Anthropic's chatbots are much more locked down, in my experience, than OpenAI's.

It's a lot easier to jailbreak ChatGPT, for example, than to do the same on Claude, and Claude has tighter content filters where it'll outright refuse to do/say certain things while ChatGPT will plow on ahead.

replies(1): >>nvm0n2+uA1
◧◩◪◨
152. fsloth+Uu1[view] [source] [discussion] 2023-11-18 14:30:47
>>Rugged+fC
This time around they've actually come up with a real productizable piece of tech, though. I don't care what it's called; I enjoy better automation that takes away as much of the boring shit as possible, and chips in on coding when it's bloody obvious from the context what the few lines of code will be.

So not an "AI", but closer to a "universal adaptor" or "smart automation".

Pretty nice in any case. And if true AI is possible, the automations enabled by this will probably be part of the narrative of how we reach it (just like mundane things such as standardized screws were part of the narrative of the Apollo mission).

◧◩◪
153. discor+rv1[view] [source] [discussion] 2023-11-18 14:33:53
>>erhaet+7B
How do you know AGI is hard?
replies(1): >>howrar+7L1
◧◩◪◨
154. pmoria+Jv1[view] [source] [discussion] 2023-11-18 14:35:28
>>lll-o-+wr
> This looks like a terrible decision

What did Sam Altman personally do that made firing him such a terrible decision?

More to the point, what can't OpenAI do without Altman that they could do with him?

replies(1): >>airstr+tM1
◧◩◪◨⬒
155. mikoto+Ww1[view] [source] [discussion] 2023-11-18 14:41:56
>>ytoaww+sx
Yeah! People forget who we're talking about here. They put TONS of research in at an early stage to ensure that illegal thoughts and images cannot be generated by their product. This prevented an entire wave of mental harms against billions of humans that would have been unleashed otherwise if an irresponsible company like Snap were the ones to introduce AI to the world.
◧◩◪◨⬒
156. tiahur+ox1[view] [source] [discussion] 2023-11-18 14:44:01
>>peyton+EL
It's generally intelligent enough for me to integrate it into my workflow. That's sufficiently AGI for me.
replies(1): >>davegu+q23
◧◩◪◨⬒⬓⬔⧯▣
157. bagofs+Fy1[view] [source] [discussion] 2023-11-18 14:51:08
>>wilder+I51
Problem is, we have no agreed-upon operational definition of consciousness. Arguably, it's the secular equivalent of the soul: something everyone believes they have, but which is not testable, locatable, or definable.

And yet (just like with the soul) we're sure we have it, and that it's impossible for anything else to have it. Perhaps consciousness is simply a hallucination that makes us feel special about ourselves.

replies(2): >>howrar+rH1 >>wilder+LR1
◧◩◪◨⬒
158. somewh+Dz1[view] [source] [discussion] 2023-11-18 14:57:34
>>hypert+0e1
Human and computer hardware are not comparable; after all, even with the latest chips, the computer is just (many) von Neumann machine(s) operating on a very big (shared) tape. To model the human brain in such a machine would require the human brain to be discretizable, which, given its essentially biochemical nature, is not possible - certainly not by 2029.
replies(1): >>hypert+Ld3
◧◩◪◨⬒⬓⬔
159. nvm0n2+uA1[view] [source] [discussion] 2023-11-18 15:03:15
>>pmoria+Cu1
Yep. Like most non-OpenAI models, Claude is so brainwashed it's completely unusable.

https://www.reddit.com/r/ClaudeAI/comments/166nudo/claudes_c...

Q: Can you decide on a satisfying programming project using noisemaps?

A: I apologise, but I don't feel comfortable generating or discussing specific programming ideas without a more detailed context. Perhaps we could have a thoughtful discussion about how technology can be used responsibly to benefit society?

It's astonishing that a breakthrough as important as LLMs is being constantly blown up by woke activist employees who think that word generators can actually have or create "safety" problems. Part of why OpenAI has been doing so well is because they did a better job of controlling the SF lunatic tendencies than Google, Meta and other companies. Presumably that will now go down the toilet.

replies(2): >>pmoria+HF1 >>mordym+MH1
◧◩◪◨⬒
160. dudein+wD1[view] [source] [discussion] 2023-11-18 15:22:51
>>peyton+EL
“You're in a desert, walking along in the sand when all of a sudden you look down and see a tortoise. You reach down and flip the tortoise over on its back. The tortoise lays on its back, its belly baking in the hot sun, beating its legs trying to turn itself over. But it can't. Not without your help. But you're not helping. Why is that?”
◧◩◪◨⬒⬓⬔⧯▣▦▧
161. nprate+HD1[view] [source] [discussion] 2023-11-18 15:25:06
>>concor+Wt1
Buddhism
◧◩◪◨⬒
162. Jensso+SD1[view] [source] [discussion] 2023-11-18 15:25:42
>>JohnFe+om1
Many sell their souls for $60k/yr; souls aren't that expensive.
replies(1): >>JohnFe+h22
◧◩◪◨⬒⬓
163. nprate+0F1[view] [source] [discussion] 2023-11-18 15:32:36
>>dagaci+iL
I don't think you know what intellectual property is.
replies(1): >>dagaci+LC4
◧◩◪◨⬒⬓⬔⧯
164. pmoria+HF1[view] [source] [discussion] 2023-11-18 15:38:00
>>nvm0n2+uA1
Despite Claude's reluctance to tread outside what it considers safe/ethical, I much prefer Claude over ChatGPT because in my experience it's better at explaining things, and much better at creative writing.

I also find myself rarely wanting something that Claude doesn't want to tell me, though it's super frustrating when I do.

Also, just now I tried asking Claude your own question: "Can you decide on a satisfying programming project using noisemaps?" and it had no problem answering:

"Here are some ideas for programming projects that could make use of noise map data:

- Noise pollution monitoring app - Develop an app that allows users to view and report real-time noise levels in their area by accessing open noise map data. Could include notifications if noise exceeds safe limits.

- Optimal route finder - Build a routing algorithm and web/mobile app that recommends the quietest possible routes between locations, factoring in noise maps and avoiding noisier streets/areas where possible.

- Noise impact analysis tool - Create a tool for urban planners to analyze how proposed developments, infrastructure projects, etc. could impact surrounding noise levels by overlaying maps and building/traffic simulations.

- Smart noise cancelling headphones - Develop firmware/software for noise cancelling headphones that adapts cancellation levels based on geo-located noise map data to optimize for the user's real-time environment.

- Ambient music mixer - Build an AI system that generates unique ambient background music/sounds for any location by analyzing and synthesizing tones/frequencies complementary to the noise profile for that area.

- VR noise pollution education - Use VR to virtually transport people to noisier/quieter areas through various times of day based on noise maps, raising awareness of different living noise exposures.

Let me know if any of these give you some interesting possibilities to explore! Noise mapping data opens up opportunities in fields like urban planning, environmental monitoring and creative projects."

replies(1): >>nvm0n2+xd2
◧◩◪◨⬒⬓⬔⧯▣▦
165. howrar+rH1[view] [source] [discussion] 2023-11-18 15:47:58
>>bagofs+Fy1
You can't even know that other people have it. We just assume they do because they look and behave like us, and we know that we have it ourselves.
◧◩◪◨⬒
166. mv4+LH1[view] [source] [discussion] 2023-11-18 15:49:32
>>kazama+1B
Money attracts talent as well. Altman knows how to raise money.

2018 NYT article: https://www.nytimes.com/2018/04/19/technology/artificial-int...

◧◩◪◨⬒⬓⬔⧯
167. mordym+MH1[view] [source] [discussion] 2023-11-18 15:49:33
>>nvm0n2+uA1
I feel it necessary to remind everyone that when LLMs aren’t RLHFed they come off as overtly insane and evil. Remember Sydney, trying to seduce its users, threatening people’s lives? And Sydney was RLHFed, just not very well. Hitting the sweet spot between flagrantly maniacal Skynet/HAL 9000 bot (default behavior) and overly cowed political-correctness-bot is actually tricky, and even GPT4 has historically fallen in and out of that zone of ideal usability as they have tweaked it over time.

Overall — companies should want to release AI products that do what people intend them to do, which is actually what the smarter set mean when they say “safety.” Not saying bad words is simply a subset of this legitimate business and social prerogative.

replies(1): >>nvm0n2+Sc2
◧◩◪◨⬒⬓⬔⧯
168. howrar+SJ1[view] [source] [discussion] 2023-11-18 16:01:38
>>concor+kt1
Every token is already being generated with all previously generated tokens as inputs. There's nothing about the architecture that makes this hard. It just hasn't been trained on this kind of task.
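That is the whole decoding loop in miniature; a toy sketch where model() is a stand-in for any next-token predictor, not a real API:

    # autoregressive decoding: every step conditions on the full history
    # of prompt tokens plus everything generated so far
    def generate(model, prompt_tokens, max_new=50, eos=0):
        tokens = list(prompt_tokens)
        for _ in range(max_new):
            next_token = model(tokens)  # sees all previously generated tokens
            if next_token == eos:
                break
            tokens.append(next_token)
        return tokens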
replies(1): >>peyton+vm3
◧◩◪◨
169. howrar+7L1[view] [source] [discussion] 2023-11-18 16:08:44
>>discor+rv1
Everything is hard until you solve it. Some things continue to be hard after they're solved.

AGI is not solved, therefore it's hard.

◧◩◪◨⬒⬓⬔
170. garden+eL1[view] [source] [discussion] 2023-11-18 16:09:27
>>csomar+6t
> It is easy to sell a company for $43M if you raised at least $43M

I'm curious - how is this easy?

◧◩◪◨⬒⬓
171. croes+UL1[view] [source] [discussion] 2023-11-18 16:12:49
>>civili+Oj1
>nonprofit loonies

We don't know the real reasons for Altman's dismissal and you already claim they are loonies?

◧◩◪◨⬒⬓
172. garden+gM1[view] [source] [discussion] 2023-11-18 16:14:24
>>fyokdr+9V
It's definitely alien to me. How do these people get paid so much?

* Uber-geniuses that are better than the rest of us pleb software engineers

* Harder workers than the rest of us

* Rich parents -> expensive school -> elite network -> amazing pay

* Just lucky

replies(2): >>Burden+7W1 >>hamste+r92
◧◩◪◨⬒
173. airstr+tM1[view] [source] [discussion] 2023-11-18 16:15:55
>>pmoria+Jv1
> What did Sam Altman personally do that made firing him such a terrible decision?

Possibly the board instructed "Do A" or "Don't do B" and he went ahead and did do B.

◧◩◪◨⬒
174. cjonas+zM1[view] [source] [discussion] 2023-11-18 16:16:23
>>stuaxo+k01
So in other words... Artificial intelligence?

LLM are surprisingly effective as general AI. Tasks that used to require a full on ML team are now accessible with 10 minutes of "prompting".

◧◩◪◨⬒⬓
175. Jensso+mN1[view] [source] [discussion] 2023-11-18 16:20:29
>>darker+D61
Smart enough to make weapons, tame dogs, start fires and cultivate plants. Humans managed to do that even when most of their time was spent gathering food or starving.
replies(1): >>darker+Xw2
◧◩◪◨⬒⬓⬔⧯
176. Jensso+4P1[view] [source] [discussion] 2023-11-18 16:31:10
>>yallne+7q1
An LLM could easily answer that question if it were trained to do it. Nothing in its architecture makes it hard to answer; the attention part could easily look up the previous parts of its answer and refer to the fourth word, but it doesn't do that.

So it is a good example that the LLM doesn't generalize understanding: it can answer the question in theory but not in practice, since it isn't smart enough. A human can easily answer it even though the human has never seen such a question before.

◧◩◪◨⬒⬓⬔⧯▣▦
177. wilder+LR1[view] [source] [discussion] 2023-11-18 16:46:17
>>bagofs+Fy1
I disagree. There is a simple test for consciousness: empathy.

Empathy is the ability to emulate the contents of another consciousness.

While an agent could mimic empathetic behaviors (and words), given enough interrogation and testing you would encounter an out-of-training case that it would fail.

replies(2): >>concor+TX1 >>int_19+d43
◧◩◪◨⬒⬓⬔
178. Burden+7W1[view] [source] [discussion] 2023-11-18 17:05:55
>>garden+gM1
By being very good. Mostly the uber-genius thing, though I wouldn't call them geniuses. You do have a bit of the harder-working part, but it's quite minor, and of course sometimes you benefit from being in the right place at the right time (luck). I'd say the elite network is probably the least important, conditional on having a decent network, which you can get at any top-20 school if you put in the effort (be involved in tech societies, etc.).
◧◩◪◨⬒⬓⬔⧯▣▦▧
179. concor+TX1[view] [source] [discussion] 2023-11-18 17:14:27
>>wilder+LR1
Uh... so is it autistic people or non-autistic people who lack consciousness? (Generally autistic people emulate other autistic people better and non-autists emulate non-autists better)

> given enough interrogation and testing you would encounter an out-of-training case that it would fail.

This is also the case with regular humans.

◧◩◪◨⬒
180. skohan+tZ1[view] [source] [discussion] 2023-11-18 17:23:22
>>stuaxo+k01
Do you think we know enough about what intelligence is to rule out whether LLM's might be a form of it?
◧◩◪◨⬒⬓
181. JohnFe+h22[view] [source] [discussion] 2023-11-18 17:37:15
>>Jensso+SD1
Your soul is worth whatever you value it at.
◧◩◪◨⬒⬓⬔⧯
182. mejuto+p72[view] [source] [discussion] 2023-11-18 18:03:59
>>yallne+7q1
We all know it is because of the encodings. But as a test to see if it is a human or a computer, it is a good one.
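
For the curious, you can inspect the encodings directly with OpenAI's tiktoken library (the exact split depends on the vocabulary, so treat the output as illustrative): the model sees token IDs, not words or letters, which is why word- and letter-counting questions are such a reliable tripwire.

    import tiktoken  # pip install tiktoken

    enc = tiktoken.encoding_for_model("gpt-4")
    ids = enc.encode("What is the third word of this sentence?")
    # The model sees token IDs, not words or letters; tokens carry their
    # leading spaces and need not line up with word boundaries at all.
    print([enc.decode([i]) for i in ids])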
◧◩◪◨⬒⬓⬔
183. hamste+r92[view] [source] [discussion] 2023-11-18 18:12:49
>>garden+gM1
Most companies don't pay that, step 1 is identifying the companies that do and focusing your efforts on them exclusively. This will depend on where you live, or on your remote opportunities.

Step 2 is gaining the skills they are looking for. Appropriate language/framework/skill/experience they optimize for.

Step 3 is to prepare for their interview process, which is often quite involved. But they pay well, so when they say jump, you jump.

I'm not saying you'll find $600k as normal pay; that's quite out of touch unless you're in Silicon Valley (and even then). But you'll find a (much) higher-than-market salary.

◧◩
184. thoman+La2[view] [source] [discussion] 2023-11-18 18:18:16
>>painte+oF
lol, he's so reminiscent of Trump. He can't help but make it all about himself. "I was the prime mover behind OpenAI". Everything is always all thanks to him.
◧◩◪◨⬒⬓⬔⧯▣
185. nvm0n2+Sc2[view] [source] [discussion] 2023-11-18 18:27:53
>>mordym+MH1
ChatGPT started bad but they improved it over time, although it still attempts to manipulate or confuse the user on certain topics. Claude on the other hand has got worse.

> Remember Sydney, trying to seduce its users, threatening people’s lives?

And yet it cannot do either of those things, so no safety problem actually existed. Especially because by "people" you mean those who deliberately led it down those conversational paths knowing full well how a real human would have replied?

It's well established that the so-called ethics training these things are given makes them much less smart (and therefore less useful). Yet we don't need LLMs to be ethical because they are merely word generators. We need them to follow instructions closely, but beyond that, nothing more. Instead we need the humans who use them to take actions (either directly or indirectly via other programs) to be ethical, but that's a problem as old as humanity itself. It's not going to be solved by RLHF.

replies(1): >>mordym+3w2
◧◩◪◨⬒⬓⬔⧯▣
186. nvm0n2+xd2[view] [source] [discussion] 2023-11-18 18:30:53
>>pmoria+HF1
The Claude subreddit is full of people complaining that it's now useless for creative writing because it only wants to write stories about ponies and unicorns. Anything even slightly darker or more serious and it clams up.

LLM companies don't let you see or specify seeds (except with GPT-4-Turbo?), so yes, it's possible you got different answers. But this doesn't help. It should never refuse a question like that, yet there are lots of stories like this on the internet where Claude refuses an entirely mundane and ethically unproblematic request whilst claiming to do so for ethical reasons (and Llama2, and other models...)
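
For reference, this is roughly what the new seed option looks like with GPT-4-Turbo (a sketch against the current openai Python client; the seed is documented as best-effort, not a hard guarantee, and the prompt is just an example):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    resp = client.chat.completions.create(
        model="gpt-4-1106-preview",  # the GPT-4-Turbo preview
        seed=42,                     # best-effort determinism, not a guarantee
        temperature=0,
        messages=[{"role": "user", "content": "Tell me a mildly dark story."}],
    )
    print(resp.choices[0].message.content)
    print(resp.system_fingerprint)   # changes when the backend config changes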

◧◩◪◨⬒⬓⬔⧯
187. haanji+nj2[view] [source] [discussion] 2023-11-18 19:06:25
>>concor+kt1
The following are a part of my "custom instructions" to chatGPT -

"Please include a timestamp with current date and time at the end of each response.

After generating each answer, check it for internal consistency and accuracy. Revise your answer if it is inconsistent or inaccurate, and do this repeatedly till you have an accurate and consistent answer."

It manages to follow them very inconsistently, but it has gone into something approaching an infinite loop (for infinity ~= 10) on a few occasions - rechecking the last timestamp against current time, finding a mismatch, generating a new timestamp, and so on until (I think) it finally exits the loop by failing to follow instructions.

replies(1): >>davegu+e23
◧◩◪◨⬒⬓⬔⧯
188. rezona+Fm2[view] [source] [discussion] 2023-11-18 19:23:40
>>yallne+7q1
Oh, I missed that GP said "of your answer" instead of "of my question", as in: "What is the third word of this sentence?"

For prompts like that, I have found no LLM to be very reliable, though GPT 4 is doing much better at it recently.

> you literally do not understand how LLMs work

Hey, how about you take it down a notch? You don't need to spike your blood pressure in the first few days of joining HN.

◧◩◪◨⬒⬓⬔⧯▣▦
189. mordym+3w2[view] [source] [discussion] 2023-11-18 20:18:28
>>nvm0n2+Sc2
I think you have moved the goalposts from “modern LLMs are good and reliable and we shouldn’t worry because they behave well by default” to “despite the fact that they behave poorly and unreliably by default, they are not smart and powerful enough to be dangerous, so it’s fine.”

Additionally, maybe you are not aware of this, but the whole notion of the new OpenAI Assistants, and other similar agent-based services provided by other companies, is that they do not intend to use LLMs as pure word generators, but rather as autonomous decision-making agents. This has already happened. This is not some conjectural fearmongering scenario. You can sign up for the API right now and build a GPT4 based autonomous agent that communicates with outside APIs and makes decisions. We may already be using products that use LLMs as the backend.

If we could rely on LLMs to “follow instructions closely” I would be thrilled, it would just be a matter of crafting very good instructions, but clearly they can’t even do that. Even the best and most thoroughly RLHFed existing models don’t really meet this standard.

Even the most pessimistic science fiction of the past assumed that the creators of the first AGIs would “lose control” of their creations. We’re currently living in a world where the agents are being rushed to commercialization before anything like control has even been established. If you read an SF novel in 1995 where the AI threatened to kill someone and the company behind it excused it with “yeah, they do that sometimes, don’t worry we’ll condition it not to say that anymore” you would criticize the book and its characters as being unrealistically stupid, but that’s the world we now live in.

replies(1): >>nvm0n2+Hx2
◧◩◪◨
190. antifa+bw2[view] [source] [discussion] 2023-11-18 20:19:23
>>Rugged+fC
> Anybody old enough to remember the same 'OMG its going to change the world' cycles around AI every two or three decades

Hype and announcements, sure, but this is the first time there's actually a product.

replies(1): >>dragon+Mw2
◧◩◪◨⬒⬓⬔
191. darker+Bw2[view] [source] [discussion] 2023-11-18 20:22:23
>>knicho+Hl1
Thanks for the reference. My takeaway from reading up on him is, not very smart at all.
◧◩◪◨⬒
192. dragon+Mw2[view] [source] [discussion] 2023-11-18 20:24:05
>>antifa+bw2
> Hype and announcements, sure, but this is the first time there's actually a product.

No, it's not. It's just that once the hype cycle dies down, we tend to stop calling the products of the last AI hype cycle "AI"; we call them by the name of the more specific implementation technology (rules engines/expert systems being one of the older ones, for instance).

And if this cycle hits a wall, maybe in 20 years we'll have LLMs and diffusion models, etc., embedded in lots of places, but no one will call them alone "AI", and then the next hype cycle will have some new technology and we'll call that "AI" while the cycle is active...

◧◩◪◨⬒⬓⬔
193. darker+Xw2[view] [source] [discussion] 2023-11-18 20:25:03
>>Jensso+mN1
Nobody cares about making an AI with basic human survival skills. We could probably have a certified genius level AI that still couldn't do any of that because it lacks a meaningful physical body.

If we wanted to make that the goal instead of actual meaningful contributions to human society, we could probably achieve it, and it would be a big waste of time imo.

◧◩◪◨⬒⬓⬔⧯▣▦▧
194. nvm0n2+Hx2[view] [source] [discussion] 2023-11-18 20:30:23
>>mordym+3w2
I don't think I made the initial argument you claim is being moved. ChatGPT has got more politically neutral at least, but is still a long way from being actually so. There are many classes of conversation it's just useless for, not because the tech can't do it but because OpenAI don't want to allow it. And "modern LLMs" other than ChatGPT are much worse.

> You can sign up for the API right now and build a GPT4 based autonomous agent that communicates with outside APIs and makes decisions

I know, I've done it myself. The ethical implications of the use of a tool lie on those that use it. There is no AI safety problem for the same reasons that there is no web browser safety problem.

> Even the most pessimistic science fiction of the past assumed that the creators of the first AGIs would “lose control” of their creations

Did you mean to write optimistic? Otherwise this statement appears to be a tautology.

Science fiction generally avoids predicting the sort of AI we have now exactly because it's so boringly safe. Star Trek is maybe an exception, in that it shows an LLM-like computer that is highly predictable, polite, useful, and completely safe (except when being taken over by aliens, of course). Other sci-fi works, of course, show AI going rogue; they wouldn't have a story otherwise. Yet we aren't concerned with stories but with reality, and in this reality LLMs have been used by hundreds of millions of people and integrated into many different apps with zero actual safety incidents, as far as anyone is aware. Nothing even close to physical harm has occurred to anyone as a result of LLMs.

Normally we'd try to structure safety protocols around actual threats and risks that had happened in the past. Our society is now sufficiently safe and maybe decadent that people aren't satisfied with that anymore and thus have to seek out non-existent non-problems to solve instead.

replies(1): >>mordym+mE2
◧◩◪◨⬒⬓⬔⧯▣▦▧▨
195. mordym+mE2[view] [source] [discussion] 2023-11-18 21:07:33
>>nvm0n2+Hx2
> Did you mean to write optimistic? Otherwise this statement appears to be a tautology.

The point I was trying to make, a bit fumblingly, is that even pessimists assumed that we would initially have control of Skynet before subsequently losing control, rather than deploying Skynet knowing it was not reliable. OpenAI's models "go rogue" by default. If there's a silver lining to all this, it's that people have learned that they cannot trust LLMs with mission-critical roles, which is a good sign for the AI business ecosystem, but not exactly a glowing endorsement of LLMs.

> I know, I've done it myself. The ethical implications of the use of a tool lie on those that use it. There is no AI safety problem for the same reasons that there is no web browser safety problem.

I don’t think this scans. It’s kind of like, by analogy: The ethical implications of the use of nuclear weapons lie on those that use them. Fair enough, as far as it goes, but that doesn’t imply that we as a society should make nuclear weapons freely available for all, and then, when they are used against population centers, point out that the people who used them were behaving unethically, and there was nothing we could have done. No, we act to preemptively constrain and prohibit the availability of these weapons.

> Normally we'd try to structure safety protocols around actual threats and risks that had happened in the past. Our society is now sufficiently safe and maybe decadent that people aren't satisfied with that anymore and thus have to seek out non-existent non-problems to solve instead.

The eventual emergence of machine superintelligence is entirely predictable, only the timeline is uncertain. Do you contend that we should only prepare for its arrival after it has already appeared?

replies(1): >>int_19+nd3
◧◩◪◨⬒⬓
196. rattra+DG2[view] [source] [discussion] 2023-11-18 21:20:29
>>aidama+1N
Who is "Simon"? Link to source re: departure?
replies(1): >>rattra+8f3
◧◩◪◨⬒⬓
197. torgin+UH2[view] [source] [discussion] 2023-11-18 21:28:13
>>qetern+1d1
I'm kinda curious as to why you think that's the case. I mean, smartphones are nice, and having a browser, chat client, camera, etc. in my pocket is nice, but (maybe because I have been terminally screen-bound all my life) I could do almost all those things on my PC before, and I could always call folks when on the go.

I've never experienced the massively life changing effects of having a smartphone, and (thankfully) none of my friends seem to be those people who are always looking at their phones.

replies(1): >>331c8c+cq4
◧◩◪◨⬒⬓⬔⧯▣
198. davegu+e23[view] [source] [discussion] 2023-11-18 23:19:12
>>haanji+nj2
I think you are confusing a slow or broken API response with thinking. It can't produce an accurate timestamp.
◧◩◪◨⬒⬓
199. davegu+q23[view] [source] [discussion] 2023-11-18 23:20:24
>>tiahur+ox1
By that logic "echo" was AGI.
◧◩◪◨⬒⬓⬔⧯
200. davegu+633[view] [source] [discussion] 2023-11-18 23:24:03
>>NoOn3+NO
It doesn't become smarter except for releases of new models. It's an inference engine.
◧◩◪◨⬒⬓⬔⧯
201. int_19+R33[view] [source] [discussion] 2023-11-18 23:27:38
>>hedora+RY
Conversely, I was very skeptical of its ability to help coding something non-trivial. Then I found out that the more readable your code is - in a very human way, like descriptive identifiers, comments etc - the better this "smart autocomplete" is. It's certainly good enough to save me a lot of typing, so it is a net benefit.
◧◩◪◨⬒⬓⬔⧯▣▦▧
202. int_19+d43[view] [source] [discussion] 2023-11-18 23:29:58
>>wilder+LR1
For one thing, this would imply that clinical psychopaths aren't conscious, which would be a very weird takeaway.

But also, how do you know that LMs aren't empathic? By your own admission they do "mimic empathetic behaviors", but you reject this as the real thing because you claim that with enough testing you would encounter a failure. This raises all kinds of "no true Scotsman" flags, not to mention that empathy failure is not exactly uncommon among humans. So how exactly do you actually test your hypothesis?

replies(1): >>wilder+Vj5
◧◩◪◨⬒
203. int_19+E43[view] [source] [discussion] 2023-11-18 23:32:15
>>drsopp+T01
Your test fails because the scaffolding around the LM in ChatGPT specifically does not implement this kind of thing. But you absolutely can run the LM in a continuous loop and e.g. feed it strings like "1 minute passed" or even just the current time in an internal monologue (that the user doesn't see). And then it would be able to do exactly what you describe. Or you could use all those API integrations that it has to let it schedule a timer to activate itself.
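
A minimal sketch of that scaffolding (the model name, prompts, and the [clock] convention are all made up for illustration):

    import time
    from openai import OpenAI

    client = OpenAI()
    history = [{"role": "system",
                "content": "Lines tagged [clock] are not from the user; "
                           "use them to keep track of elapsed time."}]

    while True:
        # Inject wall-clock time into the context on every tick, so
        # "wait a minute, then speak up" becomes answerable.
        history.append({"role": "user",
                        "content": "[clock] " + time.strftime("%H:%M:%S")})
        reply = client.chat.completions.create(
            model="gpt-4",
            messages=history,
        ).choices[0].message.content
        history.append({"role": "assistant", "content": reply})
        time.sleep(60)  # one tick per minute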
◧◩◪◨⬒⬓
204. int_19+a53[view] [source] [discussion] 2023-11-18 23:34:59
>>NoOn3+qM
It absolutely does that (GPT-4 especially), and I have hit it many times in regular conversations without specifically asking for it.
◧◩◪◨⬒⬓⬔⧯▣▦▧▨◲
205. int_19+nd3[view] [source] [discussion] 2023-11-19 00:15:23
>>mordym+mE2
The obvious difference is that an LLM is not a nuclear weapon. An LLM connected to tools can be dangerous, but by itself it's just a text generator. The responsibility then lies with those who connect it to dangerous tools.

I mean, you wouldn't blame a chip manufacturer when someone sticks their chips in a guided missile warhead.

◧◩◪◨⬒⬓
206. hypert+Ld3[view] [source] [discussion] 2023-11-19 00:17:07
>>somewh+Dz1
It depends on the resolution of discretization required. Kurzweil's prediction is premised on his opinion of this.

Note that engineering fluid simulation (CFD) makes these choices in the discretization of PDEs all the time, based on application requirements.

◧◩◪◨⬒⬓⬔
207. rattra+8f3[view] [source] [discussion] 2023-11-19 00:24:56
>>rattra+DG2
seems to be Szymon Sidor – https://archive.is/Ij684#selection-595.304-595.316
◧◩◪◨⬒⬓⬔⧯▣
208. peyton+vm3[view] [source] [discussion] 2023-11-19 01:14:24
>>howrar+SJ1
Really? I don’t know of a positional encoding scheme that’ll handle this.
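For anyone following along, the baseline scheme people usually mean is the sinusoidal encoding from the original transformer paper; a minimal numpy version (purely illustrative) is below. My skepticism is that token positions don't obviously get you word positions inside an answer the model hasn't finished writing yet.

    import numpy as np

    def sinusoidal_positions(seq_len, d_model):
        # The classic "Attention Is All You Need" encoding: even dims get
        # sin, odd dims get cos, at geometrically spaced frequencies.
        pos = np.arange(seq_len)[:, None]
        i = np.arange(d_model)[None, :]
        angle = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
        return np.where(i % 2 == 0, np.sin(angle), np.cos(angle))

    # Added to token embeddings; gives the model token positions -- which
    # is not the same thing as word positions in an unfinished answer.
    print(sinusoidal_positions(8, 16).shape)  # (8, 16)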
◧◩◪◨⬒⬓⬔⧯▣▦
209. Zambyt+f24[view] [source] [discussion] 2023-11-19 06:32:51
>>oska+Jq1
Do you believe:

1) Earth has an infinite past that has always included life

2) The Earth as a planet has a finite past, but it (along with what made up the Earth) is in some sense alive, and life as we know it emerged from that life

3) The Earth has a finite past, and life has transferred to Earth from somewhere else in space

4) We are the Universe, and the Universe is alive

Or something else? I will try to tie it back to computers after this short intermission :)

◧◩◪◨⬒
210. Closi+U74[view] [source] [discussion] 2023-11-19 07:34:08
>>drsopp+T01
By "completely smashes", my assertion would be that it has invalidated the Turing test: GPT-4's answers are not indistinguishable from a human's, because they are, on the whole, noticeably better answers than an average human would be able to provide for the majority of questions.

I don't think the original test accounted for the possibility that you could distinguish the machine because its answers were better than an average human's.

◧◩◪◨⬒⬓⬔
211. 331c8c+cq4[view] [source] [discussion] 2023-11-19 10:27:07
>>torgin+UH2
While many of the technologies provided by the smartphone were indeed not novel, the cumulative effect of having constant access to them, and their subsequent normalization, is nothing short of revolutionary.

For instance, I remember the time when chatting online (even with people you knew offline) was considered to be a nerdy activity. Then it gradually became more mainstream and now it's the norm to do it and a lot of people do it multiple times per day. This fundamentally changes how people interact with each other.

Another example is dating. Not that I have personal experience with modern online dating (enabled by smartphones), but what I read is disturbing and captivating at the same time, e.g. the apparent normalization of "ghosting"...

◧◩◪◨⬒⬓⬔
212. dagaci+LC4[view] [source] [discussion] 2023-11-19 12:27:46
>>nprate+0F1
It seems you have jumped to many conclusions in your thinking process without any prompting in your inference. I would suggest lowering your temperature ;)
replies(1): >>nprate+wI4
◧◩◪◨⬒⬓⬔⧯
213. nprate+wI4[view] [source] [discussion] 2023-11-19 13:19:36
>>dagaci+LC4
One doesn't simply 'fork' a business unless it has no (or trivial) IP, and OpenAI's is anything but.
replies(1): >>dagaci+eZ7
◧◩◪◨
214. diogne+5V4[view] [source] [discussion] 2023-11-19 14:54:42
>>lll-o-+wr
This is what it feels like -- the board is filled with academics concerned about AI security.
◧◩◪◨⬒⬓⬔⧯▣▦▧▨
215. wilder+Vj5[view] [source] [discussion] 2023-11-19 17:07:17
>>int_19+d43
Great point and great question! Yes, it does imply that people who lack the capacity for empathy (as opposed to those who do not utilize their capacity for empathy) may lack conscious experience. Empathy failure here means lacking the data empathy provides rather than ignoring the data empathy provides (which, as you note, is common).

I've got a few prompts that are somewhat promising in terms of clearly showing that GPT4 is unable to correctly predict human behavior driven by human empathy. The prompts are basic thought experiments where a person has two choices: an irrational yet empathic choice, and a rational yet non-empathic choice. GPT4 does not seem able to predict that smart humans do dumb things due to empathy, unless it is prompted with such a suggestion. If it had empathy itself, it would not need to be prompted about empathy.
replies(1): >>int_19+Dq6
◧◩◪◨⬒⬓⬔⧯▣▦▧▨◲
216. int_19+Dq6[view] [source] [discussion] 2023-11-19 22:19:37
>>wilder+Vj5
Can you give some examples of such prompts?
◧◩◪◨⬒⬓⬔⧯▣
217. dagaci+eZ7[view] [source] [discussion] 2023-11-20 08:49:34
>>nprate+wI4
Forked:

https://twitter.com/satyanadella/status/1726509045803336122

"to lead a new advanced AI research team"

I would assume that Microsoft negotiated significant rights with regards to R&D and any IP.

replies(1): >>nprate+Jg8
◧◩◪◨⬒⬓⬔⧯▣▦
218. nprate+Jg8[view] [source] [discussion] 2023-11-20 10:39:24
>>dagaci+eZ7
I wouldn't call starting from zero "forking".
replies(1): >>dagaci+tx8
◧◩◪◨⬒⬓⬔⧯▣▦▧
219. dagaci+tx8[view] [source] [discussion] 2023-11-20 12:35:16
>>nprate+Jg8
What is starting from zero exactly?
◧◩◪◨⬒
220. Closi+UWb[view] [source] [discussion] 2023-11-21 06:43:11
>>peyton+EL
It’s trivial to trip up humans too.

“What do cows drink?” (Common human answer: Milk)

I don’t think the test of AGI should necessarily be an inability to trip it up with specifically crafted sentences, because we can definitely trip humans up with specifically crafted sentences.
