zlacker

[parent] [thread] 33 comments
1. convex+(OP)[view] [source] 2023-11-18 03:08:44
Sutskever: "You can call it (a coup), and I can understand why you chose this word, but I disagree with this. This was the board doing its duty to the mission of the nonprofit, which is to make sure that OpenAI builds AGI that benefits all of humanity." Scoop: theinformation.com

https://twitter.com/GaryMarcus/status/1725707548106580255

replies(4): >>jojoba+jm >>mym199+Wu >>moffka+uQ >>wheele+xQ
2. jojoba+jm[view] [source] 2023-11-18 05:50:45
>>convex+(OP)
The moment they lobotomized their flagship AI chatbot into a particular set of political positions the "benefits of all humanity" were out the window.
replies(2): >>lijok+Po >>emoden+1t
◧◩
3. lijok+Po[view] [source] [discussion] 2023-11-18 06:12:11
>>jojoba+jm
If they hadn’t done that, would they have been able to get to where they are? Goal-oriented teams don’t tend to care about something as inconsequential as this
replies(1): >>Booris+ou
◧◩
4. emoden+1t[view] [source] [discussion] 2023-11-18 06:56:16
>>jojoba+jm
One could quite reasonably dispute the notion that being allowed to generate hate speech or whatever furthers the benefits of all humanity.
replies(2): >>jojoba+4w >>oska+9T
◧◩◪
5. Booris+ou[view] [source] [discussion] 2023-11-18 07:08:18
>>lijok+Po
I don't agree with the "noble lie" hypothesis of current AI. That being said, I'm not sure why you're couching it that way: they got where they are because they spent less time than their competitors trying to inject safety at a time when capabilities didn't make it unsafe.

Google could have given us GPT-4 if they weren't busy tearing themselves asunder with people convinced a GPT-3 level model was sentient, and now we see OpenAI can't seem to escape that same poison

replies(1): >>lmm+lG
6. mym199+Wu[view] [source] 2023-11-18 07:12:45
>>convex+(OP)
Something that benefits all of humanity in one person's or organization's eye can still have severely terrible outcomes for sub-sections of humanity.
replies(1): >>edgyqu+lv
◧◩
7. edgyqu+lv[view] [source] [discussion] 2023-11-18 07:16:38
>>mym199+Wu
No it can't, that's literally a contradictory statement
replies(1): >>midasu+CQ
◧◩◪
8. jojoba+4w[view] [source] [discussion] 2023-11-18 07:23:39
>>emoden+1t
It happily answers what good Obama did during his presidency but refuses to answer about Trump's, for one. It doesn't say "nothing", just gives you boilerplate about being an LLM and not taking political positions. How much of that would count as hate speech?
replies(2): >>Arisak+Fx >>jakder+xH
◧◩◪◨
9. Arisak+Fx[view] [source] [discussion] 2023-11-18 07:40:09
>>jojoba+4w
I just asked it, and oddly enough it answered both questions, listing items and adding "It's important to note that opinions on the success and impact of these actions may vary".

I wouldn't say "refuses to answer" for that.

◧◩◪◨
10. lmm+lG[view] [source] [discussion] 2023-11-18 08:59:25
>>Booris+ou
> Google could have given us GPT-4 if they weren't busy tearing themselves asunder with people convinced a GPT-3 level model was sentient,

Doubt. When was the last time Google showed they had the ability to execute on anything?

replies(1): >>Booris+JW
◧◩◪◨
11. jakder+xH[view] [source] [discussion] 2023-11-18 09:07:55
>>jojoba+4w
>It happily answers what good Obama did

"happily"? wtf?

12. moffka+uQ[view] [source] 2023-11-18 10:25:14
>>convex+(OP)
"He said what about my hair?!"

"..."

"The man's gotta go."

- Sutskever, probably

replies(1): >>justin+za3
13. wheele+xQ[view] [source] 2023-11-18 10:25:34
>>convex+(OP)
That "the most important company in the world" bit is so out of touch with reality.

Imagine the hubris.

replies(2): >>bl0rg+f31 >>mycolo+Nd1
◧◩◪
14. midasu+CQ[view] [source] [discussion] 2023-11-18 10:25:48
>>edgyqu+lv
The Industrial Revolution had massive positive outcomes for humanity as a whole.

Those who lost their livelihoods and then died did not get those positive outcomes.

replies(1): >>bambax+511
◧◩◪
15. oska+9T[view] [source] [discussion] 2023-11-18 10:47:14
>>emoden+1t
'Hate speech' is not an objective category, nor can a machine feel hate
◧◩◪◨⬒
16. Booris+JW[view] [source] [discussion] 2023-11-18 11:18:04
>>lmm+lG
My comment: "Google could execute if not for <insert thing they're doing wrong>"

How is your comment doubting that? Do you have an alternative reason, or do you think they're executing and I mistyped?

replies(1): >>lmm+1L3
◧◩◪◨
17. bambax+511[view] [source] [discussion] 2023-11-18 11:50:05
>>midasu+CQ
It could be argued that the Industrial Revolution was the beginning of the end.

For instance, it's still very possible that humanity will eventually destroy itself with atomic bombs (getting more likely every day).

replies(1): >>lordfr+Rg1
◧◩
18. bl0rg+f31[view] [source] [discussion] 2023-11-18 12:04:27
>>wheele+xQ
I'd argue they are the closest to AGI (how far off that is no one knows). That would make them a strong contender for the most important company in the world in my book.
replies(1): >>wheele+h61
◧◩◪
19. wheele+h61[view] [source] [discussion] 2023-11-18 12:24:59
>>bl0rg+f31
AGI without a body is just a glorified chatbot that is dependent on available, human-provided resources.

To create true AGI, you would need to make the software aware of its surroundings and provide it with a way to experience the real world.

replies(3): >>mritch+Zc1 >>bl0rg+3n1 >>snoman+hV1
◧◩◪◨
20. mritch+Zc1[view] [source] [discussion] 2023-11-18 13:15:09
>>wheele+h61
vision API is pretty good, have you tried it?
◧◩
21. mycolo+Nd1[view] [source] [discussion] 2023-11-18 13:19:27
>>wheele+xQ
"Most important company in the world" is text from a question somebody (I think the journalist?) asked, not from Sutskever himself.
replies(1): >>wheele+sl1
◧◩◪◨⬒
22. lordfr+Rg1[view] [source] [discussion] 2023-11-18 13:34:32
>>bambax+511
> It could be argued that the Industrial Revolution was the beginning of the end.

"Many were increasingly of the opinion that they’d all made a big mistake in coming down from the trees in the first place. And some said that even the trees had been a bad move, and that no one should ever have left the oceans"

replies(1): >>Calami+2m3
◧◩◪
23. wheele+sl1[view] [source] [discussion] 2023-11-18 14:02:48
>>mycolo+Nd1
I know. I was quoting the article.
replies(1): >>mycolo+Vk2
◧◩◪◨
24. bl0rg+3n1[view] [source] [discussion] 2023-11-18 14:12:26
>>wheele+h61
Even if that was true, do you think it would be hard to hook it up to a Boston Dynamics robot and potentially add a few sensors? I reckon that could be done in an afternoon (by humans), or a few seconds (by the AGI). I feel like I'm missing your point.
replies(2): >>wheele+pA1 >>Philpa+kH1
◧◩◪◨⬒
25. wheele+pA1[view] [source] [discussion] 2023-11-18 15:30:55
>>bl0rg+3n1
Well, we don't know how hard it is. But if it hasn't been done yet, it must be much harder than most people think.

If you do manage to make a thinking, working AGI machine, would you call it "a living being"?

No, the machine still needs to have individuality, a way to experience the "oneness" that all living humans (and perhaps animals, we don't know) feel. Some call it "a soul", others "consciousness".

The machine would have to live independently from its creators, to be self-aware, to multiply. Otherwise, it is just a shell filled with random data gathered from the Internet and its surroundings.

◧◩◪◨⬒
26. Philpa+kH1[view] [source] [discussion] 2023-11-18 16:11:08
>>bl0rg+3n1
It's so incredibly not-difficult that Boston Dynamics themselves already did it https://www.youtube.com/watch?v=djzOBZUFzTw
◧◩◪◨
27. snoman+hV1[view] [source] [discussion] 2023-11-18 17:23:49
>>wheele+h61
AGI with agent architectures (ie giving the AI access to APIs) will be bonkers.

An AI without a body, but access to every API currently hosted on the internet, and the ability to reason about them and compose them… that is something that needs serious consideration.

It sounds like you’re dismissing it because it won’t fit the mold of sci-fi humanoid-like robots, and I think that’s a big miss.

◧◩◪◨
28. mycolo+Vk2[view] [source] [discussion] 2023-11-18 19:38:21
>>wheele+sl1
But it doesn't make sense for the journalist to have hubris about OpenAI.
◧◩
29. justin+za3[view] [source] [discussion] 2023-11-19 00:22:57
>>moffka+uQ
George Lucas's neck used to have a blog [0] but it's been inactive in recent years. If Ilya reaches a certain level of fame, perhaps his hair will be able to persuade George's neck to come out of retirement and team up on a YouTube channel or something.

[0] https://georgelucasneck.tumblr.com/

◧◩◪◨⬒⬓
30. Calami+2m3[view] [source] [discussion] 2023-11-19 01:44:23
>>lordfr+Rg1
One of my favorite thought nuggets from Douglas Adams
◧◩◪◨⬒⬓
31. lmm+1L3[view] [source] [discussion] 2023-11-19 04:29:22
>>Booris+JW
Your comment was "Google could execute if not for <thing extremely specific to this particular field>". Given Google's recent track record I think any kind of specific problem like that is at most a symptom; their dysfunction runs a lot deeper.
replies(1): >>Booris+E34
◧◩◪◨⬒⬓⬔
32. Booris+E34[view] [source] [discussion] 2023-11-19 07:34:33
>>lmm+1L3
If you think a power structure that allows people to impose their will in a way that doesn't align with delivering value to your end user is "extremely specific to this particular field", I don't think you've reached the table stakes for examining Google's track record.

There's nothing "specific" about being crippled by people pushing an agenda, you'd think the fact this post was about Sam Altman of OpenAI being fired would make that clear enough.

replies(1): >>lmm+qk6
◧◩◪◨⬒⬓⬔⧯
33. lmm+qk6[view] [source] [discussion] 2023-11-19 22:10:51
>>Booris+E34
If you were trying to express "a power structure that allows people to impose their will in a way that doesn't align with delivering value to your end user", writing "tearing themselves asunder with people convinced a GPT-3 level model was sentient" was a very poor way to communicate that.
replies(1): >>Booris+cgb
◧◩◪◨⬒⬓⬔⧯▣
34. Booris+cgb[view] [source] [discussion] 2023-11-21 01:50:52
>>lmm+qk6
It's a great way since I'm writing for people who have context. Not everything should be written for the lowest common denominator, and if you lack context you can ask for it instead of going "Doubt. <insert comment making it clear you should have just asked for context>"