zlacker

[parent] [thread] 6 comments
1. jrm4+(OP)[view] [source] 2023-11-20 02:45:59
Lol. So I didn't get past the first few paragraphs before the paywall, and I didn't need to.

I appreciate the idea of being a "not-greedy typical company," but there's a reason you separate, e.g., university-type research or non-profits from private companies.

Trying to make up something in the middle is the exact sort of naivete you can ALWAYS expect from Silicon Valley.

replies(2): >>cmrdpo+82 >>calf+Xh
2. cmrdpo+82[view] [source] 2023-11-20 03:00:33
>>jrm4+(OP)
Yeah: https://www.youtube.com/watch?v=j_JW9NwjLF8

I'd say yes, Sutskever is... naive? though very smart. Or just utopian. It seems he couldn't get the scale he needed/wanted out of a university (or Google) research lab. But the former at least would have bounded things better in the way he would have preferred, from an ethics POV.

Jumping into bed with Musk and Altman and hoping for ethical non-profit "betterment of humanity" behaviour is laughable. Getting access to capital was obviously tempting, but ...

As for Altman. No, he's not naive. Amoral, and likely proud of it. JFC ... Worldcoin... I can't even...

I don't want either of these people in charge of the future, frankly.

It does point to the general lack of funding for R&D of this type of thing. Or it's still too early to be doing this kind of thing at scale. I dunno.

Bleak.

replies(2): >>jrm4+QP2 >>physic+u65
3. calf+Xh[view] [source] 2023-11-20 05:26:27
>>jrm4+(OP)
The professors at the summer seminar at the Simons Institute complained/explained (with Ilya himself present) that this research was impossible to do in a university lab setting, because of the scale needed.

So I would say ChatGPT exists because its creators specifically transgressed the traditional division of universities vs industry. The fact that this transgressive structure is unstable is not surprising, at least in retrospect.

Indeed, the only other approach I can think of is a massive government project. But again, with gov't bureaucracy, a researcher would be limited by legal issues of big data vs copyright, etc., which many have pointed out OpenAI was also able to circumvent when they basically used the entire Internet and all of humanity's books, etc., as their training source.

replies(1): >>jrm4+LO2
4. jrm4+LO2[view] [source] [discussion] 2023-11-20 18:44:11
>>calf+Xh
Kind of feels like "bureaucracy actually working as intended."

I think it at least remains to be seen whether "rampant copyright infringement" is necessarily a good thing here.

5. jrm4+QP2[view] [source] [discussion] 2023-11-20 18:48:02
>>cmrdpo+82
Bleak? Why?

Honestly, this seems like a pretty good outcome.

Which is to say, I think the fearmongering sentient AI stuff is silly -- but I think we are all DEFINITELY better off with an ugly rough-and-tumble visible rocky start to the AI revolution.

Weed out the BS; equalize out who actually has access to the best stuff BEFORE some jerk company can scale up fast and dominate market share; let a de-facto "open source" market have a go at the whole thing.

replies(1): >>cmrdpo+0H4
6. cmrdpo+0H4[view] [source] [discussion] 2023-11-21 05:17:23
>>jrm4+QP2
The bleak reality is that OpenAI became that jerk company, and now, in its effective demise, those reins are handed over to Microsoft, who have already demonstrated (with Copilot) a similar lack of ... concern ... for IP rights / authorship, and various other ethical aspects.

And bleak because there doesn't seem to be an alternative where the people making these decisions are responsible to an electorate or public in some democratic fashion. Just a bunch of people with $$ and influence who set themselves up to be arbiters ... and

It's just might makes right.

And bleak because in this case the "mighty" are often the very people who made fun of arts students who took the philosophy and ethics classes in school that could at least offer some insight in these issues.

7. physic+u65[view] [source] [discussion] 2023-11-21 09:07:46
>>cmrdpo+82
> Jumping into bed with Musk and Altman and hoping for ethical non-profit "betterment of humanity" behaviour is laughable.

Now it's laughable, but OpenAI was founded in 2015. I don't know about Altman, but Musk was very respected at the time. He didn't start going off the deep end until 2017. "I'm motivated by... a desire to think about the future and not be sad," was something he said during a TED interview in 2017, and people mostly believed him.
