zlacker

[return to "OpenAI is now everything it promised not to be: closed-source and for-profit"]
1. mellos+pe[view] [source] 2023-03-01 10:46:59
>>isaacf+(OP)
This seems like an important article, if for no other reason than that it brings the betrayal of OpenAI's foundational claim (still brazenly present in its name) out of the obscurity of years-old HN comments and into the public light and the mainstream.

OpenAI has achieved marvellous things, but the pivot and the long-standing refusal to deal with it honestly leave an unpleasant taste, and don't bode well for the future, especially considering the enormous ethical implications of holding the advantage in the field they lead.

2. ripper+yr[view] [source] 2023-03-01 12:38:21
>>mellos+pe
To quote Spaceballs, they're not doing it for money, they're doing it for a shitload of money.
3. djyaz1+N21[view] [source] 2023-03-01 16:16:09
>>ripper+yr
They are following the Residential Real Estate Developer Playbook: always name your for-profit exploitation project after the beautiful organic thing you destroyed to pave the way for it...

Examples: Whispering Pines, Blue Heron Bay, OpenAI

4. startu+wa1[view] [source] 2023-03-01 16:45:18
>>djyaz1+N21
Hasn’t OpenAI published all the key research results needed to reproduce ChatGPT? And made their model available to literally everyone? And contributed more than anyone else to AI alignment/safety?

To me it looks like nearly every other player, including the open source projects, is in it for short-term fame and profit, while it’s OpenAI that is playing the long game of AI alignment.

5. Fillig+Uk1[view] [source] 2023-03-01 17:22:38
>>startu+wa1
You're not wrong, but existential threats and possible extinction are in the future; maybe ten to fifteen years away if we're lucky.

Meanwhile, we don't get to play with their models right now. Obviously that's what we should be concerned about.

6. Julesm+mn1[view] [source] 2023-03-01 17:30:12
>>Fillig+Uk1
Among all the accepted threats to humanity's future, AI is one of the least well-founded at this point. We all grew up with this cautionary fiction. But unless you know something everyone else doesn't, the near-term existential threat of AI is relatively low.
7. mach1n+IP1[view] [source] 2023-03-01 19:25:52
>>Julesm+mn1
>Near term relatively low

Precisely. Above 1%, so in the realm of the possible, but definitely not above 50%, and probably not above 5% in the next 10-15 years. My guesstimate is around 1-2%.

But expand the time horizon to the next 50 years and the cognitive fallacy of underestimating long-term progress kicks in. That’s the timescale that actually produces scary-high existential risk on our current trajectory of progress.
