zlacker

[return to "OpenAI is now everything it promised not to be: closed-source and for-profit"]
1. mellos+pe[view] [source] 2023-03-01 10:46:59
>>isaacf+(OP)
This seems like an important article, if for no other reason than that it brings the betrayal of OpenAI's foundational claim (one still brazenly present in its name) out of the obscurity of years of HN comments and into the public light and the mainstream.

They've achieved marvellous things, OpenAI, but the pivot, and the long-standing refusal to deal with it honestly, leaves an unpleasant taste and doesn't bode well for the future, especially considering the enormous ethical implications of holding the lead in this field.

◧◩
2. IAmNot+Po1[view] [source] 2023-03-01 17:35:21
>>mellos+pe
It is nice to see normies noticing and caring, but the article leaves out some details that obscure comments still stubbornly bring up: Musk founded it as a 501(c)(3) and put Altman in charge, and only once Musk had to leave over conflicts of interest did Altman found "OpenAI LP," the for-profit workaround that let them skip those pesky charity rules. That's when they stopped releasing models and weights, and started making their transparent claims that "the most ethical way to give people access is to charge them fucktons of money and rip the API away whenever we feel like it."
◧◩◪
3. camill+Lp1[view] [source] 2023-03-01 17:38:40
>>IAmNot+Po1
People keep forgetting we're talking about Sam Altman, someone who believed that scanning the retinas of the world's population in exchange for some crappy digital coin was a great idea and not a creepy spinoff of a shoddy sci-fi novel.
◧◩◪◨
4. norswa+PSx[view] [source] 2023-03-11 17:03:26
>>camill+Lp1
It's important to note that WorldCoin never gets to see the retinas: the pictures never leave the orb (and are deleted after the fact); only a hash of them does. The orb's hardware blueprints and its software are open source too.
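
(A minimal sketch of the scheme as described above: hash the capture on-device, transmit only the digest, discard the raw image. The function name and the plain SHA-256 stand-in are illustrative assumptions, not WorldCoin's actual iris-code pipeline.)

    import hashlib
    import os

    def enroll(image_path: str) -> str:
        """Hash a captured scan on-device; the raw image never leaves the device."""
        with open(image_path, "rb") as f:
            raw = f.read()
        digest = hashlib.sha256(raw).hexdigest()  # only this digest is ever transmitted
        os.remove(image_path)                     # raw capture deleted after the fact
        return digest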

The system attempts to solve an important problem: figuring out who's a human (not a robot) online. One could argue Sam is creating the problem as well as the solution, I suppose. Still, it's better than only having the problem.

Right now the problem does not seem extremely pressing, but I believe it might become more so.

Even if we don't see rampant abuse of AIs masquerading as humans, another ambition of WorldCoin is to perform wide-ranging experiments in UBI, and being able to distinguish "real" humans in that context is absolutely crucial. That goes double in the third world, where people often simply don't have IDs (and where the available forms of ID can easily be obtained through bribery).

(That being said, I broadly agree with the criticism of OpenAI laid out in the above article. Still, we can have nuance.)
