zlacker

[parent] [thread] 3 comments
1. dahwol+(OP)[view] [source] 2023-05-16 19:45:03
It's easy to tell whether an AI company head genuinely cares about AI's impact on society: the ones who don't only ever talk about AI's output, never its input.

They train their models on the sum of humanity's digital labor and creativity, without permission, attribution, or compensation. You'll never hear a word about this from them, which means ethics isn't a priority. It's all optics.

replies(1): >>precom+74
2. precom+74[view] [source] 2023-05-16 20:05:16
>>dahwol+(OP)
Yep. No page on OpenAI's website about the thousands of underpaid third-world workers who sit and label the data. They'll try to build momentum and avoid the "uncomfortable" questions at all costs.
replies(1): >>dahwol+e6
3. dahwol+e6[view] [source] [discussion] 2023-05-16 20:16:54
>>precom+74
I empathize with that issue, especially the underpaid part, but at least superficially that work is still a value exchange based on consent: you do the labeling, you get paid (poorly).

Yet for the issue I raised, there's no value exchange at all: no permission sought from, and no compensation paid to, the people who did the actual work of producing the training material.

replies(1): >>precom+Ra
4. precom+Ra[view] [source] [discussion] 2023-05-16 20:40:50
>>dahwol+e6
Oh yeah. And labeling it "AI" further obfuscates things. Apart from small gestures catered to people whose work is very "unique" / identifiable, no one else will get a kickback. They only need to kick the can down the road for a couple more years and then it'll become a non-issue as link rot takes over. Or maybe they'll switch to non-public-domain material through secret deals with publishers.

Heck, sometimes even Google doesn't pay the people who introduce new languages to its translation thingy.

https://restofworld.org/2023/google-translate-sorani-kurdish...
