zlacker

1. dkarra+(OP)[view] [source] 2023-11-20 03:17:45
The problem with the idealistic "we do alignment research as we develop AGI, and don't care about money" angle is that... you are not the only ones doing it. And OpenAI is trying to do it with its hands tied behind its back (non-profit status and vibes). There are, and will be, companies (like Anthropic) doing the same work themselves; they will do it for profit on the side, rake in billions, possibly become the most valuable company on Earth, build massive research and development labs, etc. Then they will define what alignment is, not OpenAI. So for OpenAI to reach its goal, if they want to do it themselves that is, they need to compete on capitalistic grounds as well; there is no way around it.
replies(1): >>0xDEAF+1a4
2. 0xDEAF+1a4[view] [source] 2023-11-21 01:53:14
>>dkarra+(OP)
>There are and will be companies (like Anthropic) doing the same work themselves

My understanding is that Anthropic is even more idealistic than OpenAI. I believe it was founded by a group of OpenAI people who quit because they felt OpenAI wasn't cautious enough.

In any case, ultimately it depends on the industry structure. If there are just a few big players, and most of them are idealistic, things could go OK. If there are a ton of little players, that's when you risk a race to the bottom, because there will always be someone willing to bend the rules to gain an advantage.
