Yet where OpenAI’s attempt at signaling may have been drowned out by other, even more conspicuous actions taken by the company, Anthropic’s signal may have simply failed to cut through the noise. By burying the explanation of Claude’s delayed release in the middle of a long, detailed document posted to the company’s website, Anthropic appears to have ensured that this signal of its intentions around AI safety has gone largely unnoticed [1].
That is indeed quite the paper to write whilst on the board of OpenAI, to say the least.
[1] https://cset.georgetown.edu/publication/decoding-intentions/
It strikes me as exactly the sort of thing she (Helen Toner, a co-author of the paper) should be writing given OpenAI's charter. Recognizing and rewarding work towards AI safety is good practice for an organization whose entire purpose is the promotion of AI safety.