IME, most folks at Anthropic, OpenAI, etc. who are freaking out about these things never defined the problem well, and they were typically engaging with highly theoretical models rather than the real capabilities of a well-defined, accomplished (or clearly accomplishable) system. That made it too off-putting for me to consider roles there in the past, since these were typically the folks I knew working there.
Sam may have added a lot of groundedness, but I don't know, of course, because I wasn't there.