I can't speak to the exact split, or what counts as post-training versus pre-training at various labs, but I am exceedingly confident that all labs post-train for effectiveness in specific domains.
I claimed that OpenAI overindexed on getting away with aggressive post-training on top of old pre-training checkpoints. Gemini and Anthropic correctly realized that fresh pre-training checkpoints are needed to get the best gains in their latest model releases (which get post-trained too).