It's not surprising, since it's very hard to train for or benchmark.
I should also add that I don't think anyone serious believes long-form writing or ideation is what they're for; assuming an LLM would be good at that is a side effect of anthropomorphism and confusion. That doesn't mean an LLM isn't good at summarizing, turning unstructured data into structured data, or all the other "cognitive tasks" we expect from AI.