If you truly believed that OpenAI had an ethical duty to pioneer AGI in order to ensure its safety, and you felt that Altman was lying to the board and jeopardizing that mission by chasing market opportunities and other venture-capital games, you might fire him to get the organization back on track.
"We can still push on large language models quite a lot, and we will do that": this sounds like continuing working on scaling LLMs.
"We need another breakthrough. [...] pushing hard with language models won't result in AGI.": this sounds like Sam Altman wants to do additional research into different directions, which in my opinion does make sense.
So, altogether, your quotes suggest that Sam Altman wants to continue scaling LLMs in the short to medium term while, in parallel, researching different approaches that might lead to another step toward AGI. I cannot see how this plan could have infuriated Ilya Sutskever.