I don't think he meant it that way. He was well aware he didn't have all the answers. I believe he was talking not about the answers but about the questions: which ones are people spending their time on? I think he was saying that the questions most people in AI are spending their time on are not going to give us strong AI. Is that such a controversial claim? I expect most people in the field would agree with it.
Here is something he said to me in April 2009 in a discussion about educational software for the OLPC:
Marvin Minsky: "I've been unsuccessful at getting support for a major project to build the architecture proposed in 'The Emotion Machine.' The idea is to make an AI that can use multiple methods and commonsense knowledge--so that whenever it gets stuck, it can try another approach. The trouble is that most funding has come under the control of statistical and logical practitioners, or people who think we need to solve low-level problems before we can deal with human-level ones."
Maybe (I'll venture a wild guess) it's just that investing in statistical AI research currently serves the goals of the advertising industry that funds most of the research these days... You're the product, and all that.