All due respect to Jan here, though. He's being (perhaps dangerously) honest, he genuinely believes in AI safety, and he's an actual research expert, unlike me.
Care to explain? Absurd how? An internal contradiction somehow? Unimportant for some reason? Impossible somehow?