But HAL didn't act "on a whim"! The reason it killed the crew is not that it went rogue, but that it was following its instructions to keep the true purpose of the mission secret. If the crew is dead, they can't discover the truth.
In light of the current debate around AI safety, I think "unintended consequences" is a much more plausible risk than "spontaneously develops free will and decides humans are unnecessary".