> What were you hoping to achieve by doing that?
If lease times are set really long, >=12 hours, you find out the next day when everything is broken (or you get alerts in the middle of the night). If you set them to a randomized 15-90 minute span, things break almost immediately when you screw up the DHCP server.
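If you want to see what that looks like in practice, here's a rough sketch of stamping out per-host lease times in that window. It assumes dnsmasq, whose dhcp-host entries can (as I recall) take a lease time like 45m as the last field; the MACs and output handling are placeholders, so adapt it to whatever server you actually run.

```python
import random

def random_lease_minutes(lo=15, hi=90):
    # Pick a lease length somewhere in the 15-90 minute window.
    return random.randint(lo, hi)

def dnsmasq_host_lines(macs):
    # One dhcp-host= line per MAC, each with its own randomized lease time.
    # Assumption: dnsmasq accepts a time like "45m" as the last dhcp-host field.
    for mac in macs:
        yield f"dhcp-host={mac},{random_lease_minutes()}m"

if __name__ == "__main__":
    for line in dnsmasq_host_lines(["11:22:33:44:55:66", "aa:bb:cc:dd:ee:ff"]):
        print(line)
```

In practice you'd regenerate these lines from your host inventory and drop them into a config snippet, rather than hand-editing them.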
It's only happened a couple of times, but I've definitely done it (e.g. bridged a couple of networks that shouldn't have been connected).
But mostly, it's the other two things: it gives me a list of hosts active right now, and if the DHCP server is subtly broken I get an early sentinel signal that something is wrong (and it tends to be a partial rather than a complete failure).
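To make the "hosts active now" part concrete: with 15-90 minute leases, the server's lease table is effectively a live inventory. Here's a rough sketch of reading it, assuming a dnsmasq-style lease file (the path and the expiry-epoch-first line format are assumptions; adjust for whatever server you run):

```python
import time
from pathlib import Path

# Assumption: default dnsmasq lease file location and format
# (one lease per line: expiry-epoch MAC IP hostname client-id).
LEASE_FILE = Path("/var/lib/misc/dnsmasq.leases")

def active_leases(path=LEASE_FILE):
    # Yield (ip, mac, hostname) for every lease that hasn't expired yet.
    now = time.time()
    for line in path.read_text().splitlines():
        fields = line.split()
        if len(fields) < 4:
            continue
        expiry, mac, ip, hostname = fields[:4]
        if float(expiry) > now:  # lease still current, so the host was seen recently
            yield ip, mac, hostname

if __name__ == "__main__":
    for ip, mac, hostname in sorted(active_leases()):
        print(f"{ip:15} {mac} {hostname}")
```

With long leases the same table is full of hosts that may have left hours ago; with short ones it stays close to reality.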
One more bonus: if I move something to a static lease outside the pool, it'll renumber within a reasonable time, and I don't need to go kick link state to get it to request again.
Things like really big caches and really long lease times are good for average performance, and they can let you ride out small problems. The flip side is that they tend to mask problems and to create big demand transients when they do expire. The trick is always to find a good middle ground.
Extremes of any sort generally aren't good. You fucked around and you found that out.