The truly irritating thing is that even that wouldn't necessarily be enough, because so many sites run live A/B/C/[n] title tests simultaneously against randomized sets of users, then keep whichever variant wins on clicks (or whatever the metric is) first. Even without any manual shenanigans. So there's a window during which merely refreshing, or browsing from a different IP, will yield a different title. Sometimes evidence is left in the URL or in interactions with older systems on the site, but that's all baroque. There are so, so many edge cases in grabbing titles.
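To make that concrete, here's a minimal sketch of how that bucketing typically works (all names and titles here are hypothetical): hash a stable visitor identifier into a variant, so the same cookie always sees the same title, while a cookieless scraper hitting from a fresh IP effectively rolls the dice.

    import hashlib

    # Hypothetical sketch: assign each visitor to a title variant by hashing
    # a stable identifier (cookie value, IP, etc.). Deterministic per visitor,
    # but a cookieless request from a new IP lands in an arbitrary bucket.
    TITLE_VARIANTS = [
        "Scientists Discover X",
        "You Won't Believe What Scientists Found About X",
        "New Study Suggests X May Be Possible",
    ]

    def title_for_visitor(visitor_id: str) -> str:
        digest = hashlib.sha256(visitor_id.encode("utf-8")).digest()
        return TITLE_VARIANTS[digest[0] % len(TITLE_VARIANTS)]

    # Same identifier gets the same title every time:
    assert title_for_visitor("cookie-abc123") == title_for_visitor("cookie-abc123")

    # Fresh IPs with no cookie can each draw a different variant:
    for vid in ("203.0.113.7", "198.51.100.42", "192.0.2.99"):
        print(vid, "->", title_for_visitor(vid))

Until the test resolves, every one of those titles is "the" title, which is exactly what makes a canonical grab so messy.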
Probably not worth the effort on HN to try to automate this versus just treating it case by case; it doesn't usually seem to be a problem. "Premature optimization is the root of all evil" and all that.
Edit: or archive.org, as dang says, but I don't know whether even they see all versions of a page when a simultaneous test is running. Regrettably that seems to be standard practice even at reputable places.