Using ML to provide feedback is a bad idea. Most ML techniques latch on to surface features of the text rather than its deeper structure, so it would just make it easy for people to reword their mean comments ("this is just stupid" becomes "What an incoherent piece of gobbledygook" or something like that, which might make things funnier, but I doubt it would help).
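To make the "surface features" point concrete, here's a toy sketch (purely hypothetical, not how any real moderation system works): a bag-of-words meanness scorer that only counts flagged tokens, so the reworded insult sails straight past it. The word list and function name are invented for illustration.

```python
# Hypothetical bag-of-words "meanness" scorer: it sees only surface
# tokens, never tone or intent.
MEAN_WORDS = {"stupid", "dumb", "idiot", "worthless"}  # invented word list

def meanness_score(comment: str) -> int:
    """Count how many flagged tokens appear in the comment."""
    tokens = comment.lower().split()
    return sum(1 for t in tokens if t.strip(".,!?\"") in MEAN_WORDS)

# The original insult trips the scorer...
print(meanness_score("this is just stupid"))                       # 1
# ...but the reworded version, equally mean, scores clean.
print(meanness_score("What an incoherent piece of gobbledygook"))  # 0
```

Real classifiers are fancier than this, but the failure mode is the same: anything keyed to surface vocabulary can be dodged by a thesaurus.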
1) who votes on good comments
2) who votes on whose comments
3) who votes a lot / a little
But mostly (1).
I take your point about ML being superficial. But if it's being used at all, shouldn't users be told what the robo-brain thinks of them?
Your excellent example of a rewording might fool a lot of humans too (see pg's article another commenter linked to ... Ctrl+F "DH4").