For example, I can post a comment decrying Blub with a snide remark (e.g., "You wrote a 1,000-line Blub program? Was it 500 getters and 500 setters?" in a thread discussing software projects) that is both information-free and mean (perhaps Blub wasn't the author's preferred choice, but was chosen for him or required in order to build an application for the iBlubber). People on this site generally dislike Blub, so the comment will get upvotes without adding any value to the discussion (an example of adding value would be saying you were able to do this in 100 lines of Flub using its cool new hygienic macros, with a link to a paper on hygienic macros in Flub).
That's not to say all comment score data should be gone. Comment scores could still be kept, and comments could still be displayed on stories in the order in which they're displayed now (a mix of comment score and how recently each was posted). Generally, what I've found is that the comments showing up _first_ tend to be of higher quality, i.e., the overall algorithm works more often than not.
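A rough sketch of that "mix of score and recency" ordering. The decaying-score formula and the half-life constant here are my assumptions for illustration, not the site's actual (unpublished) algorithm:

```python
import time

def rank_key(score, posted_at, now=None, half_life_hours=8.0):
    """Higher is better: the score decays with age, so a newer
    comment can outrank an older, higher-scored one."""
    now = time.time() if now is None else now
    age_hours = max(0.0, (now - posted_at) / 3600.0)
    return score * (0.5 ** (age_hours / half_life_hours))

# Hypothetical comments: (label, score, posted_at in seconds)
comments = [("old high-score", 20, 0), ("fresh low-score", 3, 23 * 3600)]
now = 24 * 3600  # pretend it's 24 hours after the first comment
ranked = sorted(comments, key=lambda c: rank_key(c[1], c[2], now),
                reverse=True)
print([label for label, _, _ in ranked])
# the fresh comment outranks the older one despite its lower score
```

The half-life is the knob: a short one favors recency, a long one favors raw score.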
[NB: I work at LinkedIn and we do this for connection counts: we want users to network with each other, but we don't want to make it a "who has the most connections" game. That's why when you have over 500 connections (which is perfectly legitimate and allowed), only "500+" is displayed as the count on your profile.]
Users should live or die by their votes on that comment. If you vote up the blub comment, you should personally get the downvotes for it too. Upvotes should expose you to the karmic downside of superficial comments.
Especially because the really good comments, the ones most deserving of upvotes, don't seem to get a lot of downvotes; watch the scores on a 'patio11 comment closely sometime to see an example.
This is similar to an idea I was toying with a while back but never got around to nailing down. Upvoting or downvoting an item should result in a change to the voter's account, but rather than moving the account's "score" up or down, the system records the vote by the "type" of item. If a series of items are categorized as "Gossip" and I vote them down, the system learns that I don't like "Gossip" items. An item could be in multiple categories ("Gossip" and "IT"), and my past voting would determine whether or not I would see the item on the page (super-roughly: "the item was categorized 50/50 Gossip/IT, dpk's Gossip score is -11 and IT score is 10, result is -1, don't show"). In effect, users would themselves be group-able by their votes, so if someone in your "group" posted an item, it would get an automatic bonus. If someone in your "anti-group" (someone nearly diametrically opposed to you) posted something, it would get a negative bonus.
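The per-category weighting above can be sketched as a weighted sum; the category names, scores, and 50/50 split are the hypothetical numbers from the example, and the combining rule (weighted sum, hide when negative) is my guess at one reasonable formalization:

```python
def should_show(item_categories, user_scores):
    """item_categories: {category: weight}, weights summing to 1.0.
    user_scores: {category: this user's cumulative vote score}.
    Show the item only if the weighted score is non-negative."""
    weighted = sum(weight * user_scores.get(cat, 0)
                   for cat, weight in item_categories.items())
    return weighted >= 0

# The worked example from the text: an item tagged 50/50 Gossip/IT,
# a user whose Gossip score is -11 and IT score is 10.
item = {"Gossip": 0.5, "IT": 0.5}
user = {"Gossip": -11, "IT": 10}
print(should_show(item, user))  # weighted sum is -0.5, so: False
```

Grouping users would then just be clustering these per-category score vectors, with the "anti-group" being the users whose vectors point the opposite way.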
Categorization would need to be put to the community, and would happen while the item is on what is currently termed the "new" page. Once the item is categorized, the various display scores (as loosely described above) would be computed and the result shown to users.
One side-effect of this is that it allows users to "shun" spammers into their own "group". They could spam all they want, but nobody would see it unless they were really excited about seeing spam.
The idea has (at least) one serious problem: it encourages group isolation. This could be partially mitigated by always showing a "best of group" block of links somewhere on the home page, which could encourage users to branch out a bit.
As I said, this idea is not fully fleshed out, hence the overuse of quotes and its nebulous, hand-wavy nature. It's a less punitive, more categorical system. It may not scale at all.