zlacker

[return to "Tell HN: Interviewed with Triplebyte? Your profile is about to become public"]
1. nabilh+C41[view] [source] 2020-05-23 15:57:13
>>winsto+(OP)
Assume for a moment I'm a bad-faith, nosy employer who reads HN on a Saturday morning. All it takes for me to match up my little stack of current employees' resumes is a person's city of residence, skills, and employment dates. If I'm that kind of employer, that's enough to raise my red flags. If prior employers are named outright, that's a 100% ID. If employment dates are paired with employment location, that's a 100% ID.

I've known employers like this. I've worked for employers like this. Employers are already monitoring social media. Third-party services are paid by employers to monitor for staff who might be looking at other jobs. Recruiters make it their mission to know who's looking and which employers are likely to need their services in the near future. This is much of why trust and discretion are the most important assets on both sides of hiring-related activities.

Triplebyte burning down their reputation as a recruitment avenue is one thing. Locking job searchers into reputation and livelihood risks inside Triplebyte's own reputation dumpster fire, on the Friday before a holiday weekend, during historic unemployment levels, in the middle of a fucking pandemic, is unforgivable. The CEO showing up in person with ham-fisted gaslighting (seriously?) in the middle of this self-made disaster makes me hope those comments don't get flagged out of future HN search results.

◧◩
2. TAForO+n91[view] [source] 2020-05-23 16:29:50
>>nabilh+C41
> those comments don't get flagged out of future HN search results.

Triplebyte is a YC company and HN is a YC site, so economic interests are aligned with nuking highly critical comments

◧◩◪
3. troyda+Pd1[view] [source] 2020-05-23 17:03:04
>>TAForO+n91
> economic interests are aligned with nuking highly critical comments

This is theoretically true, but the fact that it's been on the home page for 12 hours and has accumulated hundreds of critical comments, none of which any mod has touched, seems to (a) eliminate that possibility and (b) demonstrate that the risk is theoretical, not actual.

(Keep in mind that YC has thousands of investments, so whatever you think of their ethics or the incentives, a filter like this would be impractical and obvious. Also see "Not behaving in a way that damages the reputation of his/her company" on https://www.ycombinator.com/ethics/ - it's hard to imagine YC supporting this.)

◧◩◪◨
4. wolfga+mg1[view] [source] 2020-05-23 17:21:12
>>troyda+Pd1
In fact, the only (public) mod action was to put it back on the homepage after it tripped the flamewar detector and fell off.
◧◩◪◨⬒
5. gansty+pn1[view] [source] 2020-05-23 18:20:51
>>wolfga+mg1
This thread rose to the top group of the front page last night (you can see I posted here then; I happened to see it). Then it sank quickly and disappeared. I was a little dismayed, because the cynic in me suspected it had been removed for being antithetical to YC company success. I went to bed.

To my surprise, it was back up near the top this morning with almost a thousand votes and hundreds of comments. Triplebyte may have chosen to burn their reputation irreparably, but I have gained a lot of faith in YC and the mods here.

◧◩◪◨⬒⬓
6. dang+hz1[view] [source] 2020-05-23 20:00:34
>>gansty+pn1
It fell because of a software penalty called the flamewar detector. We review posts that get that penalty because there are often false positives. I saw it on the list last night and restored it (https://news.ycombinator.com/item?id=23280488). That was the only action any moderator took on the post. I'm glad I saw it quickly enough, because there would have been a nightmare of a flamewar about us 'suppressing' the post if we had missed this, when in reality it would just have been an accident of timing.

That raises the obvious question of why we have such software if it causes such problems, but the answer is simply that it helps more than it hurts, overall.
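(HN's actual ranking code isn't public, so the sketch below is only a rough illustration of the kind of mechanism described above, not the real implementation. The comment-to-upvote heuristic, the threshold, and all names are assumptions; the only parts taken from the comment are that a software penalty demotes a post and that penalized posts get human review because of false positives.)

    # Hypothetical sketch of a "flamewar detector"-style penalty (Python).
    # The real HN heuristic is not public; the ratio test, threshold, and
    # names below are invented purely for illustration.
    from dataclasses import dataclass

    @dataclass
    class Post:
        title: str
        points: int
        comments: int
        penalized: bool = False

    # Posts that trip the penalty go into a queue for moderators, since the
    # heuristic produces false positives that a human may want to restore.
    review_queue: list[Post] = []

    def flamewar_penalty(post: Post, ratio_threshold: float = 1.5) -> float:
        """Return a multiplier for the post's rank score (assumed heuristic:
        far more comments than upvotes looks like a flamewar)."""
        if post.points > 0 and post.comments / post.points > ratio_threshold:
            post.penalized = True
            review_queue.append(post)   # flagged for manual review
            return 0.2                  # heavy demotion; falls off the front page
        return 1.0                      # no penalty

    def rank(posts: list[Post]) -> list[Post]:
        # Stand-in score: points scaled by the penalty multiplier.
        return sorted(posts, key=lambda p: p.points * flamewar_penalty(p), reverse=True)

In a setup like this, restoring a false positive is just a moderator clearing the penalty and re-ranking, which matches the "saw it on the review list and restored it" step described above.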

◧◩◪◨⬒⬓⬔
7. alexpe+sM1[view] [source] 2020-05-23 21:33:35
>>dang+hz1
Hi dang, I sent you an email about this, but perhaps it would be useful to include a page on HN recording "recent moderator actions". This could make the process more transparent for users and help them understand your actions (rather than producing conspiracy theories every week).
◧◩◪◨⬒⬓⬔⧯
8. dang+l52[view] [source] 2020-05-24 00:12:20
>>alexpe+sM1
The question is whether that would raise more objections and protests than it would answer. Almost everything we do is defensible to the community, because if it weren't, we wouldn't do it in the first place. I say 'almost' because we make wrong guesses, but then we're happy to admit mistakes and fix them. That doesn't mean it's all self-explanatory, though. On the contrary, it can take a long time to explain because there are many complexities, tradeoffs, and non-obvious aspects.

Meta threads and discussions tend to invite objections from the litigious type of user. Such users are rarely satisfied, but have a ton of energy for meta argument, so it's easy to get into a situation where any answer you give leads to two or three fresh objections. Such objections have to be answered with great care, because if you slip up and say the wrong thing, people will use it to drum up a scandal (edit: and will quote it against you for years to come!). This consumes a lot of mental and emotional energy.

(Edit: btw, this is asymmetrical: the people raising objections and making accusations are under no such restriction. They can say anything without downside, no matter how false it is or what they accuse you of. They can make things up with impunity and people will believe them by default, because on the internet you are guilty until proven innocent, plus everyone loves the underdog. These are additional reasons why it's easy to end up in a situation where every comment you spend an hour painstakingly composing earns you a bunch more counterarguments and demands.)

These arguments tend to be repetitive, so you find yourself having to say the same things and defend against the same attacks and false accusations over and over. This is discouraging, and there's a high risk of burnout. Disgruntled users are a tiny minority, but there are more than enough of them to overwhelm our limited resources—it ends up being something like a DoS attack.

I fear this outcome, so we've always shied away from adding such a system. We want to be transparent, and we answer whatever questions people ask, but it feels safer to do it ad hoc as questions come up. There's no specific question you can't get an answer to, other than a few special cases like how HN's anti-abuse software works.

There's an opportunity cost issue too. The vast majority of the community is pretty happy with how we do things—I know that because if they weren't, we'd never hear the end of it, and then we'd say sorry and readjust until they were. I think it makes more sense to do things to keep the bulk of the community happy, or make them happier, than to pour potentially all our resources into placating a small minority—especially since, once you've done this job for a while (say, a week) you know that nothing you do will ever be completely right or please everyone.

On the other hand, if I could ever be persuaded that a full moderation log would satisfy everyone's curiosity and reduce the overhead of misinterpretation, complaints, imagined malfeasance, etc., then we'd be happy to do it.

This question has come up repeatedly, so if you're curious to read previous answers, see https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que....
