For example, I usually come off as relatively skeptical within the HN crowd, but I'm actually pushing for more usage at work. This kind of "opinion arbitrage" is common with new technologies.
Tends to be like that with subjects once feelings get involved. Make any skepticism public, even if you don't feel strongly either way, and you get one side's extremists yelling at you about X. At the same time, say anything positive and you get the zealots from the other side yelling at you about Y.
Those of us who tend not to be so extreme get pushback from both sides, either in the same conversations or in different places, while each side sees you as belonging to "the other side" when in reality you're just trying to take a somewhat balanced approach.
These "us vs them" never made sense to me, for (almost) any topic. Truth usually sits somewhere around the middle, and a balanced approach seems to usually result in more benefits overall, at least personally for me.
Imo it does, because it frames the underlying assumptions around your comment. E.g., there were some very pro-AI folks who think it's not just going to replace everything, but already has. That's an extreme example, of course.
I view it as valuable anytime there's extreme hype, party lines, etc. If you don't frame your position yourself, others will, and they can misunderstand your comment when viewing it through the wrong lens.
Not a big deal of course, but neither is putting a qualifier on a comment.
These things are also not binary; they span a whole grid of positions.
Huh? The recipe for how to be in this position is literally in the readme of the linked project. You don't even have to believe it, you just have to work it.
I think that's just the opinion of someone who doesn't think AI currently lives up to the hype but is optimistic about developing it further, not really that weird of a position in my opinion.
Personally, I'm moving more into the "I think AI is good enough to replace humans, and I am against that" category.