Hi everyone, we developed a tool that can quickly tell you the overall sentiment of messages mentioning a given word. For now it’s Hacker News only, but we think this thing has potential.
Whether you’re a startup, a solopreneur, or a product manager, you can track trends with it. We’re also planning to add predictive tools and real-time analysis. Operationally, this tool is a lot cheaper than Sprout Social and other similar solutions on the market.
No sign-up required. Just type and see results.
I'd love your feedback on the tool's usefulness and any ideas for improvement.
Thanks for sharing :) TIL sentiment for “communism” is slightly less negative than for “capitalism” on here! Though both are, surprisingly, mostly neutral.
Prior to this I’ve mostly only seen it on Dribbble.
I actually like this style a lot, and I wish more apps would use it. But at this point I thought that this style was one that “came and went” before it saw any significant actual use in any apps or OSes. Maybe there is still hope after all :)
Edit: oh, and of course I had to try asking your tool for sentiment about neumorphic design after this. It returned my own comment lol :p and called it “neutral”. Is it only evaluating the first paragraph of the comment that the word appears in? (Also, I guess people more commonly say “neumorphism” than “neumorphic design”, and maybe that’s why my query returned my own comment.)
Note: I would suggest just removing dark mode for now. Works WAY better in light mode. I almost missed the light mode, and that would have been too bad.
Here's my user test: https://news.pub/?try=https://www.youtube.com/embed/2eac5XZe...
Honestly makes me pretty happy you called out the theme. I've always enjoyed this style of design and was sad to see that it never picked up steam. I love how it seems to combine a digital Material design with a more physical and real feeling. I'm doing my part to bring it back.
The tool definitely has some kinks in it that we have plans to iron out over time; we just wanted to get it in front of people to see if anybody would even like it. Right now it's just grabbing the first 256 tokens and categorizing on that, and it grabs the first 5000 comments (split over 5 calls) over the past month.
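For anyone curious what that pipeline might look like, here's a rough sketch: pull up to 5000 comments in 5 paged calls to the HN Algolia search API, then keep only the first 256 whitespace tokens of each comment before classifying. The endpoint and parameter names are my guesses at how one would do this, not the authors' actual code.

```python
# Sketch of the described pipeline: 5 calls x 1000 comments from the
# past month, each comment truncated to its first 256 tokens before
# classification. Params are assumptions, not the real implementation.
import time

ALGOLIA = "https://hn.algolia.com/api/v1/search_by_date"

def truncate_tokens(text: str, limit: int = 256) -> str:
    """Keep only the first `limit` whitespace-separated tokens."""
    return " ".join(text.split()[:limit])

def page_params(query: str, pages: int = 5, per_page: int = 1000):
    """Query params for each of the 5 paged calls."""
    since = int(time.time()) - 30 * 24 * 3600  # past month
    for page in range(pages):
        yield {
            "query": query,
            "tags": "comment",
            "hitsPerPage": per_page,
            "page": page,
            "numericFilters": f"created_at_i>{since}",
        }

# Usage (network call omitted):
#   import requests
#   for params in page_params("remote work"):
#       hits = requests.get(ALGOLIA, params=params).json()["hits"]
#       texts = [truncate_tokens(h.get("comment_text") or "") for h in hits]
#       ...classify each text...
```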
nothing
>>hobos in san francisco
nothing
>>accommodating my neurodivergence
error
>>is everyone I don't like hitler or just some people
nothing
Took me a minute to figure out what had happened, but I was able to submit a phrase. The response was
> Randomly sampled comment from the pulled data: Sorry, there was an error processing your message.
We’re actively working on making the tool more user-friendly and intuitive. As expected, most sentiment is neutral, but we plan to add a toggle view in future updates to enhance the experience.
2. The graphs now go full screen when they are clicked
3. Most comments will be neutral. This is kind of a quantitative way to see "the silent majority"
There's some nuance that is lost due to it classifying the comment as a whole as opposed to specific aspects of it, but it works for the most part. Thanks for trying it out!
For example, if you search for Bitwarden it ranks three comments as negative and all others as neutral. If I as a human look at actual comments about Bitwarden [1], there are lots of comments from people using and recommending it. As a human I would rate the sentiment as very positive, with some "negative" comments in between (that are really about specific situations where it's the wrong tool).
I've had some success using LLMs for sentiment analysis. An LLM can understand context and determine that in the given context "Bitwarden is the answer" is a glowing recommendation, not a neutral statement. But doing sentiment analysis that way eats a lot of resources, so I can't fault this tool for going with the more established approach that is incapable of making that leap.
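To make that leap concrete with a toy example: a classic lexicon-based scorer (word lists here are illustrative, not from any real tool) sums polarity words, so "Bitwarden is the answer" contains no scored words at all and comes out neutral, no matter how glowing it reads in context.

```python
# Toy lexicon scorer illustrating why non-LLM tools miss context:
# "Bitwarden is the answer" hits no polarity words, so score = 0.
POSITIVE = {"love", "great", "recommend", "glowing", "good"}
NEGATIVE = {"hate", "awful", "broken", "bad", "wrong"}

def lexicon_sentiment(text: str) -> str:
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(lexicon_sentiment("Bitwarden is the answer"))    # neutral
print(lexicon_sentiment("I love this great tool"))     # positive
```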
1: https://hn.algolia.com/?dateRange=pastMonth&page=0&prefix=tr...
Also, you could drastically improve styling on mobile. Lot of wasted space.
I tried "neumorphic design" based on the comment this replies to. It is classified as neutral.
Still better than making everything flat without shadows and making me guess where I can click, I guess.
1: https://www.alamy.com/stock-photo-finger-pressing-the-button...
2: https://www.alamy.com/close-up-of-clothes-washing-machine-bu...
Edit: just checked, this comment was analyzed as "Sentiment: neutral (Confidence: 79.56%)" on the topic of "neumorphic"
Addressing some points you made in the video:
1. Yes, this is definitely a V0.1. Thank you for taking that into account :D
2. Interactive graphs (custom colors, intensity, what trends to show, etc.) were something we thought might be useful but wanted to hear what others had to say before we invest the time into a feature nobody wanted
3. That super slow "Searching..." loading message should be showing updates for when it's scraping comments, when it's classifying, etc. Not sure why it's not. Some searches like "open source" are more likely to have a ton of comments to pull so it'll take longer.
4. Default mode changed to light mode because it looks better :^)
https://www.nngroup.com/articles/skeuomorphism/
https://www.nngroup.com/articles/flat-design/
>Neumorphism never quite made it mainstream because it comes with its own set of problems. The low contrast does not offer sufficient visual weight, making the experience not accessible. Additionally, it is difficult to determine clickability, as neumorphism is often used inconsistently on nonclickable and clickable elements.
Don't get me wrong, I still like the design and I think it's cool, but I understand the reasons why it never got popular.
As for mobile, I am but a humble backend dev, but I agree completely. Will put it on the roadmap. Thank you for your feedback!
> you can track trends with it
No, no, you can't.
Still, great idea and execution
> Ah yes, because blockchain is the 100% true source of ultimate truth.
The model can't detect sarcasm.
For example, while testing it on "Founder Mode" there were a couple comments that mentioned something like "I hate founder mode but I really really like this other thing that is the opposite of founder mode..." and then just continues for a couple paragraphs. It classified the comment as positive. While _technically_ true, that wasn't quite the intention.
We think there are some ways around this that can increase the fidelity of these models that won't involve using generative AI. Like you said, doing it that way eats a ton of resources.
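One cheap non-generative trick in that direction (my suggestion, not necessarily what the authors have in mind): classify only the sentences that actually mention the search term instead of the whole comment, so paragraphs praising something *else* don't pollute the score. The classifier itself stays whatever the tool already uses; this sketch only shows the filtering step.

```python
# Scope sentiment to sentences naming the search term, so "I hate
# founder mode. [two paragraphs praising something else]" scores the
# term on the first sentence only. Filtering sketch; the classifier
# that would consume these sentences is unchanged.
import re

def sentences_about(comment: str, term: str) -> list[str]:
    """Split on sentence-ending punctuation, keep sentences naming the term."""
    sentences = re.split(r"(?<=[.!?])\s+", comment)
    return [s for s in sentences if term.lower() in s.lower()]

comment = ("I hate founder mode. This other approach is wonderful though, "
           "let me spend two paragraphs on it.")
print(sentences_about(comment, "founder mode"))
# Only the first sentence survives, so the praise for the other
# approach no longer flips the term's sentiment to positive.
```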
So that we could compare terms based on this result metric: google vs microsoft, rust vs go, rust vs microsoft, etc
(Won't work for Go, as it's a common word in addition to the programming language, but anyway.)
Also, it seems like putting in the same phrase twice generates different graphs and results at least sometimes. So it’s difficult to use comparatively.
No, I don't think it is.
I tried "remote work" like the initial instructions recommended as an example. The graph it gave me showed large spikes of "neutral" sentiment with a few negligible bouts of negative sentiment and even smaller bouts of positive sentiment. The sample comment it gave was from a "Who Wants to be Hired" post where the poster demanded exclusively remote offers, which the tool classified as "neutral" (with 98.7% confidence!).
Very slick tool, but if the sentiment analysis itself doesn't really work well then I don't see what value this could have.
Bug report: I saw inaccurate results. I asked about “native apps” and got negative sentiment. This is contrary to my experience; afaik HN loves them.
The example comment[1] quoted “non-native apps” and is part of the discussion where people say they don’t like non-native apps.
Edit: Then I asked about non-native apps, got sentiment “neutral” and this comment (the one I’m editing now) as the example. Very unexpected!
[1]: >>41366882
I don’t think it ever gained traction, probably because people aren’t interested in creating an actual theory of sentiment that matches the real world.
[1]: https://github.com/clips/pattern/wiki/pattern-en#sentiment
I searched "apache" and the "randomly sampled comment" was a mystery:
> I think the author has a point with one-way doors slowing down the adoption of distributed systems. ...
I had to search google with that phrase to get the actual context. ( >>41363836 )
Which turned out to be about "Apache Beam" not the Http server.
If you remove the dark-gray border from the buttons it's even better!
> this comment was analyzed as "Sentiment: neutral (Confidence: 79.56%)"
I wonder what kinds of heinous things you'd have to write for it to be negative...
Almost comical that this comment is not analyzed as negative.
That's an interesting example because when I read it, it sounds to me like something slightly positive, or at least unlikely to be negative. If you had a negative opinion of Bitwarden, you probably wouldn't be storing stuff in it.
Coffee. We apparently hate "coffee" with a 79.99% confidence. Unless you ask it again, in which case we like it with a 98.67% confidence. And if you ask it again, it's 98.86% certain that we're neutral on the topic.
Same with "spam". Sometimes we like it, sometimes we don't. I guess it's bad in emails and good on musubis? Shrug.
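Pure speculation on my part, but the flip-flopping would make sense if each query samples a different subset of comments before aggregating: with a corpus that's near-evenly split, a small sample's majority label can flip from run to run. Toy demo with fake labels:

```python
# If each run samples a different subset of a near-50/50 corpus,
# the aggregated "verdict" can flip between runs. Fake data; this
# is a guess about the tool's behavior, not its actual code.
import random
from collections import Counter

corpus = ["positive"] * 48 + ["negative"] * 52  # near-even split

def majority(seed: int, k: int = 15) -> str:
    """Majority label of a k-comment sample drawn with the given seed."""
    rng = random.Random(seed)
    sample = rng.sample(corpus, k)
    return Counter(sample).most_common(1)[0][0]

labels = {majority(seed) for seed in range(20)}
print(labels)  # may contain one or both labels depending on the draws
```

Sampling the whole corpus (k = 100) always gives "negative" here, which is why aggregating over everything, rather than a small sample, would make results reproducible.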
https://www.cloudwisp.com/exploring-visual-basic-1-0-for-ms-...
That's completely changed in the last 18 months. All my colleagues in the industry have switched to LLMs. They're seeing accuracy as good as hand-coding (manual annotation, generally by college-educated coders) was getting, at scale.
Non-LLM sentiment tools were always a bit of a parlor trick that required cherry-picking to survive the demo. In almost every case, drilling into the actual statements revealed they were wrong on anything involving irony, humor, or even complex grammar. That's changed almost overnight.
I think the finding that hn is "neutral" about MBAs says all that's needed about accuracy here.