Whether odomojuli's post was written by a human, robot or dog is rather immaterial. It's the content of the post that makes it good or bad. It can be evaluated without knowing the author.
That comment indeed looks a lot like it was generated. It correlated a bunch of words, but it did not understand that the link between UI and AI is tenuous. It is probably one of the few comments where it is so glaringly obvious. There are likely many more generated comments around that simply went unnoticed.
This comment is not generated, as the links below are dated after the GPT-3 dataset was scraped.
[0] https://news.ycombinator.com/submitted?id=adolos
[1] https://adolos.substack.com/p/what-i-would-do-with-gpt-3-if-...
Web of trust is inevitable either way. I just hope it won't be owned by any single huge company. Most likely it will, in practice, be shared by a few of them, like the Internet or web standards.
After playing with GPT-3 for a while, though, I noticed I work a lot like it. Unrelated comments about web of trust are a great example.
Also, dude, I see you in almost every comment thread, how do you consume HN? Some clever scripts or lots of tabs and lots of refreshing?
It might be confirmation bias (or maybe we just like the same things); if you look at my history, I only leave a few comments a day.
Would it be possible to use GPT-3 to "beautify" existing prose without changing its meaning? Now that would be useful!
I know that GPT-3 is impressive, but I'm not as convinced as some of the other commenters here that it's a generated comment. If a similar comment were posted on a non-AI-related post, nobody would bat an eye.
Is there an equivalent to Hanlon's razor for AI? "Never attribute to a text-generation AI that which can be adequately explained by slightly nonsensical speech."
Not me. Not without full attribution, so I know it's a bot and whose. There is no AI-generated text without an agenda - implicit or explicit; benign or sinister.
Advancing the conversation is one thing, but when you can't tell the difference between a Russian GRU bot let loose to promote trigger words and some 4chan teenager who had a bad day at school, then the online forum as a mode of expression is dead.
A 4channer is entitled to their opinion however wrong. A bot acting like it's human needs to be hunted, killed, and erased.
Ever since the 2016 foreign-meddling-in-the-election news, I've seen people commenting that there must be tons of Russian bots/shills/astroturfers/etc. in comment threads where I see genuine disagreement. I'm sure both exist, but I suspect the dismissal of "'people/opinions' that disagree with me aren't real people/opinions" is more common than the actual act of fake commenting.
In a way the real story is that people are so eager to believe it that it didn't matter that it was untrue. Like Voltaire's God, if it didn't exist it was necessary to invent it.