Alan Turing's paper [1] was quite forward-thinking. At the time, most people did not yet consider men and women to be equal (let alone homosexuals).
I don't think it is so important whether a comment is written by a man, a woman, a child, or a <machine>, or some combination thereof. What is important is that the comment stands on its own, and has merit.
Pseudonyms (accounts) do have a role to play here. On HN, an account can accrue reputation based on whether its past comments were good or bad. This can help rapidly filter out certain kinds of edge cases and/or bad actors.
A Minimum Required Change to policy might be: accounts that regularly make false/incorrect comments may need to be downvoted or banned (more) aggressively, where previously we simply assumed they were making mistakes in good faith.
This is not to catch out bots per se, but rather to deal directly with the new failure modes they introduce. This particular approach also happens to be more powerful: it immediately deals with meatpuppets and other ancillary downsides.
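To make that concrete, here is a minimal sketch (not HN's actual moderation code) of what such a heuristic could look like: it tracks, per account, how many reviewed comments were flagged as false, and only stops assuming good faith once a pattern emerges. The account structure, thresholds, and action names are all hypothetical, illustrative values.

```python
from dataclasses import dataclass


@dataclass
class Account:
    """Hypothetical per-account track record kept by moderation."""
    name: str
    comments_reviewed: int = 0
    flagged_false: int = 0

    def record(self, was_false: bool) -> None:
        """Record the outcome of reviewing one of this account's comments."""
        self.comments_reviewed += 1
        if was_false:
            self.flagged_false += 1


def recommended_action(acct: Account,
                       warn_ratio: float = 0.3,
                       ban_ratio: float = 0.6,
                       min_sample: int = 10) -> str:
    """Suggest a moderation action based on an account's history of false comments."""
    if acct.comments_reviewed < min_sample:
        return "assume good faith"      # too little history to judge
    ratio = acct.flagged_false / acct.comments_reviewed
    if ratio >= ban_ratio:
        return "ban"                    # persistent falsehoods: treat as bad actor or unsupervised bot
    if ratio >= warn_ratio:
        return "downvote/warn"          # recurring problem: sanction more aggressively
    return "assume good faith"          # occasional mistakes still get the benefit of the doubt
```

The point of the thresholds is exactly the policy shift described above: isolated errors are still treated as honest mistakes, while a sustained pattern of confidently wrong comments triggers sanctions regardless of whether a human or a machine produced them.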
We're in the middle of a bit of a revolution in AI right now, and we might come up with better ideas over time too. Possibly we need to revisit our position and adjust every 6 months, or even every 3.
[1] https://academic.oup.com/mind/article/LIX/236/433/986238?log...
These days, of course, we use such things as IRC clients, Discord, web browsers, etc. instead of teletypes. If you substitute in these modern technologies, the Imitation Game still applies to much online interaction today.
I've often applied the lessons gleaned from this to my own online interactions with other people. I don't think I ever quite imagined it might start applying directly to <machines>!
This feels wrong for a few reasons. The generalized knowledge that an AI can express may be useful. But if it makes things up convincingly, someone who follows its line of thought may end up worse off for it. For all the shit humans say, it's their real human experience, formulated through a prism of their mood, intelligence, and other states and characteristics. It's a reflection of a real world somewhere. AI statements, in this sense, are minced realities cooked into something that may only look like a solid one. Maybe for some communities this would be irrelevant, because participants are expected to judge logically and to check all facts, but that would require keeping awareness at all times.
By “real human” I don’t mean that they are better (or worse) in a discussion, only that I am a human too, so a real human experience is applicable to me in principle and I could encounter it irl. The applicability of an AI's experience has yet to be proven, if the notion makes sense at all.
So as far as the spectrum of things moderation needs to deal with goes, AI contribution to discussions doesn't seem to be the worst of problems, and it doesn't seem like it would be completely unmanageable.
But while AI may not be an unmitigated disaster, you are quite correct that unsupervised AI might not be an unmitigated boon yet either.
Currently, if one does want to use an AI to help participate in discussions, I'd recommend keeping a very close eye on it to make sure the activity remains constructive. This seems like common courtesy and common sense at this time. (And accounts that act unwisely should be sanctioned.)
How is this different from folks getting convinced by "media" people that mass shootings didn't happen, that 9/11 was an inside job, or similar?