zlacker

[parent] [thread] 4 comments
1. nahuel+(OP)[view] [source] 2020-09-24 16:11:13
What do you think about a dystopia where GPT-3 / GPT-4 bots post comments to Hacker News, complete with references and links, without being distinguishable from real humans?
replies(2): >>082349+E2 >>search+BH
2. 082349+E2[view] [source] 2020-09-24 16:23:28
>>nahuel+(OP)
If indistinguishable, would that be a dystopia or a utopia[1]? At least this is Hacker News, not /r/totallynotrobots. Maybe if we gaze long enough into a procedural abyss, the abyss will gaze back?

https://xkcd.com/810/

Bonus clip: https://www.youtube.com/watch?v=DH76CZbqoqI

[1] if the line between dys- and u-topia depends upon prevalence of man-portable SIGINT devices: https://news.ycombinator.com/item?id=24069572

Further playing "bot or not?" we have the Stasi (human) vs NSA (automated): https://news.ycombinator.com/item?id=24470017

replies(1): >>nahuel+S4
3. nahuel+S4[view] [source] [discussion] 2020-09-24 16:32:22
>>082349+E2
I think we need to wait for the next iteration to get a totally indistinguishable one, but it's very impressive still ;)
4. search+BH[view] [source] 2020-09-24 19:52:00
>>nahuel+(OP)
The key to making a bot indistinguishable is to mix scripted patterns with machine generation.

For example, simply providing an alternative to a paywalled article is a recurring task people do here. It's easy to automate, doesn't raise eyebrows, and actually improves someone's perception of the profile if they were to do a quick check. Another one is providing alternatives to products: search Product Hunt or other sites for results, or congratulate someone on their product launch/Show HN, which again doesn't require contextual understanding to the same degree.
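
Rough sketch of the paywall part, using the Wayback Machine's public availability API (the posting step is left out, and the function name is just for illustration):

    import requests

    def find_archive_link(article_url):
        # Ask the Wayback Machine for the closest archived snapshot of the URL.
        resp = requests.get(
            "https://archive.org/wayback/available",
            params={"url": article_url},
            timeout=10,
        )
        snap = resp.json().get("archived_snapshots", {}).get("closest")
        # Return the snapshot URL if one exists, otherwise None.
        return snap["url"] if snap else None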

Big tech, philosophy, news media, etc. threads are predictable. The T5 and ELECTRA models from Google are good at filling in blanks (in contrast to GPT, which generates text in a forward-only fashion), so they can be used to produce unique sentences that follow a pattern. They are more coherent at the cost of less randomness.
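
Something like this gets you the fill-in-the-blank behaviour (minimal sketch using the Hugging Face transformers library and the public t5-base checkpoint; the comment template is made up):

    from transformers import T5Tokenizer, T5ForConditionalGeneration

    tokenizer = T5Tokenizer.from_pretrained("t5-base")
    model = T5ForConditionalGeneration.from_pretrained("t5-base")

    # T5 is pre-trained to fill sentinel spans (<extra_id_0>, <extra_id_1>, ...),
    # so a comment template can leave the "opinion" slots blank.
    template = "I switched to <extra_id_0> last year and the biggest problem was <extra_id_1>."
    inputs = tokenizer(template, return_tensors="pt")
    outputs = model.generate(**inputs, max_length=40, do_sample=True, top_k=50)
    print(tokenizer.decode(outputs[0], skip_special_tokens=False))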

Many posts on HN appear first on Lobsters, small subreddits, GitHub trending, and popular Twitter accounts. You could simply fetch those links at random intervals within a plausible timezone and post the ones not yet submitted here.
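
For the link-fetching part, something along these lines would do (assuming lobste.rs still exposes its hottest.json feed; the actual submission step is stubbed out):

    import random
    import time
    import requests

    def fetch_candidate_links():
        # Lobsters serves its front page as JSON; keep only stories with an external URL.
        resp = requests.get("https://lobste.rs/hottest.json", timeout=10)
        resp.raise_for_status()
        return [(s["title"], s["url"]) for s in resp.json() if s.get("url")]

    while True:
        title, url = random.choice(fetch_candidate_links())
        print("would submit:", title, url)  # stand-in for the real submission step
        # Sleep a random interval so posting times look like one person in one timezone.
        time.sleep(random.uniform(2 * 3600, 8 * 3600))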

You can target the demographic least likely to suspect it's a bot. HN is siloed into many small parts despite having the same front page. You can predict which users are likely to post in certain threads and what their age demographic is, e.g. anything Emacs. The HN database is available on BigQuery.
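
E.g., pulling the usual suspects for Emacs threads from the public dataset (sketch using the google-cloud-bigquery client; this only counts direct replies to stories, and the bigquery-public-data.hacker_news.full schema may have drifted):

    from google.cloud import bigquery

    client = bigquery.Client()
    query = """
    SELECT c.`by` AS commenter, COUNT(*) AS emacs_comments
    FROM `bigquery-public-data.hacker_news.full` s
    JOIN `bigquery-public-data.hacker_news.full` c ON c.parent = s.id
    WHERE s.type = 'story'
      AND LOWER(s.title) LIKE '%emacs%'
      AND c.type = 'comment'
    GROUP BY commenter
    ORDER BY emacs_comments DESC
    LIMIT 50
    """
    # Users who keep showing up under Emacs stories are a predictable audience to target.
    for row in client.query(query).result():
        print(row.commenter, row.emacs_comments)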

You can train a canned response to suspicious comments calling it a bot: "That hurts. I am not a native English speaker. Sorry if I offended you." or "Please check the guidelines..."
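
The deflection itself barely needs a model (toy sketch; trigger words and replies are just illustrative):

    import random

    ACCUSATION_WORDS = {"bot", "gpt", "generated", "automated", "shill"}
    CANNED_REPLIES = [
        "That hurts. I am not a native English speaker. Sorry if I offended you.",
        "Please check the guidelines before accusing other users.",
    ]

    def maybe_deflect(comment_text):
        # Reply with a canned deflection only if the comment looks like a bot accusation.
        words = set(comment_text.lower().split())
        if words & ACCUSATION_WORDS:
            return random.choice(CANNED_REPLIES)
        return None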

There are many techniques to make a sophisticated bot. ;)

https://ai.googleblog.com/2020/02/exploring-transfer-learnin...

https://github.com/fuzhenxin/Style-Transfer-in-Text

https://ai.googleblog.com/2020/03/more-efficient-nlp-model-p...

https://console.cloud.google.com/marketplace/details/y-combi...

It wouldn't surprise me if a not insignificant number of users here were bots.

I am more interested in the question: does the difference even matter, especially in text, as long as a bot user is the more useful user?

replies(1): >>bryan_+QN1
5. bryan_+QN1[view] [source] [discussion] 2020-09-25 06:12:05
>>search+BH
So you've noticed "communism Sundays" around here too eh?

I don't know where to go to meaningfully engage with humans anymore. It's just smarter and smarter bots
