Within the next 10 years, and maybe much sooner, the vast majority of content on FB/Twitter/Reddit/LinkedIn will be completely fake. The "people" on those networks will be fake as well. Sure, there are bots today, but they're not nearly as good as what I'm talking about, and they don't exist at the same scale. Once that happens, the value of those networks will rapidly deteriorate as people seek out more authentic experiences with real people.
IMO, there's a multibillion-dollar company waiting to be founded to provide authenticity verification services for humans online.
If I can interact with bots that emulate humans with that degree of realism, why would I care? You could be a bot, the whole of HN could be bots. I don't really care who wrote the text if I can get something out of it. I have no idea who you are, and I don't even read usernames when reading posts here on HN.
At its core this seems like a moderation issue. If someone writes bots that just post low-quality nonsense, ban them. But if the bots are merely wrong or not especially eloquent, I can point you to Reddit and Twitter right now and you'll see plenty of that same low-quality nonsense, all posted by actual humans. In fact, you can go outside and talk to real people and most of what you hear is nonsense (me included).
It seems like crowd-sourced moderation is probably the only thing that will work at scale. I've always wondered why Reddit doesn't rank comments by default according to someone's overall reputation within a subreddit and then by the relative merits of the comment on a particular subject. Getting the weighting right would be hard, but it seems like that would be the best way to discourage low-quality comments and outright trolling.
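For concreteness, here's a rough sketch of what that blended ranking could look like. To be clear, the field names, the log-scaling, and the 0.3 weight are all assumptions made up for illustration, not anything Reddit actually does; tuning those numbers is exactly the hard part.

```python
# Hypothetical sketch: rank comments by blending the author's reputation
# within a subreddit with the comment's own vote score. All names and
# weights are illustrative assumptions, not Reddit's actual algorithm.
import math
from dataclasses import dataclass

@dataclass
class Comment:
    author_subreddit_karma: int  # author's accumulated karma in this subreddit
    upvotes: int                 # votes on this specific comment
    downvotes: int

def rank_score(c: Comment, reputation_weight: float = 0.3) -> float:
    """Blend author reputation with the comment's own merit.

    Log-scaling the karma keeps long-time users from completely drowning
    out newcomers; the 0.3 weight is an arbitrary starting point that
    would need tuning, as noted above.
    """
    reputation = math.log1p(max(c.author_subreddit_karma, 0))
    merit = c.upvotes - c.downvotes
    return reputation_weight * reputation + (1 - reputation_weight) * merit

comments = [
    Comment(author_subreddit_karma=12000, upvotes=3, downvotes=1),
    Comment(author_subreddit_karma=50, upvotes=40, downvotes=2),
]
for c in sorted(comments, key=rank_score, reverse=True):
    print(round(rank_score(c), 2), c)
```

Even a simple blend like this shows the trade-off: weight reputation too heavily and established users dominate every thread; weight it too lightly and it does nothing to deter low-effort posting.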