> We use the combination of your Facebook and LinkedIn data plus your About Me and Photos to ensure we are building a balanced, high-achieving and diverse community. Our screening algorithm looks at indicators like social influence, education, profession, industry, friends in The League, number of referrals you've made to your network, as well as supplemental data like what groups you belong to, events you've attended, interests you list, and preferences.
Absolutely terrifying.
It makes me wonder how many more things I'll never get to participate in because I've deleted my accounts and avoid social media.
I'm fairly certain that if a person is highly active on social media, such a system could produce a better diagnosis than most people get when they see a professional, if only because the quality of psychodiagnosis is so poor: it is often treated as a box-ticking exercise to satisfy insurance bureaucrats, common conditions go undiagnosed, there are fads for certain rare conditions, etc.
And, at least here, they contained last name and one letter of the first name. No information on gender/interests/articles read/ads clicked/locations visited/family/friends/devices used/apps installed/items bought/...
...then the diagnosis of one of their problems sounds quite trivial.
This is what we're asking for. I am refusing to divulge information about me I don't want to share. Other people are building whatever on top of that data. I can hardly complain about lack of inclusion when I am the one refusing to feed their robots.
If you want people at Cheers to know your name, you... have to tell them your name. I'm fine being anonymous. It sounds like maybe you're more conflicted.
1. "Could" produce a better diagnosis. Not guaranteed. And better than what? How likely is it to really deliver a better result than appropriately trained specialists? 2. "Scam to satisfy insurance bureaucrats". And if you do it digitally, won't the results find their way to unintended recipients?
The undercurrent of this thread - and the original post - is growing awareness of the dystopian disaster that has grown out of "free" social media. So it's not surprising - to me, at any rate - that the general sentiment here is to be suspicious of any adjacent use.
What makes you so sure? (This is a serious question, not rhetorical.)
What I don't understand, though, is why I also need to share my browsing history with faceless American corporations that sell my data for profit. That seems unnecessary for the main point (psychodiagnostic software).
Imagine how your tech could be used for evil and how profitable that would be. It could be a 2nd or even 3rd order effect, even.
[1] Film focuses on a college team building something they think is cool but really is a key part of a weapons system.
I'm not in the industry, but I am very curious to know whether we're already in the conditional-execution phase of surveillance/ad-serving/profile-updating: is there an idea [yet] of serving a challenge, then recording how (and whether) it is engaged with, with automated graph traversal to "look closer"... all offered stochastically?
Part of that, put simply: are we now getting A/B tests run on us explicitly, rather than merely implicitly?
(Personally, I'm 100% off Meta products and TikTok—but am leaking through LinkedIn and, regrettably, Google...)
I myself have a condition which 5-10% of people have. As a child, I had two psych evaluations that were very high quality for the time; the evaluators observed all the signs and symptoms (particularly in the first one) but failed to draw a line between them.
Since then I've seen therapists maybe six times in 30 years (sometimes the same one), and the diagnosis was always "adjustment disorder with ...". There was some truth to that, in that each time some very ordinary kind of stress was exacerbating my condition, but in reality there was always a chronic aspect to it.
I've known numerous people who have severe mental illness (way worse than the quirk that got me kicked out of elementary school) and contact with the psychiatric system, and who never got a conclusive diagnosis. The first line for a lot of people is to see a primary care practitioner, get diagnosed with either "anxiety" or "depression", and be prescribed the same medication in either case. A referral to an actual psychiatric nurse practitioner who is taking patients is almost impossible to get in the US in 2023, never mind an actual psychiatrist.
I've been thinking about buying a new car, but I'm very aware of how much tracking/telematics they include nowadays... so I decided to search "$manufacturer disable telematics". Every single thread I found was full of people saying variants of "Why do you even want to do that lol" and "Looks like somebody is doing something illegal".
Every time I see stuff like that, I'm tempted to jump in and share a plethora of examples of how tech companies misuse your data, don't protect it properly, sell it to all sorts of dubious actors, and, most importantly, use it for advertising, which I consider nothing more than gaslighting to get you to buy stuff, and absolutely despicable.
I have to stop myself because I know I wouldn't get through to them, and I would probably sound crazy.
But it's the best antidote to FOMO, and so its central theme, "in praise of the unlived life", is worth a mention. There's a lot of shit you'll be glad you missed out on, but felt cheated about at the time...
That bullet that whizzed past your head... you missed out on.
That plane you missed... that crashed... you missed out on.
That medication they wouldn't give you... that turned out to have a lethal side effect...
These are silly examples compared to the sumptuous theme Phillips develops: so much of our lives is a set of misplaced expectations and values, given to us by others, that rarely check out in the long term. It's very affirming to get beyond confirmation/survivorship bias and retrospective rose-tinted goggles.
Being "excluded" from a group of people who are the sort who would give their details to BigTech social networks may turn out to be a blessing in ways you can't see yet.
[edit: moved, sorry I replied to wrong comment]
OKCupid is actually a site some people reported as being the "better kind" of dating site, because they're geared toward successful LTR rather than hookup. The dating space is actually full of different interaction and match models that sometimes people don't seem to understand.
Some of the issues around risk, identity and power asymmetry are covered here [0]
In an easily searchable database?
Sooo tempted to go Godwin here and mention a nice use of computers from the late 1930s...
Even if you’ve never had an account on social media, chances are Facebook, LinkedIn, etc. know your name, email address, age, social graph, etc. because other people have shared their address book with them. Other users also might have tagged photos with your name, after which those sites concluded “that must be the same officeplant that’s in their address book”.
I expect LinkedIn to suggest people to connect because you’re their mutual friend, for example.
I'm trying to simplify that, with an ear for contradiction:
If P, then group A loses more; if NOT P, then group NOT-A loses more. With P -> L, where L = some loss of privacy.
(Okay it's late and I'm clutching at it a little, but something doesn't ring true)
It seems like a formulation of "network effect" on the surface. But if P => L, it can't be the same L on the right-hand side, no? For the group that is the complement of A, their L has to be a gain. Or else they are not playing the game well/optimally.
But I don't understand how personalized ads are harmful. If you don't like the product, just don't buy it? What am I missing?
Personally, I only buy products that I really want or really need, so if an ad pops up that convinces me to buy, then it's done me a huge favor. But this almost never happens. Usually, the ads are terribly targeted and don't show any clue of understanding who I am as a person. To me, it seems the problem is that they're not targeted enough, rather than too targeted.
I use LinkedIn. I hadn't used it in years; now I'm back because that's where the headhunters are and where I can probably find a job. After I find a new job, I'll switch to zombie mode again and won't use it until I need it again.
So yeah, the reason I use LinkedIn is to not miss a job offer. I don't have a reason to use FB, though.
Seriously, there's nothing wrong with sounding crazy. I mean look at the world. What do you have to lose?
It "sure would help a lot" to go to such a place? Because you're constantly being bothered by total strangers at rates far in excess of the average? Because the first people the police interview as murder suspects are the ones who don't know the victim? No, my friend.
Of course now you can give out your name to total strangers many miles away, with a degree of efficiency undreamt-of in the 80s, yet not even have any fun times spent drinking with those people, so...
So the choice to act is not as free as you describe.
You seem to think only about cases where personalized ads are used for products, but the most harm comes when people use this to influence groups. The same way they personalize an ad for a product that seems the perfect fit for your current situation, the same mechanism/algorithm can personalize a message in a way that influences you just a bit. Then tomorrow another small bit, and so you find yourself (a general "you", not you specifically) hating groups of people you've never encountered.
Intelligence, IQ, or whatever rational high points you have will not protect you from this over a long period of exposure.
Only in the sense that I'm mad that it's hard to get any good new technology that isn't a privacy nightmare.
I see Cool App #354 and think it looks fun to use, but I am only allowed to use it if I give up my privacy. Since I don't want to do that, Cool App #354, which doesn't need all (or perhaps any) of that data to do the functions I like, is something I can only watch friends use.
The famous example I remember from growing up was a teen girl whose parents found out she was pregnant from a personalized (mailed) Target ad: https://www.forbes.com/sites/kashmirhill/2012/02/16/how-targ... . There seems to be some skepticism in later articles about whether this is actually how her parents found out, but only because she told them first. They could have found out from the ad.
https://www.liebertpub.com/doi/full/10.1089/big.2017.0074 is a more detailed study of how Facebook likes can out people. It looks like the "cloaking" solution that the authors propose actually makes the model more accurate. From the article: "false-positive inferences are significantly easier to cloak than true-positive inferences".
If you're the only one who knows what ads you see, that might still be okay, but if a platform can make these kinds of inferences to show you ads, it can use the same data in other ways. At the very least, it might leak this information to other users by recommending people you may know, etc. You might also reveal what kind of personalized ads you get if you ever browse the web someplace where other people can glance at your screen.
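To make the inference mechanism concrete, here's a toy sketch. Everything in it is invented for illustration (the page names, the weights, the prior); it is not from the paper, just the general naive-Bayes-style scoring such models use: each like nudges a log-odds score, and "cloaking" a weakly predictive like barely moves the resulting probability, while only hiding the strongly predictive one does.

```python
from math import exp

# Hypothetical log-odds contributions of page likes toward some sensitive
# attribute. These numbers are made up for illustration only.
LIKE_WEIGHTS = {
    "page_a": 2.0,   # strongly predictive like
    "page_b": 0.4,   # weakly predictive like
    "page_c": 0.1,   # weakly predictive like
}

def infer(likes, prior_logodds=-2.0):
    """Sum the log-odds of each like, then convert to a probability."""
    score = prior_logodds + sum(LIKE_WEIGHTS.get(l, 0.0) for l in likes)
    return 1 / (1 + exp(-score))

full          = infer(["page_a", "page_b", "page_c"])  # ~0.62
weak_hidden   = infer(["page_a", "page_b"])            # ~0.60: cloaking the
                                                       # weakest like barely helps
strong_hidden = infer(["page_b", "page_c"])            # ~0.18: only removing the
                                                       # dominant signal works
```

The point of the sketch: what you'd instinctively hide (the obvious stuff) and what actually drives the model's inference need not be the same likes.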
I'm not talking about the information they ask me to provide. That's a drop in the bucket and is also under my control to disclose or not. I'm talking about all the other shit apps hoover up without permission.
Or, if you could, would you mind rewriting it in English, please?
You wouldn't believe how irrelevant the ads I get are to me.
https://en.wikipedia.org/wiki/Rosenhan_experiment
This one is more positive, but it checks whether different diagnosticians get the same answer:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5980511/
And if that were applied to the "Thud" experiment, you'd have poor diagnoses with a very high kappa (interrater agreement).
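Here's a toy illustration of that point (the counts and labels are invented, not from the Rosenhan study): two raters who give identical but mostly wrong diagnoses score a perfect Cohen's kappa, even though their accuracy against ground truth is terrible. Agreement and correctness are different axes.

```python
from collections import Counter

def cohen_kappa(a, b):
    """Cohen's kappa: interrater agreement corrected for chance agreement."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n            # observed agreement
    ma, mb = Counter(a), Counter(b)
    p_e = sum((ma[l] / n) * (mb[l] / n)                    # chance agreement
              for l in set(a) | set(b))
    return (p_o - p_e) / (1 - p_e)

# Ten pseudopatients, all actually healthy (the "Thud" setup):
truth  = ["healthy"] * 10
# Two clinicians independently give the same, mostly wrong, labels:
rater1 = ["psychotic"] * 9 + ["healthy"]
rater2 = ["psychotic"] * 9 + ["healthy"]

kappa    = cohen_kappa(rater1, rater2)                     # 1.0: perfect agreement
accuracy = sum(x == y for x, y in zip(rater1, truth)) / 10 # 0.1: mostly wrong
```

So a high kappa tells you the raters share a convention, not that the convention tracks reality.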