Why Data Sharing & Privacy Controversies Aren’t Killing Social Media Platforms

Casey Fiesler
7 min read · Mar 22, 2018


Like? Dislike? The age-old question. (Photo CC0)

In 2016, WhatsApp announced that it would be sharing data with Facebook in order to improve Facebook ads and user experience. In 2017, we found out that the email unsubscribe service Unroll.me was selling aggregate data about users’ Lyft receipts to Uber. Now in 2018, investigative reporting has revealed that the data firm Cambridge Analytica (whose methods are widely considered to have been a contributing factor to Donald Trump’s presidential win) made use of data collected from Facebook under false pretenses. And this is not even an exhaustive list of controversies that fall under an umbrella that seems like the new normal: internet platform users finding out that their data is being used in ways that they don’t expect or intend. The result is a lot of bad press and a lot of angry or disillusioned users. But why hasn’t this problem led users to flee these platforms?

In a study described in a recent paper forthcoming in CHI 2018, my co-author Blake Hallinan and I examined public reactions to the WhatsApp and Unroll.me controversies via an analysis of comments on news articles.* In the paper, we focus on how attitudes towards these data sharing controversies reflect visions of responsibility towards privacy (platform or user?), strategies for privacy protection, and how those two things interact. Much of what we found tracks to what we know from interview, survey, and lab-based studies of privacy attitudes, in essence validating these concepts “in the wild.”

This work keeps coming to mind for me during the current discourse around the Cambridge Analytica/Facebook data sharing controversy. I have seen many calls to action for everyone to just quit Facebook. Shouldn’t this be the final straw?

There has been a lot of great prior work on social media non-use (much of it focusing on Facebook in particular) that covers motivations for abstention or leaving, such as addiction, banality, external pressure, and of course, privacy or data misuse. Our study covers the latter specifically, and there are some major themes throughout our paper, along with things we know from prior work, that suggest it’s not as simple as “just quit Facebook.”

Opportunity cost. Comments in our dataset covered how leaving Facebook meant “miss[ing] out on a trip to the pub” or how uninstalling WhatsApp would mean “becoming an antisocial miserable sod with no friends.” However, it is important to remember that Facebook in particular is more than just a place to post pictures of your kids, invite people to parties, and read news articles. It has become heavily integrated into both the social and work lives of many people. Facebook groups, for example, have become a popular (or sometimes the only) communication mode for many different kinds of organizations. I strongly encourage my graduate students to be on both Twitter and Facebook; at least in my field, social media can be a powerful tool for meeting people and learning about developments and opportunities.

This means that not using these platforms comes with lost opportunities, and that opting out requires being both willing and able to bear that opportunity cost. A 2012 paper makes this point well: not everyone can bear those costs. I think it’s more important for a grad student to be on social media than a senior faculty member, for example. Also keep in mind that in some developing countries, Facebook practically is the Internet for many people. You can consider all of this as a cost-benefit analysis. Similarly, our paper shows that commenters who were ambivalent about the Unroll.me data selling controversy typically thought (a) that the damage/cost was not significant; and (b) that they liked the service enough to bear that cost.

Perceptions of harm and expectation (non)violations. Another explanation for sticking with a platform is that you think that whatever is happening isn’t that bad. Some of the commenters around Unroll.me, for example, said things like “my data is pretty boring anyway.” In other words, what harm could come from selling it? Why should I care if Uber knows about my Lyft receipts? This is something I hear a lot: why would anyone care about my photos of coffee and cats? To some extent, this might be right, and to some extent, this represents simply not knowing how data can be used. The Cambridge Analytica story is a great example; in the original paper that inspired the data collection, the authors warned that “the predictability of individual attributes from digital records of behavior may have considerable negative implications,” including threats to “well-being, freedom, or even life.” Your data on its own might be boring, but often it can be combined in ways you might find troubling. And if you don’t know all the ways your data can be used, you can’t object to those uses.

The reaction to the Cambridge Analytica scandal, for example, shows that it does matter how your data is used. Some uses are expected, and some aren’t. Our paper tackles the concept of “you are the product,” a phrase that appeared so often in our data that there were multiple threads addressing the origins or meanings of the concept. Commenters typically used this concept in a “well, duh” fashion, expressing disbelief that anyone could possibly think companies aren’t monetizing their data: e.g., “no such thing as a free lunch.” These comments also echoed the theme of user responsibility for privacy; if you don’t realize that you are the product, and you don’t bother to read the TOS and privacy policy, then any use of your data you’re unhappy about is your own fault. In contrast, others placed the responsibility on the platform to be clear and transparent about its policies (since, as we know, TOS can be incomprehensible); one even described this “you should have read the TOS” attitude as a kind of “victim blaming.”

This highlights two key differences between the controversies we examined and what happened with Cambridge Analytica, however, that point to expectation violations: (1) You might expect your data to be used for advertising that benefits a platform, but you might not expect it to be used for more nuanced manipulation and/or for political purposes; and (2) since the app (at least, as far as I can understand from reports) purported to be collecting data for academic research, the fact that it was used otherwise was a violation of expectations. This wasn’t an “if you’d read the TOS you’d have known” situation; here, it seems users were deliberately deceived. So one reason that other kinds of data sharing controversies haven’t led to users fleeing platforms is that many of them already expected those kinds of uses of their data.

Learned helplessness. Like those who say “well, just read the TOS!,” many people may expect that their data can be used in certain ways and will either opt out of the platform or take measures to protect their privacy. However, others expect the worst while also resigning themselves to it. Learned helplessness is a concept from psychology (rooted in some really sad experiments with dogs, sadface) that describes a behavioral reaction to “repeatedly painful or otherwise aversive stimuli which [an animal] is unable to escape or avoid.” A 2014 paper applied this concept to privacy attitudes, and we saw evidence of it in our data as well. There are cases in which people would like more privacy but have resigned themselves to a life without it, either because they have poor opinions of the companies or because they just assume that privacy is a dead concept. For example, one commenter resignedly pointed out that they just assume every piece of information they put online will be sold to the highest bidder. And if you assume that this is the case everywhere, not just on Facebook, then why not use Facebook if you’re not going to have any privacy regardless?

This is, of course, a non-exhaustive accounting of the reasons why people might stick with platforms even when they’re upset by data sharing controversies, but it covers the major themes we saw in our previous study (along with other findings not as closely related to this question).

An obvious follow-up question: is there actually a problem, then, or can these platforms just keep doing whatever they want? I would say that there is still a tipping point. Perhaps most importantly, opportunity cost still implies a cost-benefit analysis. This balance will shift when: (a) the costs become even higher, or more personal; and/or (b) the benefits become lower because viable alternatives exist. My ongoing research about platform migration over time suggests that bad policy decisions or design changes can drive people away from a platform, but that mass migration is unlikely before there is a reasonable place for people to go. And part of what makes an alternative social media platform reasonable is a critical mass of friends moving there. This of course creates a chicken-and-egg problem, which is perhaps why we’ve seen big pushes for Facebook alternatives (e.g., Google+, Ello) largely fail.

In sum, privacy attitudes are nuanced, varied, and intricately tied to the real and perceived benefits of social media use (which are also nuanced and varied). These controversies are important to understand, and it is also important that platforms do better, both in protecting privacy and in being transparent about how data is used. This is especially critical because campaigns to convince everyone to just stop using Facebook are unlikely to succeed on their own.

Citation:

Fiesler, C. & Hallinan, B. (2018) “We Are the Product”: Public Reactions to Online Privacy and Data Sharing Controversies in the Media. In Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI). Montreal, Canada.

* As an aside from the topic of this article, understanding public opinion and attitudes through news comments is a method we used in another recent paper about grief policing. This method brings up interesting ethical questions about qualitative research on public data (particularly with respect to privacy issues and verbatim quotes). Both papers have ethical considerations sections that lay out some of our thoughts on this.
