The Facebook study was ethical. But is Facebook itself ethical?

Facebook carried out a study in which it manipulated the emotions of Facebook users. Many have questioned whether this was ethical research. It’s worth looking at closely, because it touches on fundamental questions of research ethics.

Here’s what Facebook did. A critical background fact is that Facebook filters what it shows you in the News Feed, based on a proprietary algorithm. The experimenters manipulated the algorithm in the News Feeds of more than 600,000 Facebook members, selectively filtering the posts that people saw to decrease either the positive or the negative emotional content of the feed. Here is what happened:

When positive posts were reduced in the News Feed, the percentage of positive words in people’s status updates decreased by B = −0.1% compared with control [t(310,044) = −5.63, P < 0.001, Cohen’s d = 0.02], whereas the percentage of words that were negative increased by B = 0.04% (t = 2.71, P = 0.007, d = 0.001). Conversely, when negative posts were reduced, the percent of words that were negative decreased by B = −0.07% [t(310,541) = −5.51, P < 0.001, d = 0.02] and the percentage of words that were positive, conversely, increased by B = 0.06% (t = 2.19, P < 0.003, d = 0.008).

That is, when they selectively reduced the posts with negative content, people exposed to the less negative feed made more positive and fewer negative posts. When they selectively filtered out positive posts, users produced more negative and fewer positive posts. (However, Dylan Matthews argues that the methodology of the study was too weak to support this inference.)
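To see why these effects are so small despite the impressive-looking P values, you can back the standardized effect sizes out of the reported test statistics. Here is a minimal sketch in Python, using the common approximation d ≈ 2t / sqrt(df) for a two-sample t test; the equal-group-size assumption behind that approximation is mine, while the t values and degrees of freedom are the ones quoted above:

```python
import math

def cohens_d_from_t(t: float, df: int) -> float:
    """Approximate Cohen's d from a two-sample t statistic,
    assuming roughly equal group sizes: d ~= 2t / sqrt(df)."""
    return 2 * t / math.sqrt(df)

# Positivity-reduced condition: decrease in positive words.
print(cohens_d_from_t(-5.63, 310_044))  # about -0.020, matching the reported d = 0.02

# Negativity-reduced condition: decrease in negative words.
print(cohens_d_from_t(-5.51, 310_541))  # about -0.020, matching the reported d = 0.02
```

With more than 300,000 observations per test, even a d of 0.02, a fiftieth of a standard deviation, easily clears P < 0.001. The tiny P values reflect the enormous sample, not a large effect on anyone's mood.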

So, did this study violate research ethics? There is a legitimate concern that the Facebook members who participated did so without being informed that they were in an experiment and without having consented to it. Ordinarily, participation in research requires informed consent. However, the Common Rule governing US research ethics says, at 45 CFR 46.117, that

(c) An IRB may waive the requirement for the investigator to obtain a signed consent form for some or all subjects if it finds…

(2) That the research presents no more than minimal risk of harm to subjects and involves no procedures for which written consent is normally required outside of the research context.

So, did the research present no more than minimal risk of harm? As you may have noted looking at the results, the effects of the manipulation on the participants’ posting behavior were trivially small. Being in an experimental condition changed the number of emotion-related words in your Facebook posts by only fractions of a percent. Lots of things that we encounter in our everyday lives have much deeper effects than that, without requiring procedures for written consent. For example, would you feel obliged to ask a friend’s consent before telling her that your dog was injured by a car? You may well ask, however, how the researchers could have known before they conducted the experiment that there would be only a small likelihood of harm. I assume that they either had pilot data about the likely sizes of the effects, or that they could estimate those effects from the previous social psychology literature. Estimating these harms is among the requirements for getting a study approved by an IRB, and the IRB must determine that the research is safe before it is conducted.

So despite some overwrought claims, I think it is unlikely that the Facebook study violated existing standards for research ethics. You may disagree, but if so, I suspect what you really believe is that our research ethics standards are too lenient about the kinds of studies they allow.

Nevertheless, there are other important ethical questions raised by the study.

First, it’s interesting to consider what made this study “human subjects research” in the first place. You might think that the Facebook study was “research” because it experimented on human beings. This is not the case. Facebook, along with Google and countless other websites, is constantly experimenting on you by systematically varying aspects of their formats and contents to determine what will best elicit your pageviews and clicks. So long as these experiments are solely for their private corporate use and are never published, these experiments do not constitute regulated human subjects research. It is only because Facebook published this work and was transparent about what it was doing — at least compared to standard corporate practice — that it was regulated. The fact is that our norms for scientific research that will be published are far stricter than those for corporate research that will not be published. You are not alone if this strikes you as nonsensical.
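For concreteness, routine corporate experimentation of this sort often works something like the sketch below. This is a generic, hypothetical illustration in Python, not Facebook’s actual code; the function, experiment name, and bucketing scheme are my own invention:

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants: list[str]) -> str:
    """Deterministically bucket a user into one arm of an experiment.

    Hashing (experiment, user_id) gives each user a stable,
    pseudo-random assignment without storing anything per user.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# On every page load, users are silently split across arms, and their
# subsequent clicks and pageviews are compared. No consent form in sight.
print(assign_variant("user-12345", "feed_ranking_v2", ["control", "treatment"]))
```

Because such tests are run purely for internal product decisions and their results are never published, they fall outside the regulations that governed the Facebook study.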

Second, my sense is that what upset many people about the Facebook study was not a possible violation of research ethics. Instead, the study revealed something about Facebook that seemed to violate their sense of privacy and autonomy, regardless of whether the study was research.

Many people do not understand that the News Feed does not give you an unfiltered view of what your Facebook friends are posting. The News Feed is engineered to present content that reinforces your continued use of Facebook. Is this a problem? Facebook might argue that it is learning about and responding to your preferences about what you want to see. But you could also say that the operation of the News Feed resembles how casinos manage the odds of slot machines so that you will keep on feeding them coins.

That the News Feed selectively mediates our conversations with our friends violates our intuitions about social interaction. When we are interacting with others, we experience ourselves as being in control of how we conduct that interaction. It is an unwelcome surprise that the mediating technology not only affects that interaction but is also being used by Facebook to steer us. It is particularly upsetting, I think, when it is our emotions that are being manipulated. We experience our emotions as who we are at the most intimate level, and the covert manipulation of our emotions seems like a violation of our personhood.

The big question that the Facebook study poses, then, is whether we want to be part of Facebook at all. We have choices about which social media we want to use. For now, I’m staying on Facebook because, so far, Facebook’s threat to personhood seems small. The effects on our emotions in the Facebook study were trivial. Based on the study, I don’t fear that Facebook could be used to orchestrate the Two Minutes Hate. But the study shows that we need to be vigilant.

@Bill_Gardner

PS: Aaron pointed out to me that there are other important research ethics questions about the Facebook study. First, he noted that when the researchers were confronted with questions, they “deferred to Facebook” to discuss them. Apparently, they may have had to get clearance from the company to talk about the study. It’s a serious problem when corporations limit their employees’ ability to discuss a study. Second, he questioned whether the researchers had an assurance from Facebook that the company would not interfere with the results or methods in any way. It is essential that researchers be free to report whatever they find when they work for a company or are sponsored by one.

UPDATE. It has been reported that Facebook carried out this experiment without prior IRB approval (see, for example, here). In this post, I discussed the possibility that an IRB could have waived informed consent because of the very low risk posed by the intervention. However, if an IRB did not approve the study before it was carried out, this waiver could not have occurred. This is an ethical problem with the study. My view is that it gives increased reason for having a single ethical standard that applies to both academic scientific research and industrial research involving human subjects.
