
The Facebook Loophole

As corporations become data powerhouses, will researchers be tempted to evade ethics reviews by offloading data collection to them? (Reblogged from Medium)

The media storm over Facebook’s recently published emotion contagion study has given the public an unlikely primer in research ethics: It is refreshing to find TechCrunch or Forbes discussing esoteric matters like “informed consent” or “Institutional Review Boards (IRBs)”, otherwise confined to the late-night programming of public broadcasters and the everyday sorrows of academics.

But beyond the question whether the study was legal (probably), ethical (your call), or a smart thing to do (…), it raises a bigger question: Is it a glimpse into an ungainly, gaping hole in current research ethics, opened by industry research and cozy industry-university relations?

The Cornell Double Whammy

A lot of the present kerfuffle revolves around the two university researchers, Jeffrey Hancock and Jamie Guillory, who co-authored the article and were employed at Cornell University when the study was conducted. As such, when they run studies with human subjects, they are typically subject to review and approval by an IRB, which typically requires obtaining informed consent from all participants. A nuisance for sure, but a critical safety check to ensure that the freedom of research doesn’t veer into things like the Milgram experiment, the Tuskegee syphilis study — or people freaking out that the social network they use for lack of alternatives unwittingly manipulated their emotions. Many academics feel that no IRB in the world would have let the Facebook study happen as it did, certainly not without informed consent. So how could it still happen?

Enter Cornell University, which on June 30 released a curt media statement:

[The involved researchers] analyzed results from previously conducted research by Facebook into emotional contagion among its users. Professor Hancock and Dr. Guillory did not participate in data collection and did not have access to user data. Their work was limited to initial discussions, analyzing the research results and working with colleagues from Facebook to prepare the peer-reviewed paper […]. Because the research was conducted independently by Facebook and Professor Hancock had access only to results — and not to any data at any time — Cornell University’s Institutional Review Board concluded that he was not directly engaged in human research and that no review by the Cornell Human Research Protection Program was required.

Cryptic? Here’s the background: The US Federal Policy for the Protection of Human Subjects (“Common Rule”), which Cornell (and most other US research universities) abides by, requires that studies involving human subjects be reviewed and pre-approved by an IRB. The media statement appeals to two exemptions from the Common Rule — a legalistic double whammy:

1. “Look, Ma, no hands!” - That little word “engage”

Technically, Cornell argues, their researchers weren’t even “engaged in human research”. According to the non-binding guidelines of the US Office of Human Research Protections,

an institution is considered engaged in a particular non-exempt human subjects research project when its employees or agents for the purposes of the research project obtain: (1) data about the subjects of the research through intervention or interaction with them; (2) identifiable private information about the subjects of the research; or (3) the informed consent of human subjects for the research.

The study article itself states in its footnotes that the university researchers “designed research” and “wrote the paper” — the research was “performed” and data “analyzed” only by the Facebook employee.

2. Found Footage: “existing data”

Note the phrase “previously conducted research” in the media statement. The Common Rule states that research is exempt from IRB review and approval if it involves

“the collection or study of existing data, documents, records, pathological specimens, or diagnostic specimens, if these sources are publicly available or if the information is recorded by the investigator in such a manner that subjects cannot be identified”.

Thus, the study could have gotten around IRB approval and informed consent if Facebook had already done the study on its own as part of its constant service tinkering, and after the fact the university researchers came along and said: Gee, we could use the data you generated for a nice journal paper. Again, the article itself suggests this as well:

“Which content is shown or omitted in the News Feed is determined via a ranking algorithm that Facebook continually develops and tests in the interest of showing viewers the content they will find most relevant and engaging. One such test is reported in this study: A test of whether posts with emotional content are more engaging.”

In sum, Cornell says all was kosher because, look, Facebook obtained the data: We never “engaged” in any research here, we just analysed and wrote about “existing data”!

Given that the Cornell researchers “designed [the] research” (which happens before you run a study — one would hope) and engaged in “initial discussions” with the Facebook author who ran it, this is not just disingenuous: If it became common practice, it would tear a massive hole in the ethical checks and bounds on human subject research.

The Buddy Shortcut

University and industry researchers collaborate all the time, and researchers regularly switch camps from academic halls to industry labs and back. Companies like Google, Microsoft, or Facebook all have research departments that sponsor and present at academic conferences, complete with recruitment booths to hire PhDs and post-docs fresh from the mint.

If the logic of Cornell’s defence were to gain a foothold, university researchers could, at such an event, “just have an idea for a study” over a beer with their industry friends — and three months later magically find an email from their industry pal, telling them that, “what a coincidence”, the data s/he gathered provides just the right material for the study the university colleague had in mind: shouldn’t we write a paper about this together? Whenever there is a topic that interests both company and university researchers, the latter could conveniently circumnavigate all those pesky IRB and informed consent processes by handing the “data obtaining” part over to their less regulated counterparts.

I perfectly agree with danah boyd that IRBs and informed consent do not equal ethical deliberation: they are often treated as a formality to protect universities from legal liability. Researchers often try to game them to do what they want to do anyhow. The Facebook study highlights one (new?) way of gaming the system and thus only illustrates, as boyd holds, that “We need ethics to not just be tacked on, but to be an integral part of how everyone thinks about what they study, build, and do.”

Update (July 4): The Corporate Vacuum

Let’s shift focus from the scientists to the Facebook employee. The editor-in-chief of the journal that published the study, the Proceedings of the National Academy of Sciences of the United States of America (PNAS), just released an “Editorial Expression of Concern” that will be printed in the same issue as the study itself. It states:

Obtaining informed consent and allowing participants to opt out are best practices in most instances under the US Department of Health and Human Services Policy for the Protection of Human Research Subjects (the “Common Rule”). Adherence to the Common Rule is PNAS policy, but as a private company Facebook was under no obligation to conform to the provisions of the Common Rule when it collected the data used by the authors, and the Common Rule does not preclude their use of the data. Based on the information provided by the authors, PNAS editors deemed it appropriate to publish the paper. It is nevertheless a matter of concern that the collection of the data by Facebook may have involved practices that were not fully consistent with the principles of obtaining informed consent and allowing participants to opt out.

This statement is not fully correct: The PNAS “Editorial Policy” sets out its own demands for studies published in it, including “Research involving Human and Animal Participants and Clinical Trials must have been approved by the author’s institutional review board” and “For experiments involving human participants, authors must also include a statement confirming that informed consent was obtained from all participants.” These are similar to the Common Rule, but don’t demand adherence to it. Why is that important? Because a Facebook employee — Adam Kramer — was a co-author, and indeed the lead author of the paper. Following the article, he “performed research”: he did not pick up “existing data” somehow “lying around” at Facebook.

The Cornell press statement logic allows scientists to publish on Common Rule-exempt data as long as they pay formal lip service to it being “existing”. The PNAS logic is even more troublesome: Because Facebook is a private entity not beholden to the Common Rule, it is apparently also not beholden to the editorial policies of scientific journals if its employees wish to publish in them. Imagine, for the sake of argument, that the study had been single-authored by the Facebook employee. Following PNAS, this may be “a matter of concern”, but it would be permissible for him/her to run the study and publish its results in the PNAS, all without review and approval by an ethics board, and without informed consent. (At most, Facebook author A would need a university researcher sidekick B to later exempt A’s data collection as B’s secondary use — or maybe find a Facebook employee C who folds A’s research study into C’s ‘business as usual’ A/B test, so that A could use it as “existing data”. The Common Rule and journal policies typically require IRB approval from all involved institutions/researchers to prevent exactly this kind of buck-passing.)

In Sum

As I’ve argued previously, this is just one outgrowth of digital networked technology enabling new entrants to engage in activities that were previously confined to a select few — like large-scale human subject research. One major issue is that these new entrants are typically not subject to existing laws and regulations (no Common Rule for Facebook), and fervently resist being subjected to them (see Uber, AirBnB, etc.). That is, they claim that somehow, “because new technology, old rules don’t apply.” And as the Facebook emotion study shows, not only does that create an unregulated Wild West for new actors: It also seduces old actors into weaseling out of their own laws and regulations.

"[L]et’s preserve the term ‘sharing,’ reserving it not for anti-economic niceness, but for economic relations that have a social thickness to them. […] In the end, sharing is about the messy negotiation of access to goods, goods that in the name of sustainability become more scarce. Capitalism is an alienated way of handling those negotiations; sharing forces you to negotiate with aliens."
– Cameron Tonkinwise, Sharing you can Believe in

This reminds me of Marc Hassenzahl and Matthias Laschke’s “Aesthetic of Friction”. To them, “pleasurable troublemakers” foster deliberation, delight, and lasting change. To Tonkinwise, the social friction of sharing services like AirBnB infuses the owned-goods-based, Amazon.com-supplied “asocial suburban bunkers” with some semblance of “mechanical solidarity”.

Frame Clashes, or: Why the Facebook Emotion Experiment Stirs Such Mixed Emotion

The article “Experimental evidence of massive-scale emotional contagion through social networks”, published in the Proceedings of the National Academy of Sciences (PNAS), is currently causing quite some emotion itself – definitely more than the study it reports.

Here’s an abstract of the article (and here’s the full text):

We show, via a massive (N = 689,003) experiment on Facebook, that emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness. We provide experimental evidence that emotional contagion occurs without direct interaction between people (exposure to a friend expressing an emotion is sufficient), and in the complete absence of nonverbal cues.

In short, one member of Facebook’s Core Data Science team and two researchers then at Cornell University got together to analyse and manipulate people’s Facebook news feeds. They used software to count whether news feed items contained mainly emotionally positive or negative words. For a week, they tweaked the news feed algorithm to show fewer of these emotionally charged posts: One study group saw fewer positive items, the other fewer negative ones. People who saw fewer posts of one emotion subsequently posted fewer posts with that emotion themselves. This, the authors of the study argue, demonstrates that emotions can be transferred between people online. Others counter that the effect is so small as to be an irrelevant statistical blip.
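To make the mechanics concrete, here is a minimal sketch, in Python, of this kind of word-counting classification and selective omission. The word lists, omission rate, and function names are illustrative assumptions, not the dictionaries or parameters the study actually used.

```python
# Minimal, illustrative sketch of classifying posts as "positive" or "negative"
# by counting emotion words, and of omitting a share of the targeted posts.
# Word lists and the omission rate are toy stand-ins, not what the study used.
import random

POSITIVE_WORDS = {"happy", "great", "love", "wonderful", "excited"}
NEGATIVE_WORDS = {"sad", "angry", "terrible", "hate", "awful"}

def classify_post(text: str) -> str:
    """Label a post 'positive', 'negative', or 'neutral' depending on which
    emotion word list it draws on."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    has_pos = bool(words & POSITIVE_WORDS)
    has_neg = bool(words & NEGATIVE_WORDS)
    if has_pos and not has_neg:
        return "positive"
    if has_neg and not has_pos:
        return "negative"
    return "neutral"

def filter_feed(posts, condition, omission_rate=0.5, seed=0):
    """Return the feed as shown to a user in the given condition
    ('positive' or 'negative'): posts matching that emotion are
    omitted with probability omission_rate."""
    rng = random.Random(seed)
    shown = []
    for post in posts:
        if classify_post(post) == condition and rng.random() < omission_rate:
            continue  # withhold this emotionally charged post
        shown.append(post)
    return shown

# Example: the "reduced positivity" group sees fewer positive posts.
feed = ["I love this!", "Terrible day.", "Meeting at 3pm.", "So happy today"]
print(filter_feed(feed, condition="positive"))
```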

At least in my online social circles, it’s hard to keep up with the debate, playing out (with no little irony) chiefly in Facebook discussion threads. Standard reactions are split: among scholars I observe (a) disturbance and disgust, (b) concerns over communication power, or (c) complaints that the “sexy” large Facebook data set (690,000!) led the authors to oversell a non-result, and the journal to publish a study with methodological issues. Reactions in non-scholarly circles usually veer between “creepy” and “what’s the fuss?”, expressed e.g. by venture capitalist Marc Andreessen (see his tweet quoted further below).

This split reaction, I think, shows a clash between different ways the study is framed, which points to the larger issue of how we should frame and regulate private entities engaging in scientific research – and even more fundamentally, how to frame and regulate digital entrants to existing social fields. But before we get to that, for the non-academics, let’s quickly review the facts: what exactly makes academics so irate about the study?

The Facts

The study intentionally tried to manipulate people’s emotional states, which for academic researchers constitutes “human subject research”: a systematic investigation to develop generalizable knowledge by gaining data about living individuals through intervention. Incidents like the infamous 1963 Milgram experiment led the research community to regulate this kind of experiment with specific ethics codes and procedures, most notably the US Federal Policy for the Protection of Human Subjects (aka “Common Rule”), which requires that all US federally funded research be reviewed and approved in advance by a so-called Institutional Review Board (IRB), which ensures that risks to participating subjects are minimized and reasonable in relation to anticipated benefits, and that subjects have given appropriate informed consent.

This immediately raises five questions:

  1. Did the study require review and approval by an IRB?
  2. Has the study been reviewed and approved by an IRB?
  3. Did the study require informed consent?
  4. Have subjects given informed consent?
  5. Were risks to subjects minimized and reasonable in relation to anticipated benefits?

The PNAS article on its own is not very clear on these matters. Consequently, a lot of online heat concerned these questions. So let’s review them in turn.

1. Did the study require review and approval by an IRB?

A June 10 press story by Cornell University stated that the Facebook study was at least partially funded by a US governmental body, the Army Research Office, and thus beholden to the Common Rule. But in response to the kerfuffle about the study, Cornell University added a correction on June 29, stating:

"In fact, the study received no external funding."

That Cornell’s press office feels the need to publish this correction is interesting on its own, but it doesn’t change whether the involved researchers were subject to the Common Rule: According to Cornell’s own human research policies,

"all research activities that involve the collection of information through intervention, interaction with, or observation of individuals, or the collection or use of private information about individuals, must be evaluated to determine whether they constitute human participant research, and the type of review required before the research activities can begin".

Those policies include Cornell’s agreement to a Federalwide Assurance that all human subject research at Cornell is beholden to the Common Rule, federally funded or not. They also state that the Declaration of Helsinki applies – an international standard for medical human subject research that requires approval by a research ethics committee and informed consent, similar to the Common Rule.

But were the involved researchers actually engaged in the research? This admittedly sophistic-sounding (and indeed sophistic) question is often of quite practical concern in collaborative research. According to the non-binding guidelines of the US Office of Human Research Protections,

an institution is considered engaged in a particular non-exempt human subjects research project when its employees or agents for the purposes of the research project obtain: (1) data about the subjects of the research through intervention or interaction with them; (2) identifiable private information about the subjects of the research; or (3) the informed consent of human subjects for the research. 

The PNAS article states that the university researchers designed the research and wrote the paper – the data was collected and analyzed by the Facebook employee. This is the stance that Cornell University officially took in a June 30 press release:

"[The two involved Cornell researchers] analyzed results from previously conducted research by Facebook into emotional contagion among its users. [… They] did not participate in data collection and did not have access to user data. Their work was limited to initial discussions, analyzing the research results and working with colleagues from Facebook to prepare the peer-reviewed paper […]. Because the research was conducted independently by Facebook and Professor Hancock had access only to results – and not to any data at any time – Cornell University’s Institutional Review Board concluded that he was not directly engaged in human research and that no review by the Cornell Human Research Protection Program was required."

Note the phrase “previously conducted research”. The Common Rule states that research is exempt from full IRB review and approval if it involves

"the collection or study of existing data, documents, records, pathological specimens, or diagnostic specimens, if these sources are publicly available or if the information is recorded by the investigator in such a manner that subjects cannot be identified".

Thus, the study could have gotten around IRB approval and informed consent if Facebook had already done the study solely on its own as part of its constant service tinkering, and after the fact, the researchers from Cornell came along and said: “Gee, we could use the data you generated for a nice journal paper.” The paper itself suggests this by stating:

“Which content is shown or omitted in the News Feed is determined via a ranking algorithm that Facebook continually develops and tests in the interest of showing viewers the content they will find most relevant and engaging. One such test is reported in this study: A test of whether posts with emotional content are more engaging.”

However, the “Author Contributions” section of the paper indicates that the university researchers “designed research” – which, one would think, ought to happen before a study is performed. Note also that, according to Cornell’s press release, the university researchers were involved in “initial discussions”. Still, according to an email from PNAS editor Susan Fiske, analysis of existing data is what got the study its exemption: “Their [the authors’] revision letter said their [sic] had Cornell IRB approval as a ‘pre-existing data set’ presumably from Facebook”, she writes.

So following Cornell University logic, the study is human subject research, but did not require IRB review and approval because the involved university researchers (a) weren’t even technically “engaged” in the research and/since (b) they only analyzed “existing data” – a legalistic double-whammy that, as Michelle N. Meyer put it, is mighty disingenuous. It would also open a massive legal loophole in university-industry research collaborations (see below).

But all of this doesn’t matter: the study still did require IRB review and approval. Why? Because, as David Gorski notes, the journal policies of the PNAS require that for any article published in it:

"Research involving Human and Animal Participants and Clinical Trials must have been approved by the author’s institutional review board. […] Authors must include in the Methods section a brief statement identifying the institutional and/or licensing committee approving the experiments. […] All experiments must have been conducted according to the principles expressed in the Declaration of Helsinki."

Whether the study constituted medical human subject research (to which the Helsinki Declaration speaks) is arguable. That it involved human participants and thus needed review and approval by an IRB is not. The weird regulatory netherworld here is that the Cornell authors claim they got an exemption because they used existing data (from Facebook), while the Facebook study author … didn’t have a regular IRB because Facebook is no university, and apparently therefore did not need one. The PNAS editor of the article, Susan Fiske, stated that “Facebook’s research is not government supported, so they’re not obligated by any laws or regulations to abide by the standards”. How that makes Facebook exempt from PNAS’s own policies, however, I don’t know.

2. Has the study been reviewed and approved by an IRB?

According to an interview with Fiske, yes. According to a later Forbes source, the data collection itself was only reviewed internally by Facebook. According to a still-later e-mail from Fiske, the analysis of the data set was reviewed and approved by a university IRB. That Cornell University states the IRB found the study exempt from review, while the authors themselves (according to Fiske) did acquire IRB approval, is confusing. If we give the benefit of the doubt, this is just a confusion of words: The IRB reviewed whether the study would be subject to full review and approval, and judged it exempt because the researchers were not engaged in the data collection.

3. Did the study require informed consent?

If one follows the Cornell argument that the human subject research was exempt because it involved “existing data” and the researchers weren’t even “engaged” in obtaining it, then no, the study didn’t require informed consent under the Common Rule. However, again, the journal policies of the PNAS kick in:

"For experiments involving human participants, authors must also include a statement confirming that informed consent was obtained from all participants."

So yes, it did – at least in order to be publishable in the PNAS. (In addition, because people can sign up for Facebook from age 13 on, and study participants were apparently randomly selected by their user ID, the study affected minors, a vulnerable population for which the Common Rule requires explicit assent from the minors themselves and permission from their parents or guardians, even under minimal risk.)

4. Have subjects given informed consent?

The article itself argues that agreeing to Facebook’s Data Use Policy equals informed consent:

“Facebook’s Data Use Policy, to which all users agree prior to creating an account on Facebook, constituting informed consent for this research.”

When users first sign up, they consent to Facebook’s “Data Use Policy”, which at one point in its 9,405 words states that Facebook may use people’s information “for internal operations, including troubleshooting, data analysis, testing, research and service improvement.” Interestingly, that whole statement was only added to the data use policy in May 2012, months after the study was conducted.

Facebook responded, stating

“When someone signs up for Facebook, we’ve always asked permission to use their information to provide and enhance the services we offer. To suggest we conducted any corporate research without permission is complete fiction. Companies that want to improve their services use the information their customers provide, whether or not their privacy policy uses the word ‘research’ or not.”

Indeed, the relevant passage is part of a non-exhaustive “for example” list, and the pre-May 2012 data use policy stated that granting permission to use data “allows us to provide you with innovative features and services we develop”. But no matter what the Data Use Policy states: Does agreeing to it constitute informed consent?

Informed consent, as set out by the Common Rule, involves “legally effective informed consent of the subject […] under circumstances that provide […] sufficient opportunity to consider whether or not to participate […] in language understandable to the subject”, and at least:

“(1) A statement that the study involves research, an explanation of the purposes of the research and the expected duration of the subject’s participation, a description of the procedures to be followed, and identification of any procedures which are experimental;
(2) A description of any reasonably foreseeable risks or discomforts to the subject;
(3) A description of any benefits to the subject or to others which may reasonably be expected from the research;
(4) A disclosure of appropriate alternative procedures or courses of treatment, if any, that might be advantageous to the subject;
(5) A statement describing the extent, if any, to which confidentiality of records identifying the subject will be maintained;
(6) For research involving more than minimal risk, an explanation as to whether any compensation and an explanation as to whether any medical treatments are available if injury occurs and, if so, what they consist of, or where further information may be obtained;
(7) An explanation of whom to contact for answers to pertinent questions about the research and research subjects’ rights, and whom to contact in the event of a research-related injury to the subject; and
(8) A statement that participation is voluntary, refusal to participate will involve no penalty or loss of benefits to which the subject is otherwise entitled, and the subject may discontinue participation at any time without penalty or loss of benefits to which the subject is otherwise entitled.”

I agree with James Grimmelmann and others that the study fails these criteria by a long shot: Saying “yes” to a number of obnoxiously long and likely unread Terms of Service and Data Use Policy documents that allow Facebook “to provide you with innovative features and services we develop” does not amount to understandable language and sufficient opportunity to ponder whether to participate in this specific study. Participants did not learn about the existence and purpose of the study (1), foreseeable risks (2), or benefits (3), let alone have any opportunity to refuse or discontinue participation, or be made aware of such an opportunity (8).

(If you say “just don’t use Facebook”, that’s arguably a significant “loss of benefits”, given the status of Facebook as a quasi-public sphere these days – people couldn’t cease participating in this experiment without also ceasing to use Facebook as a whole.)

Could the study be exempt from requiring informed consent in these strict terms? Remember that the paper itself claims it did acquire informed consent, so the question is moot. Had it not done so, the criteria for an exemption or alteration would have been that

“(1) The research involves no more than minimal risk to the subjects;
(2) The waiver or alteration will not adversely affect the rights and welfare of the subjects;
(3) The research could not practicably be carried out without the waiver or alteration; and
(4) Whenever appropriate, the subjects will be provided with additional pertinent information after participation.”

Michelle N. Meyer thinks “minimal risk” is “a winning argument”, and both the editor of PNAS and the involved IRB apparently followed this logic, based on the rationale that Facebook is already exposing its users to tweaked news feed algorithms all the time. Says Fiske:

“I was concerned […] until I queried the authors and they said their local institutional review board had approved it — and apparently on the grounds that Facebook apparently manipulates people’s News Feeds all the time.”

Even if one holds that altering the total emotional tone of displayed news feed items poses only “minimal risk” (see below), that waiving consent wouldn’t affect the subjects negatively, and that the study couldn’t have been conducted any other way, this would still arguably require that participants learn after the fact that they participated in a study (4), which as far as we know did not happen.

And be that as it may, the PNAS journal policy required informed consent, and the authors claimed they obtained it via users’ agreement to Facebook’s Data Use Policy. 

5. Were risks to subjects minimized and reasonable in relation to anticipated benefits?

This is a matter of debate. Given how small the measured effect was, does exposing users to a News Feed with a more negative overall sentiment present more than “minimal risk” to their well-being? The main point I have seen raised is the following: Mood disorders like depression are widespread – according to the NIMH, they affect 9.5% of US adults. With Facebook’s vast user base, the experiment likely touched a sizeable number of people suffering from a mood disorder – constituting what the Common Rule calls “vulnerable populations”, which require extra care and safeguards. Unwittingly exposing depressive people to more negatively toned content, with no chance to opt out, arguably might present more than minimal risk to a vulnerable population.

Adam Kramer, the Facebook employee involved in the study, responded to public reactions:

"And at the end of the day, the actual impact on people in the experiment was the minimal amount to statistically detect it — the result was that people produced an average of one fewer emotional word, per thousand words, over the following week."

This implies that they aimed for (and achieved) a deliberately very small intervention. And as Tal Yarkoni has pointed out, posting different words and actually feeling different are two different things. Tweets like “I wonder if Facebook KILLED anyone with their emotion manipulation stunt. At their scale and with depressed people out there, it’s possible.” are certainly massively overwrought.
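For a sense of scale, here is a back-of-the-envelope illustration of that reported figure. The weekly posting volume is a purely hypothetical assumption; only the “one per thousand words” figure comes from Kramer’s statement.

```python
# Back-of-the-envelope illustration of the reported effect size: one fewer
# emotional word per thousand words posted. The weekly posting volume below
# is a hypothetical assumption, used only for illustration.
words_posted_per_week = 500          # assumed volume, not from the study
effect_per_thousand_words = 1        # "one fewer emotional word per thousand words"
expected_shift = words_posted_per_week / 1000 * effect_per_thousand_words
print(expected_shift)                # 0.5 -> roughly half an emotional word per week
```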

However, whether the actual effect turned out to be negligible is not as relevant as whether the authors and the IRB could reliably predict that it would be negligible. IRBs typically operate on a “better safe than sorry” precautionary principle. But again, if the Cornell logic holds that the study was exempt from the Common Rule, and one claims that the study also doesn’t fall under the human subject research regulated by the Helsinki Declaration (demanded by the PNAS journal policy), then the authors weren’t even required to minimize risks, because whatever risks there were, Facebook kindly already exposed its users to them before IRB-bound researchers ever laid hands on the resultant data.

The Mess

In sum, the Cornell IRB thought the study exempt from IRB approval and, presumably, informed consent because its researchers never “engaged” in the research, only working with “existing data” from Facebook. The PNAS journal editor thought it prudent to “not second-guess the relevant IRB”, whose exemption apparently somehow also exempted Facebook from the IRB review and approval required by PNAS policy. The authors claim that agreeing to Facebook’s Data Use Policy (which at the time didn’t even spell out “research” as a possible data use) equals informed consent. The academics who are getting irate over the study disagree – and for what it’s worth, I find myself in the latter camp:

1. Claiming you are not “engaged” in a research study and only use “existing data” when you apparently had a hand in designing the study before someone else collected the data, even if only through “initial discussions”, is simply disingenuous. If this argument were to gain a foothold, it would open a gigantic loophole. University researchers could, over a beer with their industry researcher friends, “just have an idea for a study”, and three months later magically find an email from their industry colleague informing them that, “what a coincidence”, the study s/he just ran provides just the right data for the study the university colleague had in mind: shouldn’t we write a paper about this together?

2. Claiming that agreeing to the Facebook Data Use Policy equals Common Rule informed consent is likewise disingenuous. 

3. PNAS could accept the Cornell IRB exempting the Cornell co-authors from IRB approval and informed consent as demanded by PNAS policy, but never the Facebook co-author (in fact, lead author of the paper), who definitely “performed the research”, as it states in the article – he did not take secondary data from some other Facebook employee (which would just open another loophole). Whether Facebook is bound to the Common Rule is irrelevant: By publishing his results in PNAS, the Facebook author subjected himself to the journal’s own policy.

My sense is that the authors got together, had a nice idea for a nice study, and thought: “Hey, you, the Facebook author, you can run the study without going through all the IRB hassle, and then we can analyze the data and write a paper together.” Said and done, and approval waived by the Cornell IRB. Then the PNAS policies required informed consent, and because the authors wanted to publish in this high-profile journal, they simply claimed they had obtained it via Facebook registration to get their paper in. Not really kosher. Then the excrement hit the ventilator, and now people are understandably covering their behinds and stumbling over their feet: the study wasn’t even federally funded – no wait, they used only existing data! – no, wait, our researchers didn’t even “engage”! Except that the article itself says they “designed research”, the press statement says they did “initial discussions” and “analyz[ed] the research results”, whereas the article says they didn’t analyze the data.

The Frames

As noted, the interesting thing for me is how differently people evaluate the study. The public comments on MetaFilter are quite exemplary. On the one side, the academic camp:

“I don’t remember volunteering to participate in this. If I had tried this in graduate school, the institutional review board would have crapped bricks.” (mecran01)

On the other side, the “online business as usual” camp, arguing (like Fiske) that this is what Facebook et al. are doing all the time with/for targeted advertising anyhow:

“I am shocked. SHOCKED.
Does anyone actually pretend Facebook does anything for altruistic purposes?
Standard reminder: if something is free to you, you are not the customer. You are the product.” (dry white toast)

“The advertising algorithm is basically doing the same thing - trying to alter our mental state with regards to certain products. Now they’ve shown (probably obvious in retrospect) that they can do it by manipulating what we see, without injecting ads into our stream. So what happens if the Republicans (or Democrats, pick your poison) throw silly money at FB in an exclusive contract to push us left or right in our voting tendencies?” (COD)

The following tweet by Chris Dixon maybe expresses this clash best:

I have participated in many a discussion with scholars half-jokingly complaining that they have to go through lengthy IRB reviews for things every private individual or company does every day without even thinking about it. As a recovering communication researcher and budding sociologist, I see this as a clashing of frames – different ways of understanding and evaluating events by ascribing them to different types of situations. Let me explain.

Academic researchers are brought up in an academic culture with certain practices and values. Early on they learn about the ugliness of unchecked human experimentation. They are socialized into caring deeply for the well-being of their research participants. They learn that a “scientific experiment” must involve an IRB review and informed consent. So when the Facebook study was published by academic researchers in an academic journal (the PNAS) and named an “experiment”, for academic researchers, the study falls in the “scientific experiment” bucket, and is therefore to be evaluated by the ethical standards they learned in academia.

Not so for everyday Internet users and Internet company employees without an academic research background. To them, the bucket of situations the Facebook study falls into is “online social networks”, specifically “targeted advertising” and/or “interface A/B testing”. These practices come with their own expectations and norms in their respective communities of practice and the public at large, which are different from those of the “scientific experiment” frame in academic communities. Presumably, because they are so young, they also come with much less clearly defined and institutionalized norms. Tweaking the algorithm of what your news feed shows is an accepted standard operating procedure in targeted advertising and A/B testing.

If anything, people who frame the Facebook study as “A/B testing as usual” might be disturbed by the fact that the algorithm was tweaked to directly affect emotions, which is (still) unexpected in this frame. Hence the assumption in responses on MetaFilter and elsewhere that Facebook does so in order to create more effective ads in the end, which is an expected, usual purpose of A/B testing (insert creepy conspiracy theory about retail therapy and loneliness loops here).

Along these lines, Marc Andreessen framed “online social networks” as “communication in general”, which always has an emotion-affecting component:

Helpful hint: Whenever you watch TV, read a book, open a newspaper, or talk to another person, someone’s manipulating your emotions!

— Marc Andreessen (@pmarca) June 28, 2014

To which Martin Bryant responds on The Next Web:

Andreessen argues that TV shows, books, newspapers and indeed other people deliberately affect our emotions.

The difference, though, is that we willingly enter into these experiences knowing that we’ll be frightened by a horror movie, uplifted by a feel-good TV series, upset by a moving novel, angered by a political commentator or persuaded by a stranger. These are all situations we’re familiar with – algorithms are newer territory, and in many cases we may not know we’re even subjected to them.

We are used to emotional appeals in “face to face communication” or “advertising” frames. We are used to (and willingly expose ourselves to) emotional effects in “fictional media consumption” frames. But that “online social networks” intentionally affect emotions through algorithms is new, unusual to this frame and therefore feels “manipulative”, “creepy” to some.

These differences in framing not only show in whether (or how much) people find the study problematic, but also in what they take issue with. In online social networks, the standard “problem” is privacy and data protection: simply put, who is exposed to your personal information. In scientific experiments, just as important as privacy is what information you are exposed to (as an “experimental stimulus”) with what potential harmful effects, and whether you consented to that.

The authors of the study themselves seem to appeal to the “online social network” frame when they state that the study was ethically unproblematic because the individual news feed items were all handled by software “such that no text was seen by the researchers.” The ethical issue they address is privacy, not harmful effects. Same when a Facebook spokesperson defended the study stating “none of the data used was associated with a specific person’s Facebook account”: privacy, not harmful effects.

Similarly, most news stories and blog entries chiefly highlight the standard online social network issues of (a) communication power and (b) filter bubbles: Manipulations like these show how much power online companies like Facebook have over us, and filtering information by sentiment could keep us in a Huxleyan SNAFU bubble.

In sum, this study stirs such split reactions, heated emotion, and cognitive dissonance (how can the same A/B test be A-okay in business and bad in science?) because it presents something that mixes and breaks the frame expectations of different communities: Facebook itself, the study authors (apparently), and people like Andreessen frame it as the next iteration of A/B testing and ad targeting – “no biggie”. Others find it breaks their “online social network” frame expectations and feel “creeped out”. Academic researchers who frame it as “scientific experiment, period” cry “unethical”. It’s interesting that online news media like The Atlantic and Slate picked up the academic “scientific experiment” frame to give words to the feeling that online emotion manipulation is manipulative in an uncanny way (a framing first established by Animal New York and A.V. Club). Before these two online media gave the story the “creepy” spin, most reporting was chiefly of the factual, newsy “isn’t this interesting” variety, focusing on the findings: “News feed: ‘Emotional contagion’ sweeps Facebook”, titled the Cornell Chronicle on June 10; “Even online, emotions can be contagious”, we read in the New Scientist on June 26.

So What?

This clashing, overlapping, and breaking of frames demonstrates once more how digital networked media break down existing social categories. For a long time, experiments manipulating people’s psychological states were only done in research institutions by academic or industry researchers socialized in academic research norms and practices. With the pervasive shift of social interaction onto digital, networked platforms, and the rise of easy A/B testing of all elements of these platforms, more and more individuals with no socialization or training in experimental practices and norms get to engage in massive de facto human subject experiments, employed by organizations (like Facebook) that do not fall under the purview of existing laws and regulations for human subject research.

This was in principle already an issue when businesses started doing market research, or when, more recently, software companies started hiring usability engineers and user researchers. But market and user researchers were typically recruited from academic research backgrounds (e.g. in sociology, psychology, human-computer interaction), and so they brought their norms and practices with them: You’d be hard-pressed to find a “traditional” market research or usability agency that doesn’t gather some form of informed consent as part of its surveys or interviews.

With tools like Google Website Optimizer or Facebook ad campaigns, the capacity to run de facto experiments is mass-democratized to social media editors, product managers, software developers, and basically everyone who runs a website. This is new. It doesn’t match our socially shared, institutionalized frames and connected norms. And so we freak and fight.
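To make concrete how low that barrier is, here is a minimal, hypothetical sketch of what sits at the core of any such tool: deterministically splitting users into conditions and comparing an outcome metric afterwards. The experiment name, bucket names, hashing scheme, and metric are illustrative assumptions, not a description of any particular product.

```python
# Minimal, hypothetical sketch of a web A/B test: deterministic assignment of
# users to a condition plus a simple comparison of an outcome metric afterwards.
# Experiment name, bucket names, hashing scheme, and metric are all
# illustrative assumptions, not any particular platform's tooling.
import hashlib
from statistics import mean

def assign_variant(user_id, experiment="feed_tweak"):
    """Hash the user ID so each user lands in a stable, pseudo-random bucket."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "treatment" if int(digest, 16) % 2 == 0 else "control"

def compare(outcomes):
    """outcomes maps user_id -> some behavioural metric (e.g. words posted)."""
    groups = {"treatment": [], "control": []}
    for user_id, value in outcomes.items():
        groups[assign_variant(user_id)].append(value)
    for name, values in groups.items():
        print(name, round(mean(values), 3) if values else "no data")

# Anyone who can serve two page variants and log one number per user can run
# this; no ethics review is built into the tooling.
compare({f"user{i}": float(i % 7) for i in range(100)})
```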

More generally, digital networked technologies allow new entities to perform actions and assume social functions that were previously limited to a pre-defined set of existing actors: Online companies become de facto research institutions, not beholden (or so they claim) to the norms, laws, and regulations of academic research. Facebook and Google become de facto news publishers, not beholden (or so they claim) to the norms, laws, and regulations of the news industry. Uber and Lyft become de facto taxi providers, not beholden (or so they claim) to the norms, laws, and regulations of the taxi industry. YouTube et al. become de facto television channels, not beholden (or so they claim) to the norms, laws, and regulations of broadcasting. Etc. etc.

The manifest risk in all these instances is that these new, digital networked entrants undermine and circumnavigate hard-won public accords enshrined in laws, regulations, and norms of communities of practice, under the ruse that “new technology” somehow means that “old rules don’t apply”. I am not saying that old rules should always apply unchanged. In the case of copyright, I follow Lawrence Lessig’s line that the unchanged application of existing law to new technology chiefly enshrined the interests of incumbents and stifled creativity, instead of serving the underlying values (progress, authors’ rights) copyright is supposed to serve. Utopians will always promise and demand that the new tech deliver us from evil “red tape”, and Apocalyptics will always fear the dissolution of the social contract. We need a debate about what values we share and what rules best serve them under the conditions of new technology.

The Facebook study is especially interesting and complicated because it was conducted by both new entrants (Facebook) and existing actors (two university researchers). But if we take the latter two away, the more important question remains: Do we, as a public, want companies like Facebook to be able to do large-scale human subject research outside regulatory and normative frameworks? If not, then what kinds of norms and regulations do we want? How can we safeguard that large-scale, fine-grained human subject research – both by corporate entities and individuals – does not harm the individual and public good? What old values still apply, new technology or not?

And how do we effectively enforce these values? Michelle N. Meyer, Tal Yarkoni, Brian Keegan, danah boyd and Farhad Manjoo all fear that the current public outcry will make companies like Facebook less likely to cooperate with university researchers: as a result, they will be even less transparent, their data won’t contribute to the public advancement of knowledge, and we’ll be less able to defend ourselves against company influence. Meyer therefore argues we should loosen academic research controls to enable academia to become a critical watchdog. For Keegan, researchers should work more with Facebook if one wants companies like it to be “deeply embedded within and responsible to the broader research community”. But are academics really able to do so on their own, given the power asymmetries between them and Internet companies? Can they withstand the Siren call of cozy academy-industry collaborations, when academe demands high-profile publications, and Facebook et al. can conveniently generate “sexy” data sets outside of IRB demands for their non-“engaged” university colleagues to then publish? This is the discussion we need to have.

Update (June 29)

Tal Yarkoni and Brian Keegan take a sober, de-escalating stance, observing that the reported effect is so minuscule that not only is the news reporting about “emotion manipulation” overwrought: the main contention should be that the authors are overselling a non-effect.

Second, they speak to framing as well:  Tal Yarkoni does so implicitly when he contextualizes research and emotion manipulation as everyday reality:

"The reality is that Facebook–and virtually every other large company with a major web presence–is constantly conducting large controlled experiments on user behavior. […] you should probably also stop using Google, YouTube, Yahoo, Twitter, Amazon, and pretty much every other major website–because I can assure you that, in every single case, there are people out there who get paid a good salary to… yes, manipulate your emotions and behavior! […] it’s worth keeping in mind that there’s nothing intrinsically evil about the idea that large corporations might be trying to manipulate your experience and behavior. Everybody you interact with–including every one of your friends, family, and colleagues–is constantly trying to manipulate your behavior in various ways.”

I perfectly agree – and make the obvious sociological move of asking: if that’s the case, then why do people take offence in this case, but not in these other, day-to-day cases? I argue it is because different framings activate different frame-specific norms and values.

Brian Keegan is explicitly on point, speaking of framing. Every online service these days, he states, employs A/B testing:

"Creating experiences that are “pleasing”, “intuitive”, “exciting”, “overwhelming”, or “surprising” reflects the fundamentally psychological nature of this work: every A/B test is a psych experiment.

Somewhere deep in the fine print of every loyalty card’s terms of service or online account’s privacy policy is some language in which you consent to having this data used for “troubleshooting, data analysis, testing, research,” which is to say, you and your data can be subject to scientific observation and experimentation. Whether this consent is “informed” by the participant having a conscious understanding of implications and consequences is a very different question that I suspect few companies are prepared to defend. But why does a framing of “scientific research” seem so much more problematic than contributing to “user experience”? How is publishing the results of one A/B test worse than knowing nothing of the thousands of invisible tests? They reflect the same substantive ways of knowing “what works” through the same well-worn scientific methods.”

Indeed: “why does a framing of ‘scientific research’ seem so much more problematic than contributing to ‘user experience’?” Because these are different frames contextually actualizing different norms and values.

Updates (June 30)

Wow. Lots of things can happen in a day. We’ve learned that the data collection wasn’t IRB-approved, but that the data analysis was, as a study of “pre-existing data”. One of the authors of the study responded. Cornell corrected their news story, stating the study was not, as claimed, externally funded. Michelle N. Meyer and Zeynep Tufekci have weighed in with interesting opinions – and I’ve tried to incorporate all of those in the post above.

Updates (July 1)

So. Cornell released a press statement that the study required no IRB review because the researchers were not “engaged” in it, only doing analysis of “existing data”. Forbes found that Facebook’s Data Use Policy didn’t even contain an explicit mention of “research” when the study was conducted. Michelle N. Meyer posted an updated version of her excellent analysis on Wired, finding in favour of Cornell’s IRB. And the longer I thought about things, the more I found the issue to be the loophole this study points to: academic and industry research buddies shoving “inconvenient” human subject research into IRB-free industry paradise, to then analyze the “existing data” later – which I singled out here. All included.

Updates (July 2)

Both the UK Information Commissioner’s Office and the Irish Office of Data Protection have started investigating whether Facebook’s study broke local data protection laws. (Of note: both regulators and Facebook frame what happened as an issue of privacy and data protection, not potential harm and informed consent.) Facebook’s Head of Global Policy Management, probed by journalists at the Aspen Ideas Festival, pushed back against calls for more regulation with the expectable claim that regulation might stifle creativity, innovation, and free speech; transparency, not regulation, would be needed. Kevin Schofield of Microsoft Research has published another in-depth analysis of the study’s ethics, with useful pointers to previous academic debates about equating agreement to Terms of Service with informed consent:

"this question has been debated for 15+ years. Here’s a pointer to  a report from a 1999 workshop in the US that addressed issues around doing research on the Internet. Here’s another from a 2007 workshop in the UK, in which they specifically discuss the problem of whether clicking “agree” on a terms & conditions page is acceptable as “informed consent” and their conclusion is “no” — because it’s well understood that most people don’t read it. Legally, it’s acceptable and you probably couldn’t get sued. Ethically and professionally, it’s not acceptable.”

 So much for the hard news.

In the life cycle of any issue, there come the stages of “what’s it all about” and “so what do we do”: where the news-triggering event is contextualized as part of a “larger issue”, and then translated into demands. Judging by yesterday’s news fallout, we’ve arrived.

Zeynep Tufekci started the “big picture” framing on June 29. To her, the study indicates a big shift in power and social control:

“Today, more and more, not only can corporations target you directly, they can model you directly and stealthily. […] the powerful have increasingly more ways to engineer the public, and this is true for Facebook, this is true for presidential campaigns, this is true for other large actors: big corporations and governments. […] That, to me, is a scarier and more important question than whether or not such research gets published.”

On July 1, Sara M. Watson writes in The Atlantic that “What the Facebook Controversy is Really About” is “Data Science”: the appeal to “science” (and an implied contribution to the public good) that Facebook and other for-profit Internet companies make when they label their applied research “data science”. Yet their work is inevitably interest-laden and limited to their own data sets.

“What does the Facebook experiment teach us?”, asks danah boyd on July 2. The “bigger issues at stake” she sees are research and corporate ethics in an algorithmic world: “We need ethics to not just be tacked on, but to be an integral part of how everyone thinks about what they study, build, and do.” She finds academic demands for IRBs and informed consent shortsighted because IRB + informed consent != ethical deliberation: “IRBs are an abysmal mechanism for actually accounting for ethics in research. By and large, they’re structured to make certain that the university will not be liable. Ethics aren’t a checklist.” And she sees the public outcry against Facebook as a manifestation of people’s latent feeling of being helplessly delivered to the power of Facebook and other companies: “anger at the practice of big data”.

Janet Vertesi suggests “The Real Reason You Should Be Worried About That Facebook Experiment” is that it shows the shift from increasingly slashed public funding to increasing private funding of online social science research.

Moving on to “what to do”, boyd suggests “any company that manipulates user data create an ethics board.” For Evan Selinger and Woodrow Hartzog, the issue highlighted by the Facebook study is that corporations “make us involuntary accomplices” (and often unwitting ones, too). Individual action and responsibility can’t address that: We need communal action, namely, a “People’s Terms of Service Agreement—a common reference point and stamp of approval, like a Fair Trade label for the web, to govern the next photo-sharing app or responsible social network.”

Updates (July 3)

On July 2, Facebook COO Sheryl Sandberg apologized (sort of) by framing the study as standard A/B testing, saying sorry for the flawed communication of that framing:

“This was part of ongoing research companies do to test different products, and that was what it was; it was poorly communicated […]. And for that communication we apologize. We never meant to upset you.”

Note that Adam Kramer, the Facebook employee who co-authored the study, made the same move three days earlier:

"our goal was never to upset anyone. I can understand why some people have concerns about it, and my coauthors and I are very sorry for the way the paper described the research and any anxiety it caused. In hindsight, the research benefits of the paper may not have justified all of this anxiety."

There’s nothing wrong with what we did, this says – if you (the public) are upset, it’s because you don’t really understand what we did, so our only flaw is we could have made a better effort at explaining.

Meanwhile, artist Lauren McCarthy released the Facebook Mood Manipulator, a Chrome plugin that allows users to manipulate the total sentiment of their Facebook news feed.

Update II (July 3)

PNAS has just published an “Editorial Expression of Concern” by its editor-in-chief:

"Questions have been raised about the principles of informed consent and opportunity to opt out in connection with the research in this paper. The authors noted in their paper, [The work] was consistent with Facebooks Data Use Policy, to which all users agree prior to creating an account on Facebook, constituting informed consent for this research.When the authors prepared their paper for publication in PNAS, they stated that: Because this experiment was conducted by Facebook, Inc. for internal purposes, the Cornell University IRB [Institutional Review Board] determined that the project did not fall under Cornells Human Research Protection Program.This statement has since been confirmed by Cornell University.

Obtaining informed consent and allowing participants to opt out are best practices in most instances under the US Department of Health and Human Services Policy for the Protection of Human Research Subjects (the Common Rule). Adherence to the Common Rule is PNAS policy, but as a private company Facebook was under no obligation to conform to the provisions of the Common Rule when it collected the data used by the authors, and the Common Rule does not preclude their use of the data. Based on the information provided by the authors, PNAS editors deemed it appropriate to publish the paper. It is nevertheless a matter of concern that the collection of the data by Facebook may have involved practices that were not fully consistent with the principles of obtaining informed consent and allowing participants to opt out.” 

And the Facebook Loophole gapes ever larger: Apparently, because Facebook is a private company, it is beholden neither to the Common Rule nor to journal policies such as those of PNAS, which actually do not expressly reference the Common Rule (as this statement implies) but the Declaration of Helsinki. The Declaration additionally sets out explicit demands for IRB review and approval as well as informed consent, the latter even stricter than the Common Rule’s. If you are a private company, you can collect data and have your employees publish articles about it in scientific journals without any ethics board approval or informed consent. That has the editor-in-chief “express concern”, but it is the manifest logic of PNAS publishing (and not withdrawing) the article nevertheless.

Updates (July 4)

More big-picture “what’s it all about” framings: Micah L. Sifry tries to shift the debate, explaining “Why Facebook’s ‘Voter Megaphone’ Is the Real Manipulation to Worry About”: because it demonstrates how power in the age of “data-intensive politics” is concentrated among a few. Matt Pearce writes in the LA Times that the experiment “is symbolic of a power imbalance between companies and users”. Pete Etchells chimes in with the “don’t throw the baby out with the bath water” canon in the Guardian: yes, let’s regulate social media research better instead of banning this unique new source of insight into the human condition. Whitney Erin Boesel demands new standards for social research and new review processes, given how corporate research blurs existing boundaries (I agree). And the US privacy advocacy group Electronic Privacy Information Center (EPIC) has filed a complaint with the FTC demanding that it investigate Facebook’s actions.

Updates (July 6)

Tarleton Gillespie has weighed in at Culture Digitally, taking a stance very similar to mine: The response to the Facebook study “represent[s] a deeper discomfort about an information environment where the content is ours but the selection is theirs.” We expect mass media to be curated, and interpersonal communication to be (faith)fully transmitted as is. Facebook created “a third category” of curated interpersonal communication, which mismatches our mental models of interpersonal communication. This “gap between expectation and reality” is reinforced by Facebook, which maintains the appearance of being a mere conduit.

I would add: Not only is this curation of interpersonal communication new as such; personalized curation is something we haven’t learned mental models for, haven’t formed expectations and norms around, in any medium.

Updates (July 10)

According to ex-Facebook data scientist Andrew Ledvina, during the years in which the study was conducted, Facebook had no internal review and approval process for experiments – only a post-hoc review by the PR department of whether a study should be publicised. Meanwhile, Virginia senator Mark Warner has asked the Federal Trade Commission to investigate the Facebook study.

Kate Crawford thinks that the Facebook study poses the question “How do you develop ethical practices for perpetual experiment engines?”, and more broadly, how to think through issues of power, deception, and autonomy in experimentation when its technological bases shift. She suggests trying to move from blanket Terms of Service-legitimized experiments to explicit opt-in experimental panels. Ed Felten provides a solid philosophical unpacking of some of the common stances around the ethics of A/B testing. That A/B testing is common practice or common knowledge doesn’t make it ethical, because we can imagine A/B tests that are unethical nonetheless – like falsely telling teenagers their parents are dead in a Facebook post. Also, ethically, implied consent might not be real consent, e.g. absent the choice of not consenting. Finally, Michael Bernstein asks HCI and social computing researchers to speak up, lest the debate and its policy consequences be captured by others. He urges them to adapt research regulation to the affordances of online environments, and to empirically study whether users actually know about A/B testing, and how they feel about it.

Updates (July 12)

Scott Robertson weighed in, arguing that the big threat is that the study tarnishes the reputation of science as such, and suggesting that companies like Facebook create sub-panels of users who willingly agree to being part of experiments. Most interestingly, he points to the issue of framing with this poignant sentence: “It seems there is a cultural agreement in our society that a magic ring exists around consumer research such that it is OK to be studied surreptitiously as consumers but not OK to be studied for any other purpose.”

Couldn’t have said it better.

"The idea of innovation is the idea of progress stripped of the aspirations of the Enlightenment, scrubbed clean of the horrors of the twentieth century, and relieved of its critics. Disruptive innovation goes further, holding out the hope of salvation against the very damnation it describes: disrupt, and you will be saved."
Games are the aesthetic form of instrumental reason. »

Half an hour into an argument, I caught myself thinking “oh, great, this is your one thing all over again.” It often seems like all we have is this one idea (or question), and no matter how valiant our effort, all we ever do is rediscover and reapply and re-articulate that idea again and again. For Frank Lantz, it is that games are the aesthetic form of instrumental reason. A crazy aspiration nowhere realised yet, you might say. Not a bad “one idea” to have, I’d reply. And this is its most beautiful and eloquent expression to date.

"The critic is he who can translate into another manner or a new material his impression of beautiful things."
- Oscar Wilde, The Picture of Dorian Gray

To me, this distills the realisation of five years of studying Comparative Literature: criticism, “close reading,” call-it-what-you-will is not a scientific endeavour; it is an artistic practice – that is its status and value, that is what we should aspire to and judge our efforts by: whether we succeeded in translating our experience of a piece of art into something that enriches other people’s experience of it, or creates a new one.

“Beauty poses out of context” by Larissa Seilern. To quote from the project description:

The question I am trying to answer is how we can use performance as a tool to interrogate specific social phenomena, and potentially instigate social change. I will focus on a chosen space, which will act as my ‘stage’. Within this space I will introduce ‘performative interventions’ directly into the social fabric, as a way to transform people’s behaviour and relationship to their environment.

Another way of saying this would be “ethnomethodology as design practice,” or “frame analysis in action.” It could also perfectly well be a piece of Fluxus or performance art. Sociological and art sensitivities of the 1960s/70s reemerge as design practice. Frame change as Brechtian Verfremdung: Grafting beauty poses into the quotidian world at first sight makes us question the sanity of the performer (acting out of context). But if we guess her intention correctly, it throws into relief how artificial these poses are themselves. And it is a ready-made social research method. Not Garfinkling (i.e. trying to breach the routines of the current context to see what those routines are), but context shifting to bring into view the rules of the single behaviour or object you shifted.

"After a long day that started with pervasive artificial intelligence, eggs, spinach & truffle three ways, & suspiciously well-curated cupcakes; continued with military logistics, a little light budgeting & Julian Schnabel inspired clothing; & culminated with Victoria Beckham taking to the decks in the museum, there’s really only one way to end the evening:
Coconuts."
- Honor Harger, teaching us how to pluck the day