Wednesday, October 22, 2014

Coming Face to Face with the New Normal in Internet Research

by Elizabeth Buchanan, PhD, Endowed Chair in Ethics, University of Wisconsin-Stout

On Thursday, October 30, PRIM&R will host a webinar, The Future of Internet Research: What We Can Learn from the Facebook Emotional Contagion Study, which will explore the Facebook emotional contagion study and some of the questions that it raised related to internet and social media research. In advance of that webinar, we are sharing different perspectives on the controversy. Last week, PRIM&R’s executive director, Elisa A. Hurley, PhD, explored the reasons for the public outcry, and in this week’s post, webinar presenter Elizabeth Buchanan, PhD, explains what the Facebook study can teach us about the “new normal” in internet research. 

When news of the Facebook contagion study hit, I was presenting a session on research ethics to the VOX-Pol summer school at Dublin City University. I had intended to discuss the Belfast Project as an example of social, behavioral, and educational research gone wrong—indeed, this project had international intrigue, raised serious issues related to participant privacy and consent, and pushed research regulations to their limits. But, suddenly, with news of Facebook’s newsfeed manipulations, there was a hot new case in internet research to consider. The first responders were quick to call attention to the “creepiness” of the study (the name of the article itself might be responsible for the creepiness factor: “Experimental evidence of massive-scale emotional contagion through social networks”); those responses were quickly followed by questions about user/participant consent and the ethics of deception research. Initial reactions seemed to center around several points:
  • This research was definitely “wrong”—individuals should have been told about the research. Deception research is okay, but there are “rules.”
  • Facebook isn’t a regulated entity and doesn’t have to follow “the rules.”
  • Facebook should exercise some ethical considerations in its research—some called for it to “follow the rules,” even if they aren’t what we are used to.
  • Facebook does have rules; they are called “terms of service.” Did Facebook violate something else, like user trust? 
  • Facebook does research pervasively, ubiquitously, and continuously. “Everyone” knows that.
  • Why is this case different? Because the line into an academic, peer-reviewed journal was crossed with—gasp—industry research? 
  • Why didn't an earlier version of the study, in 2012, raise such a fuss?
It has been a few months since the initial fallout from the study, and we have seen interesting afterthoughts and nuanced thinking on the study from the academic press, popular media, tech journals, and more. For example, there was Mary Gray’s panel titled “When Data Science & Human Subject Research Collide: Ethics, Implications, Responsibilities,” and the Proceedings of the National Academy of Sciences published “Learning as We Go: Lessons from the Publication of Facebook’s Social-Computing Research.” There was also a joint reaction from 27 ethicists in Nature, which argued against the backlash in the name of rigorous science. And, to empirically assess if a “similar” population of users—namely, Amazon Turkers—would respond to research ethics violations in ways similar to the subjects of the contagion study, Microsoft’s Ethical Research Project conducted its own study.

I’ve been studying internet research for a long time—at least a long time in internet years, which are quite similar to dog years. I remember the AOL incident and the “Harvard Privacy Meltdown.” Those, and now the contagion study, are internet research ethics lore. They are perfect case studies.

Recently, I had the pleasure of presenting on the contagion study at the Society of Clinical Research Associates’ Social Media Conference. There were some in the room who were unaware of the controversy. Others were of the mind that we should expect this sort of thing. And, some were aghast (my anecdotal results align, more or less, with what Microsoft’s Ethical Research Project systematically found!). And, recently, I talked with yet another reporter, but this one asked a very pointed question: “Why are people so upset?”

One reason is that we have finally come face to face(book) with the reality of algorithmic manipulation—we are now users and research participants, always and simultaneously. If we stopped to think about every instance of such manipulation on any social media platform, our experiences as users would be dramatically different. But it is happening, and our interactions on the internet are the subject of constant experimentation. As OKCupid reminded us: “…guess what, everybody: if you use the internet, you’re the subject of hundreds of experiments at any given time, on every site. That’s how websites work.” Welcome to the era of big data research, composite identities, and the “new” frame of research.

The Facebook study also sheds light on a clash between the “human subject,” as defined in existing research regulations, and the data persona that we develop through our interaction with social media. Traditional research regulations are being challenged by new forms of participant recruitment, engagement, and analyses that defy a strict alignment of regulations to praxis. The current era of internet research will only reveal these clashes more and more, and in many ways, the contagion study is a perfect example of this "new normal" in internet research ethics. I mean a few things by this.

First, we've been seeing a morphing of the research space for many years in the face of social media. It is becoming more and more difficult to isolate "research" from everyday activities and experiences, and it is increasingly challenging to distinguish the researcher from other stakeholders in the research enterprise. Similarly, distinguishing between users, or consumers/customers, and research subjects is becoming more complicated. The research spaces of today’s social media are ubiquitous and pervasive.

Second, for years, the computer sciences and, more specifically, computer security research, have been engaged in various forms of research like the contagion study and have been publishing their results widely. However, these researchers have stayed, in general, outside the fray of human subjects research. The dominance of Facebook is obviously a variable in this case, but, as others have stated, this is certainly neither the first nor the last time this kind of research will be conducted.

Third, this case calls into clear view the importance of considering terms of service (and recognizing their inherent limitations vis-a-vis the regulations and the application of the regulations to third-party controlled research) in relation to "consent." We must acknowledge how differently conceived and understood "consent" is under the framework of human subjects research versus other legal settings. Consider, for instance, that while there are alternatives to research participation, the terms of service acknowledgement is a legal requirement with only one alternative: Don’t use the service. As users agree to the terms of service of various sites, new challenges related to internet research arise. For example, a site may be used as a research venue by a researcher, but the consent conditions are in direct conflict with the site’s terms of service (e.g., research participants are told that their data will be discarded after some time, when the terms of service state otherwise). As our research spaces merge, it is critical to understand this distinction between consent and terms of service and conceptualize a flexible approach that fulfills the letter and spirit of ethics and law.

Fourth, the new normal of internet research is also one of identifiability. From the technical infrastructure to the norms of social media (e.g., the norm of sharing), individuals are intentionally and unintentionally contributing to the sharing and use of data across users, platforms, venues, and domains. Within this framework, we are seeing an increase in non-consensual access to data. Data streams are complex, intermingled, and always in flux, and it is, in IRB lingo, becoming impracticable to seek and/or give consent in this environment (think big data, of course). From these streams, and from these diverse data, we can extrapolate theories, patterns, and correlations to individuals and communities. We, individually and collectively, are identifiable by our data streams, hence the targeted ads, newsfeed content, recommendations, and so on, that determine our online experiences. Our online experiences could be very different, and to this end, researchers are studying the ethics of algorithms very closely now. But, the days of anonymous FTP (file transfer protocol) do seem a thing of the past. Anonymous data is simply not valuable in the new normal of internet research.

The Facebook study also demonstrates the importance of reconsidering group harms, secondary subjects, and research bystanders—the internet of today is not about the individual as much as it is about patterns, groups, connections, relationships, and systems of actors and networks. Within this complex nexus, the notion of consent is changing, as is the notion of “minimal risk.” Our everyday realities now include the risks of data manipulation, data sharing, aggregation, and others. Our consent is more often implicit, and that long-standing notion of practicability is ever more important.

In this nexus, we are finding a space for communication between and among researchers of all walks. But, once again, I am brought back to a most fundamental question in research: “What does the research subject get out of it?”

Where do we, the collective research community, go from here? What do the feds think about this? Facebook issued new research guidelines, but are they enough? Would a joint statement from the Federal Trade Commission and the Office for Human Research Protections be useful? What does this case, and the collision of customers and subjects, mean to them? As we academics scurry for special issues and conference panels on the implications of the contagion study, does anyone else, including industry researchers and the subjects of their research, want to weigh in?

Or, will this simply be cast into the canon of internet research ethics lore? I know that I, for one, am eager to continue the conversation that this study started. To that end, I invite you to join me on Thursday, October 30, for a webinar titled The Future of Internet Research: What We Can Learn from the Facebook Emotional Contagion Study.

Please note: Portions of this post were previously published on the IRB Forum; I thank the many contributors across the internet for their thoughts and insights.

Monday, October 20, 2014

Remembering Felix A. Gyi: A Wise, Generous, and Kind Leader

Felix A. Khin-Maung-Gyi, PharmD, MBA, CIP, RAC, an active and valued leader in the field of human subjects protections and proud family man, passed away on October 2, 2014.

A pharmacist by training, Dr. Gyi received a bachelor’s degree in pharmacy from the University of Maryland School of Pharmacy in 1983, and went on to receive his doctorate in the subject from Duquesne University in 1986. He also obtained a master’s in business administration from Loyola University Maryland (The Baltimore Sun).

Among his many accomplishments, Dr. Gyi founded Chesapeake Research Review LLC, an independent IRB, in 1993, and served as its CEO for more than 20 years. During his time at Chesapeake IRB, Dr. Gyi helped raise important questions about the growing role of central IRBs.

Dr. Gyi was also instrumental in the creation of the Certified IRB Professional (CIP®) credential. Gary Chadwick, PharmD, MPH, another of the credential’s founders, spoke to Dr. Gyi’s contributions: “He was one of the first persons I tapped to get the CIP credential off the ground back in 1999. I saw firsthand his dedication and extensive knowledge, which, when put with his easy going nature and great sense of humor and fun, produced outstanding results and spurred others to excel.”

Dr. Gyi also served alongside Gary Chadwick, Susan Delano, Marianne Elliot, Nancy Hibser, Moira Keane, Susan Kornetsky, Peter Marshall, Daniel Nelson, and Lucy Pearson as inaugural members of the Council for Certification of IRB Professionals. Ms. Delano reflected: “He could always be relied on for his sound judgment and in-depth knowledge of the complex regulations and guidance governing research involving human subjects. He demonstrated a deep commitment to the ethical conduct of research and the welfare of research subjects. His positive attitude, generous spirit and sense of humor were very much appreciated by his fellow Council members and the IRB community.”

Dr. Gyi also offered his expertise on issues related to human subjects protections to the Secretary’s Advisory Committee on Human Research Protections (SACHRP), for which he served as a member from 2003 to 2006. Later, he was also a member of SACHRP’s Subpart A Subcommittee, charged with reviewing and making recommendations related to the regulations found at 45 CFR 46 Subpart A.

Throughout his career, Dr. Gyi was a sought-after speaker both in the United States and abroad. His ability to capture the spirit of human subjects protections served as a passionate reminder to all about the importance of such work. At the 2013 Association of Clinical Research Professionals Global Conference and Exhibition, Dr. Gyi spoke on a panel titled “Should We Exploit Hope to Enhance Enrollment of Oncology Research Participants?” about Nicole Wan, a 19-year-old student at the University of Rochester who died as a result of her participation in a non-therapeutic research study. He lamented:
We failed Nicole because we didn’t stop to think about what was in her best interest. Would it not have been simpler if some nurse had said to the physician: ‘Doc, I’ve seen you do this [procedure] hundreds of times—this is particularly difficult. Let’s not distress the poor lady anymore; give her $75 and let’s call it a day.’ 
But, we didn’t do that, and I believe we failed because we were stuck on the culture of obtaining data, and, to use a phrase that the first [Office of Human Research Protections] director, Greg Koski, used early on in his career, we were stuck on [a] ‘culture of compliance.’ We did not shift to a culture of caring, or a culture of excellence, in a way that [would have allowed] us to do what we need[ed] to do in a societally responsible manner.
Dr. Gyi’s unique ability to elucidate the importance of human subjects protections has ensured that his legacy will endure. The countless individuals who had an opportunity to hear him present over the years were without a doubt struck by the dedication and commitment with which he spoke about human subjects protections.

“Felix was a tireless worker and supporter of human subject protection. He always made himself available for any organization or group that was trying to improve the system,” reflected Dr. Chadwick. Dr. Gyi will also long be remembered for his spirit and attitude, as Dr. Chadwick attested: “Felix was an absolute joy to be around; he always had a kind word and was supportive of family and friends. His generosity was boundless; he personally hosted many a dinner and reception for ‘official functions’ of organizations that didn’t have the funds to support this important professional networking or provide amenities.”

Immediately prior to his death, Dr. Gyi was elected to the PRIM&R Board of Directors. While Dr. Gyi was not aware that he had been elected to the board at the time of his passing, he was aware of his nomination and indicated that he was eager to contribute. The PRIM&R Board and staff were looking forward to welcoming Dr. Gyi to the Board, and we feel a deep sense of sorrow that he will not be joining us come January.

Ethical, humble, and generous, Dr. Gyi was an extraordinary leader, whose impact can be felt in the way the regulations governing the conduct of research with human subjects are interpreted and operationalized throughout the research enterprise. He touched the lives of many in the field and his wisdom, warmth, and humanity will be deeply missed.

Wednesday, October 15, 2014

Big Data, Commercial Research, and the Protection of Subjects

by Elisa A. Hurley, PhD, Executive Director

Much has been written in the past few months—pro and con—about the results of the Facebook emotional contagion study published in June in the Proceedings of the National Academy of Sciences. The study manipulated the News Feeds of 700,000 unknowing Facebook users for a week in January 2012 by adjusting Facebook’s existing algorithm to over-select for either more positive or more negative language in posts. At the end of the week, the results showed that these users were more likely to follow the trend of their manipulated feed, that is, to use more positive or negative language in their own posts, respectively, based on their study grouping. Additionally, the study revealed that lowering the emotional content of posts overall caused users with affected News Feeds to post fewer words in their own statuses.

The public reaction to the revelation of the study in June was swift, loud, and dramatic. I myself was surprised by the uproar and still am not sure what to make of it.

Those who have written about the study in scholarly and popular media have voiced differing opinions about whether adequate informed consent for the study was provided via Facebook’s Terms of Service, as well as whether informed consent was even needed. Further debate has centered on whether the study required IRB review. And still other commentary has zeroed in on the merits of the research itself. As James Grimmelmann, a law professor from the University of Maryland, said (quoted in The Atlantic, June 2014):
[The Facebook study] failed [to meet certain minimum standards]…for a particularly unappealing research goal: We wanted to see if we could make you feel bad without you noticing. We succeeded.
But are these the reasons users have been so incensed? I’m not sure.

Consider that, by its own admission, Facebook routinely manipulates its users’ News Feeds, filtering 1,500 possible news items down to 300 each time a user logs in. Many Facebook users object to this filtering (wanting instead to see everything and choose the content they engage with themselves), but that’s not enough to make the majority of account holders abandon or deactivate their accounts. The algorithm is also used to deliver related advertising content to users, and words in posts are parsed to target that advertising precisely to users’ recent activity: post enough about being on the treadmill, and your ads begin to feature running gear and related products. Yet again, no hue and cry, no mass exodus from Facebook by its billion plus worldwide users.

So it would seem that commercial audience manipulation—the basis for every marketing campaign the world over—is held to a lower standard than the presumably more noble and societally beneficial work of acquiring knowledge for the larger public good. Why is that?

The outcry about the study might be due to several factors: the perceived hubris of publishing a research paper about what perhaps should have remained internal commercial research; the fact that hundreds of thousands of Facebook users are left wondering if they were part of the experiment (as of this writing, there has been no indication that Facebook debriefed the subjects whose News Feeds were affected); or the realization by those users and others that Facebook is able and willing to manipulate its user population in a variety of ways, and for purposes other than product enhancement or selling goods and services. In the words of Robinson Meyer (The Atlantic, June 2014):
And consider also that from this study alone Facebook knows at least one knob to tweak to get users to post more words on Facebook. [Author's emphasis] 
Perhaps we’re so accustomed to commercial manipulation that the instances that occur in our everyday lives—the placement of items on grocery store shelves, the tempo of music in shopping malls during the holidays, commercials for junk food peppered liberally through children’s television programming—don’t register as manipulative. Perhaps, too, we’re so used to them that we don’t even realize the effects they have on us. Some have suggested that the Facebook study and the public reaction to it should make us question our complacency about how our information is provided to and used by commercial entities. As Janet D. Stemwedel noted (Scientific American, June 2014):
Just because a company like Facebook may “routinely” engage in manipulation of a user’s environment, doesn’t make that kind of manipulation automatically ethical when it is done for the purposes of research. Nor does it mean that that kind of manipulation is ethical when Facebook does it for its own purposes. As it happens, peer-reviewed scientific journals, funding agencies, and other social structures tend to hold scientists building knowledge with human subjects to a higher ethical standard than (say) corporations are held to when they interact with humans. This doesn’t necessarily mean our ethical demands of scientific knowledge-builders are too high. Instead, it may mean our ethical demands of corporations are too low. [Author’s emphasis]
I think this is a point well taken. I also think there is an analogy to be drawn here to our collective attitudes about clinical care versus research. Consider the daily interaction between clinical care providers and patients. Patients trust doctors to make treatment decisions via prescriptions, referrals to specialists, and other interventions—some of which present more than minimal risk to a patient’s life or well-being. But not all doctors are equally knowledgeable, up-to-date on the current research, or without their own biases. And many of those decisions are made without any sort of consent process. It’s only when interventions—and sometimes the very same interventions, as in the case of comparative effectiveness research—are presented within the context of a research study that the requirements for informed consent, and indeed an entire set of ethical questions and considerations, get triggered.

There are surely good reasons for this. Whether or not research is always inherently riskier to subjects than care is to patients—and I don’t believe it is—the very fact that one is participating in research, an enterprise whose goal is the creation of generalizable knowledge rather than personalized benefit, seems to me good reason for invoking a fairly robust ethical and regulatory machinery (though I acknowledge that the “machinery” we currently have in place may not be a good fit for much contemporary research). To make the parallel point to Professor Stemwedel’s, the fact that we seem to have different ethical standards or thresholds for research than for practice doesn’t, or doesn’t necessarily, mean that our standards for research are too high. Maybe it should, though, raise the question of whether our ethical standards for clinical practice are too low.

So, as with the Facebook case, I am left wondering, do we unfairly hold research to a higher ethical standard than we do clinical practice, or marketing practice? And if so, are we, as some argue, thereby hindering important scientific progress? Or does this highlight that we are too lax about ethical considerations in other domains? What do you think?

I invite and encourage you to join PRIM&R for a webinar on lessons learned from the Facebook study Thursday, October 30, at 1:00 PM ET. The Future of Internet Research: What We Can Learn from the Facebook Emotional Contagion Study features Elizabeth Buchanan, PhD; Mary L. Gray, PhD; and Christian Sandvig, PhD, who will discuss the study and some of the commonly raised questions pertaining to internet and social media research, including: questions about how to classify social data; the ethical principles that accompany any such classification; how consent and debriefing strategies can be redesigned to scale up beyond the lab or field setting; how minimal risk can be assessed in online settings; and how to determine what constitutes publicly available information, communication, or social interaction versus private information, communication, or social interaction.

Thursday, October 2, 2014

Six Tips to Help You Get the Most Out of the 2014 Advancing Ethical Research Conference

by Meghan Timmel, Communications Coordinator

I recently heard PRIM&R’s annual Advancing Ethical Research (AER) Conference described by a member as the “three days a year where I don’t have to explain what I do for a living.” This sentiment is echoed by many in attendance at our annual conference, which provides all those involved with research ethics and human subject protections a chance to meet, reconnect, and work with others who share their daily experiences, struggles, and questions. It’s helpful. It’s re-energizing. And—perhaps most important—it’s reaffirming.

The conference can be an especially valuable experience for first-time attendees, whether new to the field, to PRIM&R, or just to AER. To help out first-time attendees and others who might find professional conferences a bit intimidating, I’d like to share a few tips to help ensure that your experience at this year’s conference—AER14—is a success:

Plan Ahead. Consider: What do I want to bring back to my office from AER14? You might have more than one goal, but let those goals—from making more contacts to getting face-to-face time with the feds—drive your conference planning. Then, review the conference schedule ahead of time and prioritize those sessions and events that you most want to attend.

Don’t be shy! Ask questions in your sessions. Approach others at networking events. Conference faculty members can offer valuable insight, and AER is full of people who—like you—want to connect with and learn from others in the field. If you’re an introvert, and this fills you with panic, you’re not alone. Take a look at one strategy that can help get you more comfortable with networking successfully. And, with this in mind…

Practice your elevator speech. You will introduce yourself to new people over and over and over again at AER14. Preparing a brief, polished, “Who am I?” statement will make this process so much easier. Not sure where to start? Here are some helpful tips.

Create a clear plan for tackling the exhibit hall. AER14’s Conference Connection will be home to meals, breaks, poster presentations, and exhibitors and supporters. With so much going on in one place, navigating the Conference Connection can be an overwhelming experience, but there’s a great deal of value to be gained by connecting with our exhibitors and supporters and exploring the poster presentations. Avoid any stress by reading the exhibitor and poster presentation lists beforehand and making a list of the companies you want to speak with and the posters you want to see. And don’t forget your map! We will be posting maps online in advance of the meeting, and you will also find them in the conference guide you receive onsite.

Pack wisely. As important as your conference schedule and networking strategy are, the little things can make a huge difference in your conference experience. In particular, don’t forget comfortable shoes and business cards as you pack for AER14.

Follow up! When you exchange business cards, make a few notes about your new contact on the back of their card. Those notes will come in handy when you follow up with all the new contacts you have made after the conference. Connect with the individuals and companies you met, whether via email, phone, or even LinkedIn. Strengthening those connections will help make your next AER Conference even better.

Do you have other tips for AER14 attendees? Questions about the conference? Let us know in the comments!

Still haven’t signed up for AER14? Register today. The regular registration rate ends on November 20.

Friday, September 26, 2014

Diplomacy is a Must: An Interview with Steven O’Geary

by Megan Frame, Membership Coordinator 

Welcome to another installment of our featured member interviews, where we introduce you to our members—individuals who work to advance ethical research on a daily basis. Please read on to learn more about their professional experiences, how membership helps connect them to a larger community, and what goes on behind the scenes in their lives!

Today we’d like to introduce you to Steven “Steve” O’Geary, assistant vice president for research compliance at Oklahoma State University-Stillwater in Stillwater, OK. 

Megan Frame (MF): When and why did you join the field?
Steve O’Geary (SO): I joined the field in January 2002 when my alma mater, the University of Oklahoma (OU), began restructuring its research compliance programs. I left a tenure-track faculty position to return to OU to serve as the director of the Office of Human Research Participant Protection. That was the beginning of an amazing career that led me to the University of California-Berkeley and now Oklahoma State University. Although I miss the challenges of engaging students in active learning, I have never regretted my decision to directly support the mission of IRBs. After 14 years, I remain earnest and diligent in my efforts to safeguard the rights and welfare of human subjects.

MF: What skills are particularly helpful in a job like yours?
SO: Diplomacy is a must. A person needs to possess solid interpersonal skills, especially the ability to communicate effectively with people from diverse backgrounds. You need a robust understanding of all pertinent regulations, and you also need to be patient, detail oriented, organized, and even-tempered. Enthusiasm, passion, and a sense of humor help, too.

MF: Tell us about one or more articles, books, or documents that have influenced your professional life. 
SO: Along with books like Bad Blood: The Tuskegee Syphilis Experiment and Acres of Skin: Human Experiments at Holmesburg Prison, which many of us read when joining the field, there are several books that have influenced me. Some may seem far afield, at least at first glance, but each one helped me to better appreciate the principles of the Belmont Report.

MF: Have any of the PRIM&R talks you’ve attended had a significant impact on your approach to your work? 
SO: Former Office for Human Research Protections (OHRP) director Greg Koski, MD, PhD, spoke at PRIM&R’s 2005 Human Research Protection Program Conference, one of the first PRIM&R conferences I attended. This was shortly after he had stepped down as director and returned to a faculty position at Harvard Medical School. During his presentation, he spoke about the challenges of being the first director of the newly created OHRP. He spoke very eloquently about the moral obligations that accompany research involving humans and about why IRBs matter. He also discussed the sad circumstances surrounding the death of Jesse Gelsinger, the first subject publicly identified as having died as a result of his participation in a gene therapy trial. Dr. Koski struck a chord with me. He helped me realize, even more so than I had previously, the significance of the professional decisions I make every day. I was left with the realization that I was a part of something far greater than myself.

MF: What advice have you found most helpful in your career?
SO: To surround myself with people who drive me forward rather than those who would hold me back. I’ve done this at Oklahoma State, where I have the privilege of working with outstanding professionals who truly function as a team. The sense of community that exists on our campus is stronger than I’ve experienced. Collectively, my colleagues and I have created an office where talented people want to work.

Thank you for being part of the membership community and sharing your story, Steven. We hope to see you at the 2014 AER Conference, where Dr. Koski will be joining us once again as a member of our conference faculty.

If you’d like to learn more about becoming a member, please visit our website today.