Thursday, October 30, 2014

PRIM&R Releases White Paper on the Boundaries Between Research and Practice

by Hugh Tilson, MD, MPH, DrPH, PRIM&R Board Member and Co-Chair of the PRIM&R Project on the Boundary Between Research and Practice

Please join the PRIM&R Project on the Boundary Between Research and Practice in celebrating the release of a white paper titled Health-Related Activities Along the Boundary Between Research and Practice: When to Take Alternate Approaches to Providing Ethical Oversight. The paper, which is now available on PRIM&R’s website, summarizes a three-year effort to develop guidance for persons making decisions regarding the need for ethical review or oversight of health-related activities conducted along the boundary between research and practice.

Whether you work with an IRB or in any one of the four domains of health practice identified in the report that share a sometimes porous boundary with research—namely, (1) innovative medical and surgical clinical interventions, (2) public health practices, (3) community-engaged health activities, or (4) quality assurance/quality improvement activities—you know that determining whether a project requires formal review can be especially challenging. To help you address those challenges, we have developed a set of recommendations regarding ethics review aimed at assisting research professionals who regularly encounter activities that contain elements that resemble experimentation with human beings, but that fall short of the regulatory definition of research involving human subjects.

Drawing on the expertise of stakeholders from a wide range of disciplines and institutional settings, we examined the four domains of health practice mentioned above. In all of these disciplines, situations frequently arise in which practitioners are employing new approaches and strategies, and often—though not always—evaluating how those approaches and strategies work. Health-Related Activities Along the Boundary Between Research and Practice provides guidance for determining whether such activities require formal ethics review, as well as approaches to ethics review other than appeal to an often over-worked and ill-suited-for-the-context IRB.

In conjunction with the release of this white paper, we are also pleased to be able to make available a series of case studies, which were submitted by project participants and PRIM&R members. In their current form, the cases provide examples of issues encountered along the boundary between research and practice. Over the course of the next year, we will work to further organize, format, and refine these cases and develop a framework for analyzing them in relation to the recommendations laid out in our report. We appreciate those who have already submitted cases to this collection, and as we continue to grow this component of our project, we plan to solicit additional cases as well.

There are also a number of sessions planned for the 2014 Advancing Ethical Research Conference that will address “boundary” issues. These sessions have been labeled with special language, and can be identified by performing a keyword search of the interactive conference schedule for the phrase “boundary between research and practice.”

To further share the strategies and suggestions put forward in Health-Related Activities Along the Boundary Between Research and Practice, members of the PRIM&R Project on the Boundary Between Research and Practice will work with the facilitators of those sessions to ensure that they are aware of this resource, so that they, and you, can benefit from the recommendations offered in the report, and be part of our efforts toward continuous quality improvement. As you make use of the white paper and put our recommendations into practice, we encourage you to share your thoughts and suggestions and, of course, send us any case studies that you might have.

Finally, a word about this milestone itself. The PRIM&R Board of Directors is committed to being active in public policy matters, a goal that we achieve, in part, by commenting on regulatory and legislative proposals and issues where we believe that our unique perspective might be helpful. As part of our continued commitment to public policy, we promise to be proactive—to look for areas where the field seems to be in need of guidance or direction and to do our best to help fill that void. As a first step, with input from many of you, we chose to examine the boundaries between research and practice, and we are pleased to make available to you the result of that effort. We hope it does justice to the many remarkable contributions from so many of you, for, after all, your engagement is what makes PRIM&R such a priceless learning community. Thanks for all you do to make it so.

Wednesday, October 29, 2014

Ebola, an (Un)ethical Crisis

by Brandon Brown, MPH, PhD, Assistant Professor and Director of the Global Health Research, Education, and Translation (GHREAT) Initiative, University of California-Irvine Program in Public Health

So far, the current Ebola epidemic has resulted in more than 9,000 cases and 4,500 related deaths in affected West African countries, with additional cases now in the United States and Europe. As we learn more, the Centers for Disease Control and Prevention is updating its website with new information, and we find ourselves inundated with reports from every major news channel, on the internet, and in our favorite print publications. Ebola is an obvious priority in the US, as signaled by the recent appointment of an “Ebola czar” to guide us in our preparations and response to the epidemic.

Still, there are several outstanding ethical questions that have arisen as a result of the Ebola outbreak:
  1. Why did it take so long for the US to respond to the West African epidemic, and are we ethically obligated to provide assistance? What are our obligations to strengthen the health infrastructure in West Africa and help ensure that it’s prepared for future threats?

  2. How do we ensure that fast-tracked trials meet minimum ethical standards and that resulting vaccines are safe and effective? How do we determine who should be prioritized to receive the resulting vaccines?

  3. What do we do about the other epidemics still ravaging the affected countries? Will it be one step forward in preventing Ebola, but one step back for the big three: HIV, tuberculosis, and malaria? Should Ebola be the economic priority at this time?

  4. What happens to the 9,000 non-American, infected individuals who don’t have the luxury of being evacuated from affected countries in hopes of adequate treatment and care?

Some of these issues were eloquently raised in a recent article in the Annals of Internal Medicine. One key to addressing these concerns and reversing the epidemic may be education about the virus. Lack of understanding about how Ebola is spread can result in fear and stigma, especially with a virus as virulent as Ebola and a screening process that can take five days or more. Perhaps new rapid testing technology in development can help with this.

And what about the ethical issues in the United States? With news of the first case of transmission in the US and reports of lax guidelines for the prevention of transmission, hospital preparation and training seem to be at the top of the domestic priority list. For instance, the recent arrival of a New York doctor who treated Ebola patients in Guinea resulted in rules for automatic quarantine of doctors who have worked with Ebola patients abroad. We are still learning how to prevent Ebola transmission, however, and US health care workers may be at the greatest risk of infection. We have an ethical imperative to protect them. Until recently, there were no guidelines on how to use personal protective equipment in a robust and effective manner. Before the first US case, nurses in Nevada sensed this vulnerability and staged a “die-in” on September 24 to gain attention and support for Ebola preparation. Another question on the radar is whether to ban flights from Ebola-affected countries to prevent a domestic outbreak.

Many of these outstanding questions will be discussed at the 2014 Advancing Ethical Research (AER) Conference during a session titled “Ethics During the Time of Ebola,” which will be held on December 6 from 11:15 AM to 12:30 PM. Nancy E. Kass, ScD, Bavon Mupenda, MPH, Aminu A. Yakubu, and I will explore the ethical concerns related to the Ebola epidemic in West Africa and the United States, including the issues faced by health care workers; concerns related to the use of vaccines and experimental drugs during the Ebola outbreak including black market treatments; the potential effect of misunderstandings and stigma about Ebola; and questions related to quarantine rules and resource allocation. I hope that you will consider joining us as we examine the ethical landscape of the current epidemic.

To learn more about the 2014 AER Conference or to register, please visit our website.

Wednesday, October 22, 2014

Coming Face to Face with the New Normal in Internet Research

by Elizabeth Buchanan, PhD, Endowed Chair in Ethics, University of Wisconsin-Stout

On Thursday, October 30, PRIM&R will host a webinar, The Future of Internet Research: What We Can Learn from the Facebook Emotional Contagion Study, which will explore the Facebook emotional contagion study and some of the questions that it raised related to internet and social media research. In advance of that webinar, we are sharing different perspectives on the controversy. Last week, PRIM&R’s executive director, Elisa A. Hurley, PhD, explored the reasons for the public outcry, and in this week’s post, webinar presenter Elizabeth Buchanan, PhD, explains what the Facebook study can teach us about the “new normal” in internet research. 

When news of the Facebook contagion study hit, I was presenting a session on research ethics to the VOX-Pol summer school at Dublin City University. I had intended to discuss the Belfast Project as an example of social, behavioral, and educational research gone badly—indeed, this project had international intrigue, raised serious issues related to participant privacy and consent, and pushed research regulations to their limits. But, suddenly, with news of Facebook’s newsfeed manipulations, there was a hot new case in internet research to consider. The first responders were quick to call attention to the “creepiness” of the study (the name of the article itself might be responsible for the creepiness factor: “Experimental evidence of massive-scale emotional contagion through social networks”); those responses were quickly followed by questions about user/participant consent and the ethics of deception research. Initial reactions seemed to center around several points:
  • This research was definitely “wrong”—individuals should have been told about the research. Deception research is okay, but there are “rules.”
  • Facebook isn’t a regulated entity and doesn’t have to follow “the rules.”
  • Facebook should exercise some ethical considerations in its research—some called for it to “follow the rules,” even if they aren’t what we are used to.
  • Facebook does have rules; they are called “terms of service.” Did Facebook violate something else, like user trust? 
  • Facebook does research pervasively, ubiquitously, and continuously. “Everyone” knows that.
  • Why is this case different? Because the line into an academic, peer-reviewed journal was crossed with—gasp—industry research? 
  • Why didn’t an earlier version of the study, in 2012, raise such a fuss?

It has been a few months since the initial fallout from the study, and we have seen interesting afterthoughts and nuanced thinking on the study from the academic press, popular media, tech journals, and more. For example, there was Mary Gray’s panel titled “When Data Science & Human Subject Research Collide: Ethics, Implications, Responsibilities,” and the Proceedings of the National Academy of Sciences published “Learning as We Go: Lessons from the Publication of Facebook’s Social-Computing Research.” There was also a joint reaction from 27 ethicists in Nature, which argued against the backlash in the name of rigorous science. And, to empirically assess whether a “similar” population of users—namely, Amazon Turkers—would respond to research ethics violations in ways similar to the subjects of the contagion study, Microsoft’s Ethical Research Project conducted its own study.

I’ve been studying internet research for a long time—at least a long time in internet years, which are quite similar to dog years. I remember the AOL incident and the “Harvard Privacy Meltdown.” Those, and now the contagion study, are internet research ethics lore. They are perfect case studies.

Recently, I had the good pleasure of presenting on the contagion study at the Society of Clinical Research Associates’ Social Media Conference. There were some in the room who were unaware of the controversy. Others were of the mind that we should expect this sort of thing. And, some were aghast (my anecdotal results align, more or less, with what Microsoft’s Ethical Research Project systematically found!). And, recently, I talked with yet another reporter, but this one asked a very pointed question: “Why are people so upset?”

One reason is that we have finally come face to face(book) with the reality of algorithmic manipulation—we are now users and research participants, always and simultaneously. If we stopped to think about every instance of such manipulation on any social media platform, our experiences as users would be dramatically different. But it is happening, and our interactions on the internet are the subject of constant experimentation. As OKCupid reminded us: “…guess what, everybody: if you use the internet, you’re the subject of hundreds of experiments at any given time, on every site. That’s how websites work.” Welcome to the era of big data research, composite identities, and the “new” frame of research.

The Facebook study also sheds light on a clash between the “human subject,” as defined in existing research regulations, and the data persona that we develop through our interaction with social media. Traditional research regulations are being challenged by new forms of participant recruitment, engagement, and analyses that defy a strict alignment of regulations to praxis. The current era of internet research will only reveal these clashes more and more, and in many ways, the contagion study is a perfect example of this "new normal" in internet research ethics. I mean a few things by this.

First, we've been seeing a morphing of the research space for many years in the face of social media. It is becoming more and more difficult to isolate "research" from everyday activities and experiences, and it is increasingly challenging to distinguish the researcher from other stakeholders in the research enterprise. Similarly, distinguishing between users, or consumers/customers, and research subjects is becoming more complicated. The research spaces of today’s social media are ubiquitous and pervasive.

Second, for years, the computer sciences and, more specifically, computer security research, have been engaged in various forms of research like the contagion study and have been publishing their results widely. However, these researchers have stayed, in general, outside the fray of human subjects research. The dominance of Facebook is obviously a variable in this case, but, as others have stated, this is certainly not the first, nor the last time this kind of research will be conducted.

Third, this case calls into clear view the importance of considering terms of service (and recognizing their inherent limitations vis-à-vis the regulations and the application of the regulations to third-party controlled research) in relation to “consent.” We must acknowledge how differently “consent” is conceived and understood under the framework of human subjects research versus other legal settings. Consider, for instance, that while there are alternatives to research participation, the terms of service acknowledgement is a legal requirement with only one alternative: Don’t use the service. As users agree to the terms of service of various sites, new challenges related to internet research arise. For example, a site may be used as a research venue by a researcher, but the consent conditions are in direct conflict with the site’s terms of service (e.g., research participants are told that their data will be discarded after some time, when the terms of service state otherwise). As our research spaces merge, it is critical to understand this distinction between consent and terms of service and conceptualize a flexible approach that fulfills the letter and spirit of ethics and law.

Fourth, the new normal of internet research is also one of identifiability. From the technical infrastructure to the norms of social media (e.g., the norm of sharing), individuals are intentionally and unintentionally contributing to the sharing and use of data across users, platforms, venues, and domains. Within this framework, we are seeing an increase in non-consensual access to data. Data streams are complex, intermingled, and always in flux, and it is, in IRB lingo, becoming impracticable to seek and/or give consent in this environment (think big data, of course). From these streams, and from these diverse data, we can extrapolate theories, patterns, and correlations to individuals and communities. We, individually and collectively, are identifiable by our data streams, hence the targeted ads, newsfeed content, recommendations, and so on, that determine our online experiences. Our online experiences could be very different, and to this end, researchers are studying the ethics of algorithms very closely now. But the days of anonymous FTP (file transfer protocol) do seem a thing of the past. Anonymous data is simply not valuable in the new normal of internet research.

The Facebook study also demonstrates the importance of reconsidering group harms, secondary subjects, and research bystanders—the internet of today is not about the individual as much as it is about patterns, groups, connections, relationships, and systems of actors and networks. Within this complex nexus, the notion of consent is changing, as is the notion of “minimal risk.” Our everyday realities now include the risks of data manipulation, data sharing, aggregation, and more. Our consent is more often implicit, and that long-standing notion of practicability is ever more important.

In this nexus, we are finding a space for communication between and among researchers of all walks. But, once again, I am brought back to a most fundamental question in research: “What does the research subject get out of it?”

Where do we, the collective research community, go from here? What do the feds think about this? Facebook issued new research guidelines, but are they enough? Would a joint statement from the Federal Trade Commission and the Office for Human Research Protections be useful? What does this case, and the collision of customers and subjects, mean to them? As we academics scurry for special issues and conference panels on the implications of the contagion study, does anyone else, including industry researchers and the subjects of their research, want to weigh in?

Or will this simply be cast to the canons of internet research ethics lore? I know that I, for one, am eager to continue the conversation that this study started. To that end, I invite you to join me on Thursday, October 30, for a webinar titled The Future of Internet Research: What We Can Learn from the Facebook Emotional Contagion Study.

Please note: Portions of this post were previously published on the IRB Forum; I thank the many contributors across the internet for their thoughts and insights.

Monday, October 20, 2014

Remembering Felix A. Gyi: A Wise, Generous, and Kind Leader

Felix A. Khin-Maung-Gyi, PharmD, MBA, CIP, RAC, an active and valued leader in the field of human subjects protections and proud family man, passed away on October 2, 2014.

A pharmacist by training, Dr. Gyi received a bachelor’s degree in pharmacy from the University of Maryland School of Pharmacy in 1983, and went on to receive his doctorate in the subject from Duquesne University in 1986. He also obtained a master’s in business administration from Loyola University Maryland (The Baltimore Sun).

Among his many accomplishments, Dr. Gyi founded Chesapeake Research Review LLC, an independent IRB, in 1993, and served as its CEO for more than 20 years. During his time at Chesapeake IRB, Dr. Gyi helped raise important questions about the growing role of central IRBs.

Dr. Gyi was also instrumental in the creation of the Certified IRB Professional (CIP®) credential. Gary Chadwick, PharmD, MPH, another of the credential’s founders, spoke to Dr. Gyi’s contributions: “He was one of the first persons I tapped to get the CIP credential off the ground back in 1999. I saw firsthand his dedication and extensive knowledge, which, when put with his easy going nature and great sense of humor and fun, produced outstanding results and spurred others to excel.”

Dr. Gyi also served alongside Gary Chadwick, Susan Delano, Marianne Elliot, Nancy Hibser, Moira Keane, Susan Kornetsky, Peter Marshall, Daniel Nelson, and Lucy Pearson as inaugural members of the Council for Certification of IRB Professionals. Ms. Delano reflected: “He could always be relied on for his sound judgment and in-depth knowledge of the complex regulations and guidance governing research involving human subjects. He demonstrated a deep commitment to the ethical conduct of research and the welfare of research subjects. His positive attitude, generous spirit and sense of humor were very much appreciated by his fellow Council members and the IRB community.”

Dr. Gyi also offered his expertise on issues related to human subjects protections to the Secretary’s Advisory Committee on Human Research Protections (SACHRP), for which he served as a member from 2003 to 2006. Later, he was also a member of SACHRP’s Subpart A Subcommittee, charged with reviewing and making recommendations related to the regulations found at 45 CFR 46 Subpart A.

Throughout his career, Dr. Gyi was a sought-after speaker both in the United States and abroad. His ability to capture the spirit of human subjects protections served as a passionate reminder to all about the importance of such work. At the 2013 Association of Clinical Research Professionals Global Conference and Exhibition, Dr. Gyi spoke on a panel titled “Should We Exploit Hope to Enhance Enrollment of Oncology Research Participants?” about Nicole Wan, a 19-year-old student at the University of Rochester who died as a result of her participation in a non-therapeutic research study. He lamented:
We failed Nicole because we didn’t stop to think about what was in her best interest. Would it not have been simpler if some nurse had said to the physician: ‘Doc, I’ve seen you do this [procedure] hundreds of times—this is particularly difficult. Let’s not distress the poor lady anymore; give her $75 and let’s call it a day.’ 
But, we didn’t do that, and I believe we failed because we were stuck on the culture of obtaining data, and, to use a phrase that the first [Office of Human Research Protections] director, Greg Koski, used early on in his career, we were stuck on [a] ‘culture of compliance.’ We did not shift to a culture of caring, or a culture of excellence, in a way that [would have allowed] us to do what we need[ed] to do in a societally responsible manner.
Dr. Gyi’s unique ability to elucidate the importance of human subjects protections has ensured that his legacy will endure. The countless individuals who had an opportunity to hear him present over the years were without a doubt struck by the dedication and commitment with which he spoke about human subjects protections.

“Felix was a tireless worker and supporter of human subject protection. He always made himself available for any organization or group that was trying to improve the system,” reflected Dr. Chadwick. Dr. Gyi will also long be remembered for his spirit and attitude, as Dr. Chadwick attested: “Felix was an absolute joy to be around–he always had a kind word and was supportive of family and friends. His generosity was boundless–he personally hosted many a dinner and reception for ‘official functions’ of organizations that didn’t have the funds to support this important professional networking or provide amenities.”

Immediately prior to his death, Dr. Gyi was elected to the PRIM&R Board of Directors. While Dr. Gyi was not aware that he had been elected to the board at the time of his passing, he was aware of his nomination and indicated that he was eager to contribute. The PRIM&R Board and staff were looking forward to welcoming Dr. Gyi to the Board, and we feel a deep sense of sorrow that he will not be joining us come January.

Ethical, humble, and generous, Dr. Gyi was an extraordinary leader, whose impact can be felt in the way the regulations governing the conduct of research with human subjects are interpreted and operationalized throughout the research enterprise. He touched the lives of many in the field and his wisdom, warmth, and humanity will be deeply missed.

Wednesday, October 15, 2014

Big Data, Commercial Research, and the Protection of Subjects

by Elisa A. Hurley, PhD, Executive Director

Much has been written in the past few months—pro and con—about the results of the Facebook emotional contagion study published in June in the Proceedings of the National Academy of Sciences. The study manipulated the News Feeds of 700,000 unknowing Facebook users for a week in January 2012 by adjusting Facebook’s existing algorithm to over-select for either more positive or more negative language in posts. At the end of the week, the results showed that these users were more likely to follow the trend of their manipulated feed, that is, to use more positive or negative language in their own posts, respectively, based on their study grouping. Additionally, the study revealed that lowering the emotional content of posts overall caused users with affected News Feeds to post fewer words in their own statuses.

The public reaction to the revelation of the study in June was swift, loud, and dramatic. I myself was surprised by the uproar and still am not sure what to make of it.

Those who have written about the study in scholarly and popular media have voiced differing opinions about whether adequate informed consent for the study was provided via Facebook’s Terms of Service, as well as whether informed consent was even needed. Further debate has centered on whether the study required IRB review. And still other commentary has zeroed in on the merits of the research itself. As James Grimmelmann, a law professor from the University of Maryland, said (quoted in The Atlantic, June 2014):
[The Facebook study] failed [to meet certain minimum standards]…for a particularly unappealing research goal: We wanted to see if we could make you feel bad without you noticing. We succeeded.
But are these the reasons users have been so incensed? I’m not sure.

Consider that, by its own admission, Facebook routinely manipulates its users’ News Feeds, filtering 1,500 possible news items down to 300 each time a user logs in. Many Facebook users object to this filtering (wanting instead to see everything and choose the content they engage with themselves), but that’s not enough to make the majority of account holders abandon or deactivate their accounts. The algorithm is also used to deliver related advertising content to users, and words in posts are parsed to target that advertising precisely to users’ recent activity: post enough about being on the treadmill, and your ads begin to feature running gear and related products. Yet again, no hue and cry, no mass exodus from Facebook by its billion-plus worldwide users.

So it would seem that commercial audience manipulation—the basis for every marketing campaign the world over—is held to a lower standard than the presumably more noble and societally beneficial work of acquiring knowledge for the larger public good. Why is that?

The outcry about the study might be due to several factors: the perceived hubris of publishing a research paper about what perhaps should have remained internal commercial research; the fact that hundreds of thousands of Facebook users are left wondering if they were part of the experiment (as of this writing, there has been no indication that Facebook debriefed the subjects whose News Feeds were affected); or the realization by those users and others that Facebook is able and willing to manipulate its user population in a variety of ways, and for purposes other than product enhancement or selling goods and services. In the words of Robinson Meyer (The Atlantic, June 2014):
And consider also that from this study alone Facebook knows at least one knob to tweak to get users to post more words on Facebook. [Author's emphasis] 
Perhaps we’re so accustomed to commercial manipulation that the instances that occur in our everyday lives—the placement of items on grocery store shelves, the tempo of music in shopping malls during the holidays, commercials for junk food peppered liberally through children’s television programming—don’t register as manipulative. Perhaps, too, we’re so used to them that we don’t even realize the effects they have on us. Some have suggested that the Facebook study and the public reaction to it should make us question our complacency about how our information is provided to and used by commercial entities. As Janet D. Stemwedel noted (Scientific American, June 2014):
Just because a company like Facebook may “routinely” engage in manipulation of a user’s environment, doesn’t make that kind of manipulation automatically ethical when it is done for the purposes of research. Nor does it mean that that kind of manipulation is ethical when Facebook does it for its own purposes. As it happens, peer-reviewed scientific journals, funding agencies, and other social structures tend to hold scientists building knowledge with human subjects to a higher ethical standard than (say) corporations are held to when they interact with humans. This doesn’t necessarily mean our ethical demands of scientific knowledge-builders are too high. Instead, it may mean our ethical demands of corporations are too low. [Author’s emphasis]
I think this is a point well taken. I also think there is an analogy to be drawn here to our collective attitudes about clinical care versus research. Consider the daily interaction between clinical care providers and patients. Patients trust doctors to make treatment decisions via prescriptions, referrals to specialists, and other interventions—some of which present more than minimal risk to a patient’s life or well-being. But not all doctors are equally knowledgeable, up-to-date on the current research, or without their own biases. And many of those decisions are made without any sort of consent process. It’s only when interventions—and sometimes the very same interventions, as in the case of comparative effectiveness research—are presented within the context of a research study that the requirements for informed consent, and indeed an entire set of ethical questions and considerations, get triggered.

There are surely good reasons for this. Whether or not research is always inherently riskier to subjects than care is to patients—and I don’t believe it is—the very fact that one is participating in research, an enterprise whose goal is the creation of generalizable knowledge rather than personalized benefit, seems to me good reason for invoking a fairly robust ethical and regulatory machinery (though I acknowledge that the “machinery” we currently have in place may not be a good fit for much contemporary research). To make the parallel point to Professor Stemwedel’s, the fact that we seem to have different ethical standards or thresholds for research than for practice doesn’t, or doesn’t necessarily, mean that our standards for research are too high. Maybe it should, though, raise the question of whether our ethical standards for clinical practice are too low.

So, as with the Facebook case, I am left wondering: do we unfairly hold research to a higher ethical standard than we do clinical practice, or marketing practice? And if so, are we, as some argue, thereby hindering important scientific progress? Or does this highlight that we are too lax about ethical considerations in other domains? What do you think?

I invite and encourage you to join PRIM&R for a webinar on lessons learned from the Facebook study on Thursday, October 30, at 1:00 PM ET. The Future of Internet Research: What We Can Learn from the Facebook Emotional Contagion Study features Elizabeth Buchanan, PhD; Mary L. Gray, PhD; and Christian Sandvig, PhD, who will discuss the study and some of the commonly raised questions pertaining to internet and social media research, including: questions about how to classify social data; the ethical principles that accompany any such classification; how consent and debriefing strategies can be redesigned to scale up beyond the lab or field setting; how minimal risk can be assessed in online settings; and how to determine what constitutes publicly available information, communication, or social interaction versus private information, communication, or social interaction.