4 Ethics
- Distinguish between consequentialist, deontological, and virtue ethics frameworks
- Identify key ethical issues in performing experimental research
- Discuss ethical responsibilities in analysis and reporting of research
- Describe ethical arguments for open science practices
The fundamental thesis of this book is that experiments are the way to estimate causal effects, which are the foundations of theory. And as we discussed in chapter 1, experiments allow for strong causal inferences because of two ingredients: a manipulation—in which the experimenter changes the world in some way—and randomization. Put another way, experimenters learn about the world by randomly deciding to do things to their participants! Is that even allowed?1
1 We have placed this chapter in the “Foundations” part of the book because we think it’s critical to start the conversation about your ethical responsibilities as an experimentalist and researcher even before you start planning a study. We’ll come back to the ethical frameworks we describe here in chapter 12, which deals specifically with participant recruitment and the informed consent process.
Experimental research raises a host of ethical issues that deserve consideration. What can and can’t we do to participants in an experiment, and what considerations do we owe to them by virtue of their decision to participate? To facilitate our discussion of these issues, we start by briefly introducing the standard philosophical frameworks for ethical analysis. We then use those frameworks to discuss problems of experimental ethics, first from the perspective of participants and then from the perspective of the scientific ecosystem more broadly. We end with an ethical argument for transparency.
Shock treatment
More than a decade after surviving prisoners were liberated from the last concentration camp, Adolf Eichmann, one of the Holocaust’s primary masterminds, was tried for his role in the mass genocide (Baade 1961). While reflecting on his rationale for forcibly removing, torturing, and eventually murdering millions of Jews, an unrepentant Eichmann claimed that he was “merely a cog in the machinery that carried out the directives of the German Reich” and therefore was not directly responsible (Kilham and Mann 1974). This startling admission gave a young researcher an interesting idea: “Could it be that Eichmann and his million accomplices in the Holocaust were just following orders? Could we call them all accomplices?” (Milgram 1974, 123).
Stanley Milgram aimed to make a direct test of whether people would comply under the direction of an authority figure no matter how uncomfortable or harmful the outcome. He invited participants into the laboratory to serve as a teacher for an activity (Milgram 1963). Participants were told that they were to administer electric shocks of increasing voltage to another participant, the student, in a nearby room whenever the student provided an incorrect response. In reality, there were no shocks, and the student was an actor who was in on the experiment and only pretended to be in pain when the “shocks” were administered. Participants were encouraged to continue administering shocks despite clearly audible pleas from the student to stop. In one of Milgram’s studies, nearly 65% of participants administered the maximum voltage to the student.
This deeply unsettling result has become, as Ross and Nisbett (1991, 55) say, “part of our society’s shared intellectual legacy,” informing our scientific and popular conversation in myriad different ways. At the same time, modern reanalyses of archival materials from the study have called into question whether the deception in the study was effective, casting doubt on its central findings (Perry et al. 2020).
Regardless of its scientific value, Milgram’s study blatantly violates modern ethical norms around the conduct of research. Among other violations, the procedure involved coercion that undermined participants’ right to withdraw from the experiment. This coercion appeared to have negative consequences: Milgram noted that a number of his participants displayed anxiety symptoms and nervousness. This observation was distressing and led to calls for this sort of research to be declared unethical (e.g., Baumrind 1964). The ethical issues surrounding Milgram’s study are complex, and some are relatively specific to the particulars of his study and moment (Miller 2009). But the controversy around the study was an important part of convincing the scientific community to adopt stricter policies that protect study participants from unnecessary harm.
4.1 Ethical frameworks
Was Milgram’s experiment (see case study) really ethically wrong—in the sense that it should not have been performed? You might have the intuition that it was unethical due to the harms that the participants experienced or the way they were (sometimes) deceived by the experimenter. Others might consider arguments in defense of the experiment, perhaps that what we learned from it was sufficiently valuable to justify its being conducted. Beyond simply arguing back and forth, how could we approach this issue more systematically?
Ethical frameworks offer tools for analyzing such situations. In this section, we’ll introduce three of the most commonly used frameworks and discuss how each of these could be applied to Milgram’s paradigm.
4.1.1 Consequentialist theories
Ethical theories provide principles for what constitutes good actions. The simplest theory of good actions is the consequentialist theory: good actions lead to good results. The best-known consequentialist position is utilitarianism, most famously articulated by the philosopher John Stuart Mill (Flinders 1992). This view emphasizes decision-making based on the “greatest happiness principle,” the idea that an action should be considered morally good based on the degree of happiness or pleasure people experience because of it, and morally bad based on the degree of unhappiness or pain it causes (Mill 1859).
A consequentialist analysis of Milgram’s study considers the study’s negative and positive effects and weighs these against one another. Did the study cause harm to its participants? On the other hand, did the study lead to knowledge that prevented harm or caused positive benefits?
Consequentialist analysis can be a straightforward way to weigh the risks and benefits of a particular action, but in the research setting it is unsatisfying. Many horrifying experiments would be licensed by a consequentialist analysis and yet feel untenable to us. Imagine that a researcher forced you to undergo a risky and undesired medical intervention because the resulting knowledge might benefit thousands of others. This experiment seems like precisely the kind of thing our ethical framework should rule out!
4.1.2 Deontological approaches
Harmful research performed against participants’ will or without their knowledge is repugnant; the Tuskegee Syphilis Study, discussed in the case study below, is a horrifying example of such research. In light of such cases, a few rules seem obvious, for example: “Researchers must ask participants’ permission before conducting research on them.” Principles like this one are now formalized in all ethical codes for research. They exemplify an approach called deontological (or duty-based) ethics.
Deontology emphasizes the importance of taking ethically permissible actions, regardless of their outcome (Biagetti, Gedutis, and Ma 2020). In general, university ethics boards take a deontological approach to ethics (Boser 2007). In the context of research, four primary principles apply:
Respect for autonomy. This principle requires that people participating in research studies can make their own decisions about their participation, and that those with diminished autonomy (e.g., children) should receive equal protections (Beauchamp and Childress 2001). Respecting someone’s autonomy also means providing them with all the information they need to make an informed decision about whether to participate in a research study (giving consent) and giving them further context about the study they have participated in after it is done (debriefing).
Beneficence. This principle means that researchers are obligated to protect the well-being of participants for the duration of the study. Beneficence has two parts. The first is to do no harm. Researchers must take steps to minimize the risks to participants and to disclose any known risks at the outset. If risks are discovered during participation, researchers must notify participants of their discovery and make reasonable efforts to mitigate these risks, even if that means stopping the study altogether. The second is to maximize potential benefits to participants.2
Nonmaleficence. This principle is similar to beneficence (in fact, beneficence and nonmaleficence were a single principle when they were first introduced in the Belmont Report, which we’ll discuss later) but differs in its emphasis on doing/causing no harm. In general, harm is bad—but deontology is about intent, not impact, so harm is sometimes warranted when the intent is morally good. For example, administering a vaccine may cause some discomfort and pain, but the intent is to protect the patient from contracting a deadly disease in the future. The harm is justifiable under this framework.
Justice. This principle means that both the benefits and risks of a study should be equally distributed among all participants. For example, participants should not be systematically assigned to one condition over another based on features of their identity such as socioeconomic status, race and ethnicity, or gender.
2 In practice, this doesn’t mean compensating participants with exorbitant amounts of money or gifts, which might cause other issues, like exerting an undue influence on low-income participants to participate. Instead “maximizing benefits” is interpreted as identifying all possible benefits of participation in the research and making them available where possible.
Analyzed from the perspective of these principles, Milgram’s study raises several red flags. First, Milgram’s study reduced participants’ autonomy by making it difficult for them to voluntarily end their involvement (participants were told up to four times to continue administering shocks even after they expressed clear opposition). Second, the paradigm was designed in a way that it was likely to cause harm to its participants by putting them in a very stressful situation. Further, Milgram’s study may have induced unnecessary harm on certain participants by failing to screen participants for existing mental health issues before beginning the session.
Was Milgram justified?
Was the harm done in Milgram’s experiment justifiable given that it informed our understanding of obedience and conformity? We can’t say for sure. What we can say is that in the 10 years following the publication of Milgram’s study, the number of papers on obedience increased and the nature of these papers expanded from a focus on religious conformity to a broader interest in social conformity, suggesting that Milgram changed the direction of this research area. Additionally, in a follow-up that Milgram conducted, he reported that 84% of participants in the original study said they were happy to have been involved (Milgram 1974). On the other hand, given concerns about validity in the original study, perhaps its influence on the field was not warranted (Perry et al. 2020).
Many researchers believe there was no ethical way to conduct Milgram’s experiment while also protecting the integrity of the research goals, but some have tried. One study recreated a portion of the original experiment, with some critical changes (Burger 2009). Before enrolling in the study, participants completed both a phone screening for mental health concerns, addiction, or extreme trauma, and a formal interview with a licensed clinical psychologist, who screened for signs of depression or anxiety. Those who passed these assessments were invited into the lab for a Milgram-type learning study. Experimenters clearly explained that participation was voluntary and that the decision to participate could be reversed at any point, either by the participant themselves or by a trained clinical psychologist who was present for the duration of the session. Additionally, shock administration never exceeded 150 volts (compared to 450 volts in the original study) and experimenters debriefed participants extensively following the end of the session. This modified replication found patterns of obedience similar to those Milgram observed, and one year later, no participants expressed any indication of stress or trauma associated with their involvement in the study.
4.1.3 Virtue-based approaches
A final way that we can approach ethical dilemmas is through a virtue framework. A virtue is a trait, disposition, or quality that is thought to be a moral foundation (Annas 2006). Virtue ethics suggests that people can learn to be virtuous by observing those actions in others they admire (Morris and Morris 2016). Proponents of virtue ethics say this works for two reasons: (1) people are generally good at recognizing morally good traits in others and (2) people receive some fulfillment from living virtuously. Virtue ethics differs from deontology and utilitarianism because it focuses on a person’s character rather than on the nature of a rule or the consequences of an action.
From a research perspective, virtue ethics tells us that in order to behave virtuously, we must make decisions that consider the context surrounding the experiment (Dillern 2021). In other words, researchers should evaluate how their studies might influence a participant’s behaviors, especially when those behaviors deviate from typical expectations. This process is also meant to be adaptive, meaning that researchers must be vigilant both about the changing mental states of their participants during the experimental session and about whether the planned procedure remains acceptable.
How can we apply this ethical framework to Milgram’s experiment? Many virtue ethicists would probably conclude that Milgram’s approach was neither appropriate (for participants) nor adaptive. Upon noticing increasing levels of participant distress, an experimenter following the virtue ethics framework should have chosen to end the session early or—even better—to have minimized participant distress from the beginning.
4.2 Ethical responsibilities to research participants
Milgram’s shock experiment was just one of dozens of unethical human subjects studies that garnered the attention and anger of the public in the United States. In 1978, the US National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research released the Belmont Report, which described protections for the rights of human subjects participating in research studies (Adashi, Walters, and Menikoff 2018). Perhaps the most important message found in the report was the notion that “investigators should not have sole responsibility for determining whether research involving human subjects fulfills ethical standards. Others, who are independent of the research, must share the responsibility.” In other words, ethical research requires both transparency and external oversight.
4.2.1 Institutional review boards
The creation of institutional review boards (IRBs) in the United States was an important result of the Belmont Report. While regulatory frameworks and standards vary across national boundaries, ethical review of research is ubiquitous across countries. In what follows, we focus on the US regulatory framework as it’s been a model for other ethical review systems, but we use the clearer label “ethics boards” for IRBs.
An ethics board is a committee of people who review, evaluate, and monitor human subjects research to make sure that participants’ rights are protected when they participate in research (Oakes 2002). Ethics boards are local; every organization that conducts human subjects or animal research is required to have its own ethics board or to contract with an external one. If you are based at a university, yours likely has its own, and its members are probably a mix of scientists, doctors, professors, and community residents.3
3 The local control of ethics boards can lead to very different practices in ethical review across institutions, which is obviously inconsistent with the idea that ethical standards should be uniform! In addition, critics have wondered about the structural issue that institutional ethics boards have an incentive to decrease liability for the institution, while private boards have an incentive to provide approvals to the researchers who pay them (Lemmens and Freedman 2000).
When researchers have a research question they are interested in pursuing with human subjects, they must receive approval from their local ethics board before beginning any data collection. The ethics board reviews each study to make sure that:
A study poses no more than minimal risk to participants. This means the anticipated harm or discomfort to the participant is not greater than what would be experienced in everyday life. It is possible to perform a study that poses greater than minimal risk, but it requires additional monitoring to detect any adverse events that may occur.
Researchers obtain informed consent from participants before collecting any data. This requirement means experimenters must disclose all potential risks and benefits so that participants can make an informed decision about whether or not to participate in the study. Importantly, informed consent does not stop after participants sign a consent form. If researchers discover any new potential risks or benefits along the way, they must disclose these discoveries to all participants (see chapter 12).
Sensitive information remains confidential. Although regulatory frameworks vary, researchers typically have an obligation to their participants to protect all identifying information recorded during the study (see chapter 13).
Participants are recruited equitably and without coercion. Before ethics boards became standard, researchers often coercively recruited marginalized and vulnerable populations to test their research questions, rather than making participation in research studies voluntary and providing equitable access to the opportunity to participate.
The Tuskegee Syphilis Study
In 1929, the United States Public Health Service (USPHS) was perplexed by the effects of syphilis in Macon County, Alabama, an area with an overwhelmingly Black population (Brandt 1978). Syphilis is a sexually transmitted bacterial infection that can either be in a visible and active stage or in a latent stage. At the time of the study’s inception, roughly 36% of Tuskegee’s adult population had developed some form of syphilis, one of the highest infection rates in America (White 2006).
The USPHS recruited 400 Black males between 25 and 60 years of age with latent syphilis, along with 200 Black males without the infection to serve as a control group (Brandt 1978). The USPHS sought the help of the Macon County Board of Health to recruit participants with the promise that treatment would be provided for community members with syphilis. The researchers sought poor, illiterate Black people and, instead of telling them that they were being recruited for a research study, merely informed them that they would be treated for “bad blood.”
Because the study was interested in tracking the natural course of latent syphilis without any medical intervention, the USPHS had no intention of providing any care to its participants. To placate participants, the USPHS distributed an ointment that had not been shown to be effective in treating syphilis, along with only small doses of a medication actually used to treat the infection. In addition, participants underwent a spinal tap, which was presented to them as another form of therapy and their “last chance for free treatment.”
By 1955, just over 30% of the original participants had died from syphilis complications. It took until the 1970s for the final report to be released and for the (lack of) treatment to end. In total, 128 participants died of syphilis or complications from the infection, 40 wives became infected, and 19 children were born with the infection (Katz and Warren 2011). The damage rippled through two generations, and many never actually learned what had been done to them.
The Tuskegee study violates nearly every guideline for research described above; indeed, in its many horrifying violations of research participants’ agency, it provided a blueprint for future regulation aimed at preventing any aspect of it from being repeated. Investigators did not obtain informed consent. Participants were not made aware of all known risks and benefits involved with their participation. Instead, they were deceived by researchers who led them to believe that diagnostic and invasive exams were directly related to their treatment.
Perhaps most shockingly, participants were denied appropriate treatment following the discovery that penicillin was effective at treating syphilis (Mahoney, Arnold, and Harris 1943). The USPHS requested that medical professionals overseeing their care outside of the research study not offer treatment to participants so as to preserve the study’s methodological integrity. This decision violated participants’ right to equal access to care, which should have taken precedence over the results of the study.
Finally, recruitment was both imbalanced and coercive. Not only were participants selected from the poorest of neighborhoods in the hopes of finding vulnerable populations with little agency, but they were also bribed with empty promises of treatment and a monetary incentive (payment for burial fees, a financial obstacle for many sharecroppers and tenant farmers at the time).
4.2.2 Risks and benefits
Imagine that you were approached about participating in a research study at your local university. You were only told you would be paid $25 in exchange for completing an hour of cognitive tasks on a computer. Now imagine that halfway through the session, the experimenter revealed they would also need to collect a blood sample, “which should only take a couple of minutes and which will really help the research study.” Would you agree to the sample? Would you feel uncomfortable in any way?
Participants need to understand the risks and benefits of participation in an experiment before they give consent. To do otherwise compromises their autonomy (a key deontological principle). In the case of this hypothetical experiment, a new and unexpected invasive component of an experiment is coercive: participants would have to choose to forfeit their expected compensation to opt out. They also might feel that they have been deceived by the experimenter.
In human subjects research, deception is a specific technical term that refers to cases in which (1) experimenters withhold information about the study’s goals or intentions, (2) experimenters hide their true identity (such as when using actors), (3) some aspects of the research are under- or overstated to conceal information, or (4) participants receive any false or misleading information. Use of deception requires special consideration from a human subjects perspective (Kelman 2017; Baumrind 1985).
Even assuming they are disclosed properly without coercion or deception, the risks and benefits of a study must be assessed from the perspective of the participant, not the experimenter. By doing so, we allow participants to make an informed choice. In the case of the blood sample, the risks to the participant were not disclosed and the benefits were stated in terms of the research project (and the experimenter).
The benefits of participation in research can be either direct or indirect, and it is important to specify which type participants may receive. While some clinical studies and interventions may offer some direct benefit due to participation, many of the benefits of basic science research are indirect. Both have their place in science, but participants must ultimately determine the degree to which each type of benefit motivates their own involvement in a study (Shatz 1986).
4.3 Ethical responsibilities in analysis and reporting
What data?
Dutch social psychologist Diederik Stapel contributed to more than 200 articles on social comparison, stereotype threat, and discrimination, many published in the most prestigious journals. Stapel reported that affirming positive personal qualities buffered against dangerous social comparison, that product advertisements related to a person’s attractiveness changed their sense of self, and that exposure to intelligent in-group members boosted a person’s performance on future tasks (Stapel and Linde 2012; Trampe, Stapel, and Siero 2011; Gordijn and Stapel 2006). These findings were fresh and noteworthy at the time of publication, and Stapel’s papers were cited thousands of times. The only problem? Stapel’s data were made up.
Stapel has admitted that when he first began fabricating data, he would make small tweaks to a few data points (Stapel 2012). Changing a single number here and there would turn a flat study into an impressive one. Having achieved comfortable success (and having aroused little suspicion from journal editors and others in the scientific community), Stapel eventually began creating entire data sets and passing them off as his own. Several colleagues began to grow skeptical of his overwhelming success, however, and brought their concerns to the Psychology Department at Tilburg University. By the time the investigation of his work concluded, 58 of Stapel’s papers had been retracted, meaning that the publishing journals withdrew the papers after discovering that their contents were invalid.
Everyone agrees that Stapel’s behavior was deeply unethical. But should we consider cases of falsification and fraud to be different in kind from other ethical violations in research? Or is fraud merely the endpoint in a continuum that might include other practices like \(p\)-hacking? Lawyers and philosophers grapple with the precise boundary between sloppiness and neglect, and it can be difficult to know which one is at play when a typo or coding mistake changes the conclusion of a scientific paper. Similarly, if a researcher engages in so-called questionable research practices, at what point should they be considered to have made an ethical violation as opposed to simply performing their research poorly?
The ethical frameworks above give us tools for thinking about this question. For the consequentialist, sloppy science can lead to good outcomes for the scientist (quicker publication) but bad outcomes for the rest of the scientific community, who have to waste time and effort on papers that may not be correct. For the deontologist, the scientist’s intention plays a key role: it is not a generally acceptable principle to knowingly use substandard practices. And for the virtue ethicist, sloppiness is not a morally good trait. On all of these analyses, researchers have a duty to pursue their work carefully.
As scientists, we not only have a responsibility to participants; we are also responsible for what we do with our data and for the kinds of conclusions we draw. Cases like Stapel’s (see the accident report above) seem stunning, but they are part of a continuum. Codes of professional ethics for organizations like the American Psychological Association encourage researchers to take care in the management and analysis of their data so as to avoid errors and misstatements (American Psychological Association 2017).
Researchers also have an obligation not to suppress findings based on their own beliefs about the right answer. One unfortunate way that this suppression can happen is when researchers selectively report their research, leading to publication bias, as you learned in chapter 3. Researchers’ own biases can be another (invalid) rationale for not publishing: it’s also an ethical violation to suppress findings that contradict your theoretical commitments.
Importantly, researchers don’t have an obligation to publish everything they do. Publishing in the peer-reviewed literature is difficult and time-consuming. There are plenty of reasons not to publish an experimental finding! For example, there’s no reason to publish a result if you believe it is truly uninformative because of a confound in the experimental design. You also aren’t typically committing an ethical violation if you decide to quit your job in research and so you don’t publish a study from your dissertation.4 The primary ethical issue arises when you use the result of a study—and how it relates to your own beliefs or to a threshold like \(p < 0.05\)—to decide whether to publish it or not.
4 On the other hand, if your dissertation contains the cure to a terrible disease, you do have a duty to publish it!
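To see concretely why this selection rule distorts the scientific record, here is a minimal simulation sketch (assuming Python with numpy and scipy available; the sample sizes and effect size are illustrative choices, not drawn from any study discussed above). If only studies that reach \(p < 0.05\) are treated as publishable, the "published" effect size estimates systematically overstate a small true effect.

```python
# Minimal sketch: selective publication based on p < 0.05 inflates
# the apparent effect size relative to the true effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_d = 0.2             # small true effect (Cohen's d), illustrative
n_per_group = 30         # participants per condition, illustrative
n_studies = 5000         # hypothetical literature of identical studies

all_d, published_d = [], []
for _ in range(n_studies):
    treatment = rng.normal(true_d, 1.0, n_per_group)
    control = rng.normal(0.0, 1.0, n_per_group)
    pooled_sd = np.sqrt((treatment.var(ddof=1) + control.var(ddof=1)) / 2)
    d = (treatment.mean() - control.mean()) / pooled_sd   # observed effect size
    _, p = stats.ttest_ind(treatment, control)            # two-sample t-test
    all_d.append(d)
    if p < 0.05:                                           # selective publication rule
        published_d.append(d)

print(f"True effect size:                 {true_d:.2f}")
print(f"Mean estimate, all studies:       {np.mean(all_d):.2f}")
print(f"Mean estimate, 'published' only:  {np.mean(published_d):.2f}")
```

Under these particular assumptions, the average across all simulated studies tracks the true effect, while the average across only the "published" (significant) studies comes out several times larger; that distortion is what makes significance-contingent publication an ethical as well as a statistical problem.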
As we’ll discuss again and again in this book, the preparation of research reports must also be done with care and attention to detail (see chapter 14). Sloppiness in writing up results can lead to imprecise or overly broad claims, and if that sloppiness extends to the reporting of data and analyses, it may lead to irreproducibility as well.
Further, professional ethics dictates that published contributions to the literature be original. In general, the text of a paper must not be plagiarized (copied) from other reports, whether written by you or by another author, without attribution. Copying from others outside of a direct, attributed quotation is obviously an ethical violation because it gives you credit for text written by someone else. But self-plagiarism is also not acceptable—it is a violation to receive credit multiple times for the same product.5
5 Standards on this issue differ from field to field. Our sense is that the rule on self-plagiarism applies primarily to duplication of content between journal papers. So, for example, barring any specific policy of the funder or journal, it is acceptable to use text from one of your own grant proposals in a journal paper. It is also typically acceptable to reuse text from a conference abstract or preregistration (that you wrote, of course) when you prepare a journal paper.
4.4 Ethical responsibilities to the scientific community
The open science principles that we will describe throughout this book are not only important correctives to issues of reproducibility and replicability; they are also ethical duties.
The sociologist Robert Merton described a set of norms that science is assumed to follow: communism—that scientific knowledge belongs to the community; universalism—that the validity of scientific results is independent of the identity of the scientists; disinterestedness—that scientists and scientific institutions act for the benefit of the overall enterprise; and organized skepticism—that scientific findings must be critically evaluated (Merton 1979).
If the products of science aren’t open, it is very hard to be a scientist by Merton’s definition. To contribute to the communal good, papers need to be openly available. And to be subject to skeptical inquiry, experimental materials, research data, analytic code, and software must all be available so that analytic calculations can be verified and experiments can be reproduced. Otherwise, you have to accept arguments on authority rather than by virtue of the materials and data.
Openness is not only definitionally part of the scientific enterprise; it’s also good for science and individual scientists (Gorgolewski and Poldrack 2016). Publications that are open access are cited more (Eysenbach 2006; Gargouri et al. 2010). Open data also increases the potential for citation and reuse, and maximizes the chances that errors are found and corrected.
These benefits also mean that researchers have a responsibility to their funders to pursue open practices so as to seek the maximal return on funders’ investments. And by the same logic, if research participants contribute their time to scientific projects, the researchers also owe it to these participants to maximize the impact of their contributions (Brakewood and Poldrack 2013). For all of these reasons, individual scientists have a duty to be open—and scientific institutions have a duty to promote transparency in the science they support and publish.
How should these duties be balanced against researchers’ other responsibilities? For example, how should we balance the benefit of data sharing against the commitment to preserve participant privacy? And, since transparency policies also carry costs in terms of time and effort, how should researchers consider those costs against other obligations?
First, open practices should be the default in cases where risks and costs are limited. For example, the vast majority of journals allow authors to post accepted manuscripts in their un-typeset form to an open repository. This route to “green” open access is easy, cost free, and—because it comes only after articles are accepted for publication—carries essentially no risk of scooping. As a second example, the vast majority of analytic code can be posted as an explicit record of exactly how analyses were conducted, even if posting data is sometimes more complicated due to privacy restrictions. These kinds of “incentive-compatible” actions toward openness can bring researchers much of the way to a fully transparent workflow, and there is no excuse not to take them.
Second, researchers should plan for sharing and build a workflow that decreases the costs of openness. As we discuss in chapter 13, while it can be costly and difficult to share data after the fact if they were not explicitly prepared for sharing, good project management practices can make this process far simpler (and in many cases completely trivial).
Finally, given the ethical imperative toward openness, institutions like funders, journals, and societies need to use their role to promote open practices and to mitigate potential negatives (Nosek et al. 2015). Scholarly societies have an important role to play in educating scientists about the benefits of openness and providing resources to steer their members toward best practices for sharing their publication and other research products. Similarly, journals can set good defaults, for example by requiring data and code sharing except in cases where a strong justification is given. Funders of research can—and increasingly do—signal their interest in openness through data sharing mandates.
4.5 Chapter summary: Ethics
In this chapter, we discussed three ethical frameworks and evaluated how they can be applied to our own research through the lens of Milgram’s famous obedience experiment. Studies like Milgram’s prompted serious conversations about how best to reconcile experimenter goals with participant well-being. The publication of the Belmont Report and the later creation of ethics boards in the United States standardized the way scientists approach human subjects research and created much-needed accountability. We also addressed our ethical responsibilities to the scientific community, both in how we report our data and in how we share them. We hope that we have convinced you that careful, open science is an ethical imperative for researchers!
- The COVID-19 pandemic led to an immense amount of “rapid-response” research in psychology that aimed to discover—and influence—the way people reasoned about contagion, vaccines, masking, and other aspects of public health. What are the specific ethical concerns that researchers should be aware of for this type of research? Are there reasons for more caution in this kind of research than in more “run-of-the-mill” research?
- Think of an argument against open science practices—for example, that following open science practices is especially burdensome for researchers with more limited resources (you can make up another if you want!). Given our argument that researchers have an ethical duty to openness, how would you analyze this argument under the three different ethical frameworks we discussed?
- The Belmont Report has shaped US research ethics policy from its publication to the present day. It’s also short and quite readable: https://www.hhs.gov/ohrp/regulations-and-policy/belmont-report/index.html.
- A rich reference with case studies on science misconduct and strong arguments for open science: Ritchie, Stuart. (2020). Science Fictions: How Fraud, Bias, Negligence, and Hype Undermine the Search for Truth. Metropolitan Books.