Chapter 4 Ethics

  • Distinguish between consequentialist, deontological, and virtue ethics frameworks
  • Identify key ethical issues in performing experimental research
  • Discuss ethical responsibilities in analysis and reporting of research
  • Describe ethical arguments for open science practices

The fundamental thesis of this book is that experiments are the way to estimate causal effects, which are the foundations of theory. And as we discussed in Chapter 1, the reason why experiments allow for strong causal inferences is because of two ingredients: a manipulation – in which the experimenter changes the world in some way – and randomization. Put a different way, experimenters learn about the world by randomly deciding to do things to their participants! Is that even allowed?

Experimental research raises a host of ethical issues that deserve consideration. What can and can’t we do to participants in an experiment, and what considerations do we owe to them by virtue of their decision to participate? To facilitate our discussion of these issues, we start by briefly introducing the standard philosophical frameworks for ethical analysis. We then use those to discuss problems of experimental ethics, first from the perspective of participants and then from the perspective of the scientific ecosystem more broadly.40 We have placed this chapter near the beginning of our book because we think it’s critical to start the conversation about your ethical responsibilities as an experimentalist and researcher even before you start planning a study. We’ll come back to the ethical frameworks we describe here in Chapter 12, which deals specifically with participant recruitment and the informed consent process. We’ll also return to ethical issues when we discuss topics like allowable types of manipulation (Chapter 9), data sharing and privacy (Chapter 13), and publication ethics (Chapter 14).

Shock treatment

More than a decade after surviving prisoners were liberated from the last concentration camp, Adolf Eichmann, one of the Holocaust’s primary masterminds, was tried for his instrumental role in the mass genocide (Baade, 1961). While reflecting on his rationale for forcibly removing, torturing, and eventually murdering millions of Jews, an unrepentant Eichmann claimed that he was “merely a cog in the machinery that carried out the directives of the German Reich” and therefore was not directly responsible (Kilham & Mann, 1974). This startling claim gave a young researcher an interesting idea: “Could it be that Eichmann and his million accomplices in the Holocaust were just following orders? Could we call them all accomplices?” (Milgram, 1974).

Stanley Milgram aimed to make a direct test of whether people would comply under the direction of an authority figure no matter how uncomfortable or harmful the outcome. He invited participants into the laboratory to serve as a teacher for an activity (Milgram, 1963). Participants were told that they were to administer electric shocks of increasing voltage to another participant, the student, in a nearby room whenever the student provided an incorrect response. In reality, the student was a confederate who was in on the experiment and only pretended to be in pain when they received the shocks. Participants were encouraged to continue administering shocks despite clearly audible pleas from the student to stop the electric shocks. In one of Milgram’s studies, nearly 65% of participants administered the maximum voltage to the student. This deeply unsettling result has become, as L. Ross & Nisbett (2011) say, “part of our society’s shared intellectual legacy,” informing our scientific and popular conversation in myriad different ways.

Milgram’s study blatantly violates modern ethical norms around the conduct of research. Among other violations, the procedure involved coercion that violated participants’ right to withdraw from the experiment. This coercion appeared to have negative consequences: Milgram noted that a number of his participants displayed anxiety symptoms and nervousness.41 These signs of distress led to calls for this sort of research to be declared unethical (e.g., Baumrind, 1964). To be fair, Milgram also conducted a followup survey in which participants expressed gratitude for participating and did not note long-term negative effects (Milgram, 1974). The ethical issues surrounding Milgram’s study are complex, and some are relatively specific to the particulars of his study and moment (Miller, 2009). But the controversy around the study was an important part of convincing the scientific community to adopt stricter policies that protect study participants from unnecessary harm.

4.1 Ethical frameworks

Was Milgram’s experiment really ethically wrong – in the sense that it should not have been performed? You might have the intuition that it was unethical, due to the harms that the participants experienced or the way they were deceived by the experimenter. Others might consider arguments in defense of the experiment, perhaps that what we learned from the experiment was sufficiently valuable to justify its being conducted. Beyond simply arguing back and forth, how could we approach this issue more systematically?

Ethical frameworks offer tools for analyzing such situations. In this section, we’ll introduce three of the most commonly used frameworks: consequentialist, deontological, and virtue ethics. We’ll also discuss how each of these could be applied to Milgram’s paradigm.

4.1.1 Consequentialist theories

Ethical theories provide principles for what constitute good actions. The simplest theory of good actions is the consequentialist theory: good actions lead to good results. The most famous consequentialist position is utilitarianism, articulated most influentially by the philosopher John Stuart Mill (Flinders, 1992). This view emphasizes decision-making based on the “greatest happiness principle”, or the idea that an action should be considered morally good based on the degree of happiness or pleasure people experience because of it, and likewise that an action should be considered morally bad based on the degree of unhappiness or pain people experience by the same action (Mill, 1859).

A consequentialist analysis of Milgram’s study considers the study’s negative and positive effects and weighs these against one another. Did the study cause harm to its participants? If so, this should be counted against it. On the other hand, did the study lead to knowledge that prevented harm or caused positive benefits?

Consequentialist analysis can be a straightforward way to justify the risks and benefits of a particular action, but in the research setting it is unsatisfying. Many horrifying experiments would be licensed by a consequentialist analysis and yet feel untenable to us. Imagine a researcher forced you to undergo a risky and undesired medical intervention because the resulting knowledge might benefit thousands of others. This seems like the kind of thing our ethical framework should rule out!

4.1.2 Deontological approaches

Harmful research performed against participants’ will or without their knowledge is repugnant (we consider the Tuskegee Syphilis Experiment, a horrifying example of such research, below). Considering such cases makes us think about rules like “researchers must ask participants’ permission before conducting research on them.” Principles like this one are now formalized in all ethical codes for research. They exemplify an approach called deontological (or duty-based) ethics.

Deontology emphasizes the importance of taking ethically permissible actions, regardless of their outcome (Biagetti et al., 2020). In general, institutional review boards (IRBs, which we discuss below) take a deontological approach to ethics (Boser, 2007). In the context of research, there are four primary principles being applied:

  1. Respect for autonomy. This principle requires that people participating in research studies can make their own decisions about their participation, and that those with diminished autonomy (children, neurodivergent people, etc.) should receive additional protections (Beauchamp et al., 2001). Respecting someone’s autonomy also means providing them with all the information they need to make an informed decision about whether to participate in a research study (giving consent) and giving them further context about the study they have participated in after it is done (debriefing).

  2. Beneficence. This principle means that researchers are obligated (not merely encouraged) to protect the well-being of participants for the duration of the study. Beneficence has two parts. The first is to do no harm. Researchers must take steps to minimize the risks to participants and to disclose any known risks at the outset. If risks are discovered during participation, researchers must notify participants of their discovery and make reasonable efforts to mitigate these risks, even if that means stopping the study altogether. The second is to maximize potential benefits.42 In practice, this doesn’t mean compensating participants with exorbitant amounts of money or gifts, which might cause other issues, like exerting an undue influence on low-income participants to participate. Instead “maximizing benefits” is interpreted as identifying all possible benefits of participation and making them available where possible.

  3. Nonmaleficence. This principle is similar to beneficence (in fact, beneficence and nonmaleficence were a single principle when they were first introduced in the Belmont Report, which we’ll discuss later) but differs in its emphasis on doing/causing no harm. But remember, deontology is about intent, not impact, so harm is sometimes warranted when the intent is morally good. For example, administering a vaccine may cause some discomfort and pain, but the intent is to protect the patient from contracting a deadly disease in the future. The harm is justifiable under this framework.

  4. Justice. This principle means that both the benefits and risks of a study should be equally distributed among all participants. For example, participants should not be systematically assigned to one condition over another due to features they arrive to the study with, like socioeconomic status, race and ethnicity, or gender.

Analyzed from the perspective of these principles, Milgram’s study raises several flags. First, Milgram’s study reduced participants’ autonomy by making it difficult to voluntarily end their involvement (participants were told up to four times to continue administering shocks even after they expressed clear opposition). Further, Milgram’s study may have induced unnecessary harm by failing to screen participants for existing mental health issues before beginning the session.

Was Milgram justified?

Was the harm done in Milgram’s experiment justifiable given that it informed our understanding of obedience and conformity? We can’t say for sure. What we can say is that in the 10 years following the publication of Milgram’s study, the number of papers on (any kind of) obedience increased and the nature of these papers expanded from a focus on religious conformity to a broader interest in social conformity, suggesting that Milgram changed the direction of this research area. Additionally, in a followup that Milgram conducted, he reported that 84% of participants in the original study said they were happy to have been involved (Milgram, 1974).

Many scholars believe there was no ethical way to conduct Milgram’s experiment while also protecting the integrity of the research goals, but some have tried. One study recreated a portion of the original experiment, with some critical changes (Burger, 2007). Before enrolling in the study, participants completed both a phone screening for mental health concerns, addiction, or extreme trauma, and a formal interview with a licensed clinical psychologist to identify signs of depression or anxiety. Those who passed these assessments were invited into the lab for a Milgram-type learning study. Experimenters clearly explained that participation was voluntary and that the decision to participate could be reversed at any point, either by the participant themselves or by a trained clinical psychologist who was present for the duration of the session. Additionally, shock administration never exceeded 150 volts (compared to 450 volts in the original study), and experimenters debriefed participants extensively following the end of the session. One year later, no participants expressed any indication of stress or trauma associated with their involvement in the study.

4.1.3 Virtue-based Approaches

A final way that we can approach ethical dilemmas is through a virtue framework. A virtue is a trait, disposition, or quality that is thought to be a moral foundation (Annas, 2006). Virtue ethics suggests that people can learn to be virtuous by observing those actions in others they admire (Morris & Morris, 2016). Proponents of virtue ethics say this works for two reasons: (1) people are generally good at recognizing morally good traits in others and (2) people receive some fulfillment from living virtuously. Virtue ethics differs from deontology and utilitarianism because it focuses on the character of the actor rather than on the nature of the rule or the consequences of the action.

From a research perspective, virtue ethics tells us that in order to behave virtuously, we must make decisions that consider both the context and the situation surrounding the experiment (Dillern, 2021). In other words, researchers should evaluate how their studies might influence a participant’s behaviors, especially when those behaviors deviate from typical expectations. This process is also meant to be adaptive, meaning that researchers must be continually vigilant about both the changing mental states of their participants during the experimental session and whether the planned procedure is no longer acceptable.

How can we apply this ethical framework to Milgram’s experiment? Many virtue ethicists would probably conclude that Milgram’s approach was neither appropriate (for participants) nor adaptive. Upon noticing increasing levels of participant distress, an experimenter following this framework would likely have chosen to end the session early or seek to minimize distress from the beginning.

4.2 Ethical responsibilities to research participants

Milgram’s shock experiment was just one of dozens of unethical human subjects studies that garnered the attention and anger of the public in the United States. In 1978, the US National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research released The Belmont Report, which described protections for the rights of human subjects participating in research studies (Adashi et al., 2018). Perhaps the most important message found in the Report was the notion that “investigators should not have sole responsibility for determining whether research involving human subjects fulfills ethical standards. Others, who are independent of the research, must share the responsibility.” In other words, ethical research requires both transparency and external oversight.

4.2.1 Institutional review boards

The creation of institutional review boards (IRBs) in the United States was an important result of the Belmont Report. While regulatory frameworks and standards vary across national boundaries, ethical review of research is ubiquitous across countries.43 In what follows, we focus on the US regulatory framework as it has been a model for other ethical review systems.

An IRB is a committee of people who review, evaluate, and monitor human subjects research to make sure that participants’ rights are protected when they participate in research (Oakes, 2002). IRBs are local; every organization that conducts human subjects or animal research is required to have its own IRB or to contract with an external one. If you are based at a university, yours likely has its own, and its members are probably a mix of scientists, doctors, professors, and community residents.

When a group of researchers have a research question they are interested in pursuing with human subjects, they must receive approval from their local IRB before beginning any data collection. The IRB reviews each study to make sure:

  1. A study poses no more than minimal risk to participants. This means the anticipated harm or discomfort to the participant is not greater than what would be experienced in everyday life.

  2. Researchers obtain informed consent from participants before collecting any data. This requirement means experimenters must disclose all potential risks and benefits so that participants can make an informed decision about whether or not to participate in the study. Importantly, informed consent does not stop after participants sign a consent form. If researchers discover any new potential risks or benefits along the way, they must disclose these discoveries to all participants (see Chapter 12).

  3. Sensitive information remains confidential. Although regulatory frameworks vary, researchers typically have an obligation to their participants to protect all identifying information (see Chapter 13).

  4. Participants are recruited equitably and without coercion. Before IRBs became standard, researchers often coercively recruited marginalized and vulnerable populations to test their research questions, rather than making participation in research studies voluntary and providing equitable access to the opportunity to participate.

The Tuskegee Syphilis Study

In 1929, the United States Public Health Service (USPHS) was perplexed by the effects of a particular disease in Macon County, Alabama, an area with an overwhelmingly Black population (Brandt, 1978). Syphilis is a sexually transmitted bacterial infection that can either be in a visible and active stage or in a latent stage. At the time of the study’s inception, roughly 36% of Tuskegee’s adult population had developed some form of syphilis, one of the highest infection rates in America (White, 2006).

The USPHS recruited 400 Black males from 25–60 years of age with latent syphilis and 200 Black males without the infection to serve as a control group to participate in what would become one of the most exploitative research studies ever done on American soil (Brandt, 1978). The USPHS sought the help of the Macon County Board of Health to recruit participants with the promise that they would provide treatment for community members with syphilis. The researchers sought poor, illiterate Blacks and, instead of telling them that they were being recruited for a research study, merely informed them that they would be treated for “bad blood”.

Because the study was interested in tracking the natural course of latent syphilis without any medical intervention, the USPHS had no intention of providing any care to its participants. To assuage participants, the USPHS distributed an ointment that had not been shown to be effective in treating syphilis, along with only small doses of a medication actually used to treat the infection. In addition, participants underwent a spinal tap, which was presented to them as another form of therapy and their “last chance for free treatment.” By 1955, just over 30% of the original participants had died from syphilis complications.

It took until the 1970s before the final report was released and the (lack of) treatment ended. In total, 128 participants died of syphilis or complications from the infection, 40 wives became infected, and 19 children were born with the infection (Katz & Warren, 2011). The damage rippled through two generations, and many never actually learned what had been done to them. The Tuskegee experiment violated nearly every single guideline for research described above – indeed, in its many horrifying violations of research participants’ agency, it provided a blueprint for the regulations that followed, which were designed to prevent any aspect of it from being repeated.

Investigators did not obtain informed consent. Participants were not made aware of all known risks and benefits involved with their participation. Instead, they were deceived by researchers who led them to believe that diagnostic and invasive exams were directly related to their treatment.

Participants were denied appropriate treatment following the discovery that penicillin was effective at treating syphilis (Mahoney et al., 1943). The USPHS requested that medical professionals overseeing their care outside of the research study not offer treatment to participants so as to preserve the study’s integrity. This intervention violated participants’ rights to equal access to care, which should have taken precedence over the results of the study.

Finally, recruitment was both imbalanced and coercive. Not only were participants selected from the poorest of neighborhoods in the hopes of finding vulnerable populations with little agency, but they were also bribed with empty promises of treatment and a monetary incentive (payment for burial fees, a financial obstacle for many sharecroppers and tenant farmers at the time).

4.2.2 Risks and benefits

Imagine that you were approached about participating in a research study at your local university. You were only told you would be paid $25 in exchange for completing an hour of cognitive tasks on a computer. Now imagine that halfway through the session, the experimenter revealed they would also need to collect a blood sample, “which should only take a couple of minutes and which will really help the research study.” Would you agree to the sample? Would you feel uncomfortable in any way?

Participants need to understand the risks and benefits of participation in an experiment before they give consent. To do otherwise compromises their autonomy (a key deontological principle). In the case of this hypothetical experiment, a new and unexpected invasive component of an experiment is coercive: participants would have to choose to forfeit their expected compensation to opt out. They also might feel that they have been deceived by the experimenter.

In human subjects research, deception is a specific technical term that refers to cases when (1) experimenters withhold any information about the study’s goals or intentions, (2) experimenters hide their true identity (such as when using a confederate), (3) some aspects of the research are under- or overstated to conceal information, or (4) participants receive any false or misleading information. The use of deception requires special consideration from a human subjects perspective (Baumrind, 1985; Kelman, 2017)!

Even assuming they are disclosed properly without coercion or deception, the risks and benefits of a study must be assessed from the perspective of the participant, not the experimenter. By doing so, we allow participants to make an informed choice. In the case of the blood sample, the risks to the participant were not disclosed, and the benefits were stated in terms of the research project (and the experimenter). Neither of these allow the participant to weigh the decision based on their own values.

The benefits of participation in research can either be direct or indirect, and it is important to specify which type participants may receive. While some clinical studies and interventions may offer some direct benefit due to participation, many of the benefits of basic science research may be indirect. Both have their place in science, but participants must ultimately determine the degree to which each type of benefit motivates their own involvement in a study (Shatz, 1986).

4.3 Ethical responsibilities in analysis and reporting of research

What data?

Dutch social psychologist Diederik Stapel contributed to more than 200 articles on social comparison, stereotype threat, and discrimination, many published in the most prestigious journals. Stapel reported that affirming positive personal qualities buffered against dangerous social comparison, that product advertisements related to a person’s attractiveness changed their sense of self, and that exposure to intelligent in-group members boosted a person’s performance on future tasks (Gordijn & Stapel, 2012; Stapel & Linde, 2012; Trampe et al., 2011). These findings were fresh and noteworthy at the time of publication, and Stapel’s papers were cited thousands of times. The only problem? Stapel’s data were made up.

When Stapel first began fabricating data, he admitted to making small tweaks to a few points (Stapel, 2012). Changing a single number here and there would turn a flat study into an impressive one. Having achieved comfortable success (and having aroused little suspicion from journal editors and others in the scientific community), Stapel eventually began creating entire data sets and passing them off as his own. Several colleagues began to grow skeptical of his overwhelming success, however, and brought their concerns to the Psychology Department at Tilburg University. By the time the investigation of his work concluded, 58 of Stapel’s papers were retracted, meaning that the publishing journals withdrew the papers after discovering that their contents were erroneous or invalid.

Everyone agrees that Stapel’s behavior was deeply unethical. But should we consider cases of falsification and fraud to be different in kind from other ethical violations in research? Or is it merely the endpoint in a continuum that might include other practices like p-hacking? Lawyers and philosophers grapple with the precise boundary between sloppiness and neglect, and it can be difficult to know which one is at play when a typo or coding mistake changes the conclusion of a scientific paper. Similarly, if a researcher engages in so-called “questionable research practices,” at what point should they be considered to have made an ethical violation as opposed to simply performing their research poorly? These are hard questions, but you should try to grapple with them, since these situations are not as rare as we would like.

As scientists, we not only have a responsibility to participants, we are also responsible for what we do with our data and for the kinds of conclusions we draw. Cases like Stapel’s seem stunning, but they are part of a continuum. Codes of professional ethics for organizations like the American Psychological Association enjoin researchers to take care in the management and analysis of their data so as to avoid errors and misstatements (American Psychological Association, 2022).

Researchers also have an obligation not to suppress findings based on their own beliefs about the right answer. One unfortunate way that this suppression can happen is when researchers choose not to publish. Publication bias is the name given to the tendency to highlight or publish significant findings while ignoring findings that are non-significant (Rosenthal, 1979).44 Publication bias is sometimes called the “file drawer problem” because someone may choose to present the small subset of significant results while filing away other, non-significant findings that do not see the light of day. Researchers’ own biases can be another (invalid) rationale for not publishing: it’s also an ethical violation to suppress findings that contradict your theoretical commitments.

Importantly, researchers don’t have an obligation to publish everything they do. Publishing in the peer-reviewed literature is difficult and time-consuming. There are plenty of reasons not to publish an experimental finding! For example, there’s no reason to publish a result if you believe it is uninformative because of a small sample or a confound in the experimental design. You also aren’t committing an ethical violation if you decide to quit your job in research and so you don’t publish a study from your dissertation.45 Of course, if your dissertation contains the cure to a common and fatal disease, maybe the situation is different… The primary ethical issue arises when you use the result of a study – and how it relates to your own beliefs or to a threshold like \(p<.05\) – to decide whether to publish it or not.

As we’ll discuss again and again in this book, the preparation of research reports must also be done with care and attention to detail (see @ref(writing) for details). Sloppiness in writing up results can lead to imprecise or over-broad claims, and if that sloppiness extends to the reporting of analyses and data, it may lead to irreproducibility as well.

Further, professional ethics dictate that published contributions to the literature be original. In general, the text of a paper must not be plagiarized (copied) from the text of other reports whether by you or by another author without attribution. Copying from others outside of a direct, attributed quotation is obviously an ethical violation because it leads to credit for text being given to you rather than the true author. But self-plagiarism within the text of journal articles is also not acceptable – it is a violation to receive credit multiple times for the same product.46 Though standards may differ from field to field, our sense is that the rule on self-plagiarism applies primarily to journal papers. Barring any specific policy of the funder or journal, it is acceptable to use text from a grant proposal that you wrote verbatim in a journal paper. It is also typically acceptable to use text from your own conference abstract submission in a journal paper.

4.4 Ethical responsibilities to the broader scientific community

The open science principles that we will describe throughout this book are not only important correctives to issues of reproducibility and replicability; they are also ethical duties.

The sociologist Robert Merton described a set of norms that science is assumed to follow: communism – that scientific knowledge belongs to the community; universalism – that the validity of scientific results is independent of the identity of the scientists; disinterestedness – that scientists and scientific institutions act for the benefit of the overall enterprise; and organized skepticism – that scientific findings must be critically evaluated prior to acceptance (Merton, 1979).

If the products of science aren’t open, it is very hard to be a scientist by Merton’s definition. To contribute to the communal good, papers need to be openly available. And to be subject to skeptical inquiry, experimental materials, research data, analytic code, and software must all be available so that analytic calculations can be verified and experiments can be reproduced. Otherwise, you have to accept arguments on authority rather than by virtue of the materials and data.

Openness is not only definitionally part of the scientific enterprise, it’s also good for science and individual scientists (Gorgolewski & Poldrack, 2016). Open access publications are cited more (Eysenbach, 2006; Gargouri et al., 2010). Open data also increases the potential for citation and reuse, and maximizes the chances that errors are found and corrected.

But these benefits mean that researchers have a responsibility to their funders to pursue open practices so as to seek the maximal return on funders’ investments. And by the same logic, if research participants contribute their time to scientific projects, the researchers also owe it to these participants to maximize the impact of their contributions (Brakewood & Poldrack, 2013). For all of these reasons, individual scientists have a duty to be open – and scientific institutions have a duty to promote transparency in the science they support and publish.

But how should these duties be balanced against researchers’ other responsibilities? For example, how should we balance the benefit of data sharing against the commitment to preserve participant privacy? And, since transparency policies also carry costs in terms of time and effort, how should researchers consider those costs against other obligations?

First, open practices should be the default in cases where risks and costs are limited. For example, the vast majority of journals allow authors to post accepted manuscripts in their un-typeset form to an open repository. This route to “green” open access is easy, cost free, and – because it comes only after articles are accepted for publication – carries essentially no risk of scooping. As a second example, the vast majority of analytic code can be posted as an explicit record of exactly how analyses were conducted, even if posting data is sometimes more fraught. These kinds of “incentive compatible” actions towards openness can bring researchers much of the way to a fully transparent workflow, and there is no excuse not to take them.

Second, researchers should plan for sharing and build a workflow that decreases the costs of openness. As we discuss in Chapter 13, while it can be costly and difficult to share data after the fact if they were not explicitly prepared for sharing, good project management practices can make this process far simpler (and in many cases completely trivial).

Finally, given the ethical imperative towards openness, institutions like funders, journals, and societies need to use their influence to promote open practices and to mitigate potential negatives. Scholarly societies have an important role to play in educating scientists about the benefits of openness and providing resources to steer their members towards best practices for sharing their publications and other research products. Similarly, journals can set good defaults, for example by requiring data and code sharing except in cases where a strong justification is given. And funders of research can and do signal their interest in openness through data sharing mandates.

4.5 Chapter summary: Ethics

In this chapter, we discussed three ethical frameworks and considered how they can be applied to our own research through the lens of Milgram’s famous obedience experiments. Studies like these prompted serious conversations about how best to reconcile experimenters’ goals with participants’ well-being. The publication of the Belmont Report and the subsequent creation of Institutional Review Boards (IRBs) in the United States standardized the way scientists approach human subjects research and introduced some much-needed accountability.

We have also addressed our responsibility to the scientific community, both in how we report our data and how we distribute them. We hope that we have convinced you that, aside from identifiable participant information, data should generally be widely accessible: openness keeps science honest and generates ideas for future research. Good research is ethical, and the best scientists think about their impact from start to finish.

Discussion questions

  1. The COVID-19 pandemic led to an immense amount of “rapid response” research in psychology that aimed to discover – and influence – the way people reasoned about contagion, vaccines, masking, and other aspects of the public health situation. What are the specific ethical concerns that researchers should be aware of for this type of research? Are there reasons for more caution in this kind of research than in other “run of the mill” research?
  2. Think of an argument against open science practices based on the material here and in Chapter 3. How would the three different ethical frameworks we discussed treat this issue?

References

Adashi, E. Y., Walters, L. B., & Menikoff, J. A. (2018). The Belmont Report at 40: Reckoning with time. American Journal of Public Health, 108(10), 1345–1348.
Annas, J. (2006). Virtue ethics. The Oxford Handbook of Ethical Theory, 515–536.
American Psychological Association. (2022). Ethical principles of psychologists and code of conduct. https://www.apa.org/ethics/code
Baade, H. W. (1961). The Eichmann trial: Some legal aspects. Duke Law Journal, 400.
Baumrind, D. (1964). Some thoughts on ethics of research: After reading Milgram’s “Behavioral study of obedience.” American Psychologist, 19(6), 421.
Baumrind, D. (1985). Research using intentional deception: Ethical issues revisited. American Psychologist, 40(2), 165.
Beauchamp, T. L., & Childress, J. F. (2001). Principles of biomedical ethics. Oxford University Press.
Biagetti, M. T., Gedutis, A., & Ma, L. (2020). Ethical theories in research evaluation: An exploratory approach. In Scholarly Assessment Reports (No. 1; Vol. 2).
Boser, S. (2007). Power, ethics, and the IRB: Dissonance over human participant review of participatory research. Qualitative Inquiry, 13(8), 1060–1074.
Brakewood, B., & Poldrack, R. A. (2013). The ethics of secondary data analysis: Considering the application of belmont principles to the sharing of neuroimaging data. Neuroimage, 82, 671–676.
Brandt, A. M. (1978). Racism and research: The case of the Tuskegee syphilis study. Hastings Center Report, 21–29.
Burger, J. (2007). Replicating Milgram. APS Observer, 20(11).
Dillern, T. (2021). The scientific judgment-making process from a virtue ethics perspective. Journal of Academic Ethics, 19(4), 501–516.
Eysenbach, G. (2006). Citation advantage of open access articles. PLoS Biology, 4(5), e157.
Flinders, D. J. (1992). In search of ethical guidance: Constructing a basis for dialogue. International Journal of Qualitative Studies in Education, 5(2), 101–115.
Gargouri, Y., Hajjem, C., Larivière, V., Gingras, Y., Carr, L., Brody, T., & Harnad, S. (2010). Self-selected or mandated, open access increases citation impact for higher quality research. PloS One, 5(10), e13636.
Gordijn, E. H., & Stapel, D. A. (2012). Behavioural effects of automatic interpersonal versus intergroup social comparison (retraction of vol 45, pg 717, 2006). British Journal of Social Psychology, 51(3), 498–498.
Gorgolewski, K. J., & Poldrack, R. A. (2016). A practical guide for improving transparency and reproducibility in neuroimaging research. PLOS Biology, 14(7), e1002506. https://doi.org/10.1371/journal.pbio.1002506
Katz, R. V., & Warren, R. C. (2011). The search for the legacy of the USPHS syphilis study at Tuskegee. Lexington Books.
Kelman, H. C. (2017). Human use of human subjects: The problem of deception in social psychological experiments. In Research design (pp. 189–204). Routledge.
Kilham, W., & Mann, L. (1974). Level of destructive obedience as a function of transmitter and executant roles in the Milgram obedience paradigm. Journal of Personality and Social Psychology, 29(5), 696.
Mahoney, J. F., Arnold, R., & Harris, A. (1943). Penicillin treatment of early syphilis—a preliminary report. American Journal of Public Health and the Nations Health, 33(12), 1387–1391.
Merton, R. K. (1979). The normative structure of science. The Sociology of Science: Theoretical and Empirical Investigations, 267–278.
Milgram, S. (1963). Behavioral study of obedience. The Journal of Abnormal and Social Psychology, 67(4), 371.
Milgram, S. (1974). Obedience to authority: An experimental view. Harper & Row.
Mill, J. S. (1859). Utilitarianism (1863). Utilitarianism, Liberty, Representative Government, 7–9.
Miller, A. G. (2009). Reflections on “Replicating Milgram” (Burger, 2009).
Morris, M. C., & Morris, J. Z. (2016). The importance of virtue ethics in the IRB. Research Ethics, 12(4), 201–216.
Oakes, J. M. (2002). Risks and wrongs in social science research: An evaluator’s guide to the IRB. Evaluation Review, 26(5), 443–479.
Rosenthal, R. (1979). The file drawer problem and tolerance for null results. Psychological Bulletin, 86(3), 638.
Ross, L., & Nisbett, R. E. (2011). The person and the situation: Perspectives of social psychology. Pinter & Martin Publishers.
Shatz, D. (1986). Autonomy, beneficence, and informed consent: Rethinking the connections. Cancer Investigation, 4(3), 257–269.
Stapel, D. A. (2012). Ontsporing. Prometheus Amsterdam.
Stapel, D. A., & van der Linde, L. A. (2012). “What drives self-affirmation effects? On the importance of differentiating value affirmation and attribute affirmation”: Retraction of Stapel and van der Linde (2011).
Trampe, D., Stapel, D. A., & Siero, F. W. (2011). Retracted: The self-activation effect of advertisements: Ads can affect whether and how consumers think about the self. Journal of Consumer Research, 37(6), 1030–1045.
White, R. M. (2006). Effects of untreated syphilis in the Negro male, 1932 to 1972: A closure comes to the Tuskegee study, 2004. Urology, 67(3), 654.