Chapter 14 Writing

  • Write clearly by being concise, using structure, and adjusting to your audience
  • Write reproducibly by interleaving writing and analysis code
  • Write responsibly by acknowledging limitations, correcting errors, and calibrating your conclusions

All of the effort you put into designing and running an effective experiment may be wasted if you cannot clearly communicate what you did. Writing is a powerful tool – though you contribute to the conversation only once, it enables you to speak to a potentially infinite number of readers. So it’s important to get it right! In this chapter, we’ll provide some guidance on how to write scientific papers – the primary method for reporting on experiments – clearly, reproducibly, and responsibly.222 Clarity of communication was a founding principle of modern science. Early proto-scientists conducting alchemical experiments often made their work deliberately obscure – even writing in cryptic codes – so that others could not discover the ‘powerful secrets of nature’. Pioneers of scientific methodology, like Francis Bacon and Robert Boyle, pushed instead for transparency and clarity. Notoriously, Isaac Newton (originally an alchemist and later a scientist) continued to write in a deliberately obscure fashion in order to “protect” his work (Heard, 2016).

14.1 Writing clearly

What is the purpose of writing? “Telepathy, of course” says Stephen King (King, 2000). The goal of writing is to transfer information from your mind to the reader’s as effectively as possible. Unfortunately, for most of us, writing clearly does not come naturally; it is a craft we need to work at.

One of the most effective ways to learn to write clearly is to read and to imitate the writing you admire. Many scientific articles are not clearly written, so you will need to be selective in which models you imitate. Fortunately, as a reader, you will know good writing when you see it – you will feel like the writer is seamlessly transferring ideas from their mind to yours. When you come across writing like that, try to find more work by the same author. The more good scientific writing you are exposed to, the more you will develop a sense of what works and what does not. You may pick up bad habits as well as good ones (we sure have!), but over time, your writing will improve if you make a conscious effort to weed out the bad, and keep the good.

There are no strict rules of clear writing, but there are some generally accepted conventions that we will share with you here, drawing from both general style guides and those specific to scientific writing (Gernsbacher, 2018; Heard, 2016; Zinsser, 2006).

The structure of a scientific paper

A scientific paper is not like a novel – rather than reading from beginning to end, readers typically jump between sections to efficiently extract the information most relevant to them (Doumont, 2009). This “random access” is possible because research articles typically follow the same conventional structure (see Figure 14.1). The main body of the article includes four main sections: Introduction, Methods, Results, and Discussion (IMRaD).223 In the old days, there were few conventions – scientists would share their latest findings by writing letters to each other. But as the number of scientists and studies increased, this approach became unsustainable. The IMRaD structure gained traction in the 1800s and became dominant in the mid-1900s as scientific productivity rapidly expanded in the post-war era. We think IMRaD-style articles are a big improvement, even if it is nice to receive a letter every now and again. This structure has a narrative logic: what’s the knowledge gap (introduction)? How did you address it (methods)? What did you find (results)? What do the results mean (discussion)?

Structure helps writers as well as readers. Try starting the writing process with section headings as a skeleton structure, then flesh it out, layer by layer. In each section, make a list of the key points you want to convey, each representing the first sentence of a new paragraph. Then add the content of each paragraph and you’ll be well on your way to having a full first draft of your article.

Imagine that the breadth of focus in the body of your article has an “hourglass” structure (Figure 14.1). The start of the introduction should have a broad focus, providing the reader with the general context of your study. From there, the focus of the introduction should get increasingly narrow until you are describing the specific knowledge gap or problem you will address and (briefly) how you are going to address it. The methods and results sections are at the center of the hourglass because they are tightly focused on your study alone. In the discussion section, the focus shifts in the opposite direction, from narrow to broad. Begin by summarizing the results of your study, discuss limitations, then integrate the findings with existing literature and describe practical and theoretical implications.

Figure 14.1: Conventional structure of a research article. The main body of the article consists of Introduction, Methods, Results, and Discussion (IMRaD) sections.

Research articles are often packed with complex information; it is easy for readers to get lost. A “cross reference” is a helpful signpost that tells readers where they can find relevant additional information without disrupting the flow of your writing. For example, you can refer the reader to data visualizations by cross referencing to figures or tables (e.g., “see Figure 1”), or additional methodological information in the supplementary information (e.g., “see Supplementary Information A”).

One important trick for structuring complex arguments is to cross reference your research aims/hypotheses with your results. These references serve to remind readers how different results or analyses relate back to your research goals. For example, you could introduce numbered hypotheses in the introduction of an article and then reference them with a set of analyses designed to address each in turn.

14.1.1 Paragraphs, sentences, and words

Writing an article is like drawing a human form. If you begin by sketching the clothes, you risk adding beautiful textures onto an impossible shape. Instead, you have to start by understanding the underlying skeleton and then gradually add layers until you can visualize how cloth hangs on the body. The structure of an article is the “skeleton” and the paragraphs and sentences are the “flesh”. Only once you have a solid outline in place should you start thinking about the paragraphs and sentences that will realize it.

Ideally, each paragraph should correspond to a single point in the article’s outline, with the specifics necessary to convince the reader embedded within. “P-E-E-L” (Point-Explain-Evidence-Link) is a useful paragraphing structure, particularly in the introduction and discussion sections. First, state the paragraph’s message succinctly in the first sentence (P). The core of the paragraph is dedicated to further explaining the point and providing evidence (E-E; you can also include a third ‘E’ — an example). At the end of the paragraph, take a couple of sentences to remind the reader of your point and set up a link to the next paragraph.

Since each sentence in a paragraph has a purpose, you can compose and edit the sentence by asking how its form serves its purpose. For example, short sentences are great for making strong initial points. On the other hand, if you only use short sentences your writing may come across as monotonous and robotic. Try varying the sentence length to give your writing a more natural rhythm. Just avoid trying to cram too much information into the same sentence; very long sentences can be confusing and difficult to process.

You can also use sentence structure as a scaffold to support the reader’s thinking. Start sentences with something the reader already knows. For example, rather than writing “We performed a between-subjects t-test comparing performance in the experimental and control groups to address the cognitive dissonance hypothesis”, write “To address the cognitive dissonance hypothesis, we compared performance in the experimental group and control group using a between-subjects t-test.”

Human readers are good at processing narratives about people. Yet scientific authors often compromise their narrative by removing themselves from their research, sometimes even using awkward grammatical constructions to do so. For example, scientists sometimes write “the data were analysed” or, worse, “an analysis of the data was carried out.” Many of us were taught to write sentences like these. But isn’t it clearer to say “We analyzed the data”?

Similarly, many of us tend to hide our views with frames and caveats: “[It is believed that/Research indicates that/Studies show that] money leads to increased happiness (Frog & Toad, 1963).” If you truly do believe that money causes happiness, you should simply assert it, with a citation if necessary. Save the caveats for cases where someone believes that money causes happiness, but it’s not you. Emphasize uncertainty where you in fact feel that uncertainty is warranted and readers will take your doubts more seriously.

14.2 Advice

Scientific writing has a reputation for being dry, dull, and soulless. While it’s true that writing psychology articles is more constrained than writing fiction, there are still ways to surprise and entertain your reader with metaphor, alliteration, and even humor. As long as your writing is clear and accurate, we see no reason why you cannot also make it enjoyable. Enjoyable articles are easier to read and more fun to write.224 One of our favorite examples of an enjoyable article is Cutler (1994), a delightful piece that uses the form of the article to make a point about human language processing. Read it: you’ll see!

Here are a few more pieces of advice about expressing yourself clearly:

Be explicit. Avoid vagueness and ambiguity. The more you leave the meaning of your writing to your reader’s imagination, the greater the danger that different readers will imagine different things! So be direct and specific.

Be concise. Maximize the signal to noise ratio in your writing by omitting needless words and removing clutter (Zinsser, 2006). For example, say we investigated rather than we performed an investigation of and say if rather than in the event that. Don’t try to convey everything you know about a topic – a research report is not an essay. Include only what you need to achieve the purpose of the article and exclude everything else.

Be concrete. Concrete examples make abstract ideas easier to grasp. But some ideas are just hard to express in prose. Diagrams can be very helpful in these cases. For example, it may be clearer to illustrate a complex series of exclusion criteria using a flow chart rather than text. You can even use videos and screen capture software – for example, to demonstrate experimental tasks or researcher interactions with participants (Heycke & Spitzer, 2019).

Be consistent. Referring to the same concept using different words can be confusing. It may not be clear if you are using a synonym or referring to a different idea. For example, in everyday conversation, “replication” and “reproducibility” may sound like two different ways to refer to the same thing, but in scientific writing, these two concepts have different technical definitions, so we should not use them interchangeably. Define each technical term once and then use the same term throughout the manuscript.

Adjust to your audience. Most of us adjust our conversation style depending on who we’re talking to; the same principle applies to good writing. Knowing your audience is more difficult with writing, because we cannot see the reader’s reactions and adjust accordingly. Nevertheless, we can make some educated guesses about who our readers might be. For example, if you are writing an introductory review article, you may need to pay more attention to explaining technical terms compared with writing a research article for a specialty journal.

Check your understanding. Unclear writing can be a symptom of unclear thinking. If an idea doesn’t make sense in your head, how will it ever make sense on the page? In fact, trying to communicate something in writing is an excellent way to probe your understanding and expose logical gaps in your arguments. So if you are finding it difficult to write clearly, stop and ask yourself: do I know what I want to say? If the problem is unclear thinking, then you need to address that first, for example by consulting a textbook or colleague/advisor.

Use acronyms sparingly. It’s tempting to replace lengthy terminology with short acronyms — why say “cognitive dissonance theory” when you can say “CDT”? Unfortunately, acronyms can increase the reader’s cognitive burden and cause misunderstandings.225 A. Barnett & Doubleday (2020) found that acronyms are widely used in research articles and argued that they undermine clear communication. Here is one example of text Barnett and Doubleday extracted from a 2019 publication to illustrate the point: “Applying PROBAST showed that ADO, B-AE-D, B-AE-D-C, extended ADO, updated ADO, updated BODE, and a model developed by Bertens et al. were derived in studies assessed as being at low risk of bias.” For example, if you shorten “odds ratio” to “OR”, the reader has to take the extra step of translating “OR” back to “odds ratio” every time they encounter it. The problem multiplies as you introduce more acronyms into your article. Worse, for some readers, “OR” tends to mean “operating room”, not “odds ratio.” Acronyms can be useful, but usually only when they are widely used and understood.

14.2.1 Drafting and revision

The clearest and most effortless-seeming scientific writing has probably gone through extensive revision to appear that way; students are often surprised by just how much revision lies behind a “breezy” article. For example, Tversky and Kahneman famously drafted and re-drafted each word of their articles on judgment and decision-making, hunched over the typewriter together (M. Lewis, 2016).

Think of the article you are writing as a garden. Your first draft may be an unruly mess of intertwined fronds and branches. Several rounds of pruning and sculpting will be needed before your writing reaches its most effective form. You’ll be amazed how often you find words to omit, terms you can define more precisely, or elaborate sentence structures you can simplify.

It can be difficult to judge if your own writing has achieved its telepathic goal, however, especially after several rounds of revision. If possible, try to get feedback from somebody in your target audience. Their feedback – even if not wholly positive – will give you a good sense of how much of your argument they understood (and agreed with).226 Seek out people who are willing to tell you that your writing is not good! They may not make you feel good, but they will help you improve.

14.3 Writing reproducibly

Many research results are not reproducible – that is, the numbers and graphs that they report can’t be recreated by repeating the original analyses, even on the original data. As we discussed in Chapter 3, a lack of reproducibility is a big problem for the scientific literature; if you can’t trust the numbers in the articles you read, it’s much harder to build on the literature.

Fortunately, there are a number of tools and techniques available that you can use to write fully reproducible research reports. The basic idea is to create an unbroken chain that links every single part of the data analysis pipeline, from the raw data through to the final numbers reported in your research article. This linkage enables you – and hopefully others as well – to trace the provenance of every number and recreate (reproduce) it from scratch.

14.3.1 Why write reproducible reports?

There are at least three reasons to write reproducible reports. First, data analysis is an error-prone activity. Without safeguards in place, it can be easy to accidentally overwrite data, mislabel experimental conditions, or copy and paste the wrong statistics. One study found that nearly half of around 30,000 published psychology papers contained statistical reporting errors; 10% of reported p-values were inconsistent with other reported details of the statistical test, and 1.6% were “grossly” inconsistent (the difference between the p-value and the test statistic meant that one value implied statistical significance and the other did not) (Nuijten et al., 2016). You can reduce opportunities for error by adopting a reproducible analysis workflow that avoids error-prone manual actions, like copying and pasting.
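The kind of inconsistency documented by Nuijten et al. can be caught automatically (tools like the statcheck package do this for real manuscripts). As a minimal sketch, assuming a two-sided z test and hypothetical helper names, we can recompute the p-value from the reported statistic and flag mismatches:

```python
import math

def recomputed_p(z: float) -> float:
    """Two-sided p-value for a z statistic, via the normal distribution."""
    return math.erfc(abs(z) / math.sqrt(2))

def check_report(z: float, reported_p: float, tol: float = 0.0005) -> bool:
    """Return True if the reported p-value matches the recomputed one."""
    return abs(recomputed_p(z) - reported_p) <= tol

# A consistent report: z = 1.96 corresponds to p ≈ .05
print(check_report(z=1.96, reported_p=0.05))   # True
# An inconsistent report: z = 2.50 actually gives p ≈ .012, not .02
print(check_report(z=2.50, reported_p=0.02))   # False
```

A real checker would handle t, F, and chi-square statistics and rounding conventions, but the principle is the same: every reported number should be recomputable from the other reported details.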

Second, technical information about data analysis can be difficult to communicate in writing. Prose is often ambiguous and authors can inadvertently leave out important details (Hardwicke et al., 2018). By contrast, a reproducible workflow documents the entire analysis pipeline from raw data to research report exactly as it was implemented (Figure ??), describing the origin of any reported values and allowing readers to assess, verify, and repeat the analysis process.

Finally, reproducible workflows are typically more efficient workflows. For example, you may realize you forgot to perform data exclusions and need to rerun the analysis. You may produce a graph and then decide you’d prefer a different color scheme. Or perhaps you want to output the same results table in a PDF document and in a PowerPoint slide. In a reproducible workflow, all of the analysis steps are scripted, and can be easily re-run at the click of a button. You (and others) can also re-use parts of your code in other projects, rather than having to re-write everything from scratch.

14.3.2 Principles of reproducible writing

Below we outline some general principles of reproducible writing. These can be put into practice in a number of different software ecosystems. We recommend R Markdown, a way of writing data analysis code in R so that it compiles into spiffy documents or even websites (this book was written in R Markdown). Appendix B gives an introduction to the nuts and bolts of using R Markdown to create scientific papers.

  • Never break the chain. Every step of the analysis pipeline should be linked together programmatically (i.e., by computer code). This allows everything to be re-run from scratch without requiring any manual actions.

  • Script everything. Try to ensure that each step of the analysis pipeline is executed by computer code rather than manual actions, like copying and pasting or directly editing spreadsheets. This ensures that every step is documented in its most literal form, ensuring it can be reproduced. Imagine, for example, that you decided to re-code a variable in your dataset. You could use the “find and replace” function in Excel, but this action would not be documented – you might even forget that you did it! A better option would be to write an R script.

  • Use literate programming. The meaning of a chunk of computer code is not always obvious to another user, especially if they’re not an expert. Indeed, we frequently look at our own code and scratch our heads, wondering what on earth it’s doing. To avoid this problem, try to structure your code around plain language comments that explain what it should be doing, a technique known as “literate programming” (Knuth, 1992).

  • Use defensive programming. Errors can still occur in scripted analyses. Defensive programming is a series of strategies to help anticipate, detect, and avoid errors in advance. A typical defensive programming tool is the inclusion of tests in your code. For example, you might test if a variable storing reaction times has taken on values below zero (which should be impossible). If the test passes, the analysis pipeline continues; if the test fails, the pipeline halts and an error message appears to alert you to the problem.

  • Use free/open-source software and programming languages. If possible, avoid using commercial software, like SPSS or Matlab, and instead use free, open-source software and programming languages, like JASP, Jamovi, R, or Python. This practice will make it easier for others to access, reuse, and verify your work – including yourself!227 Several of us have libraries of old Matlab code; it’s terrible to have to ask yourself whether it’s worth the price of another year’s license in order to check an analysis.

  • Use version control. In Chapter 13, we introduced the benefits of version control – a great way to save your analysis pipeline incrementally as you build it, allowing you to roll back to a previous version if you accidentally introduce errors.

  • Preserve the computational environment. Even if your analysis pipeline is entirely reproducible on your own computer, you still need to consider whether it will run on somebody else’s computer, or even your own computer after you have updated your software. You can address this issue by documenting and preserving the computational environment in which the analysis pipeline runs successfully. Various tools are available to help with this, including Code Ocean, renv (for R), and pip (for Python).
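To make the “script everything” and “literate programming” principles above concrete, here is a minimal sketch in Python (the chapter’s recommended workflow would use R; the file layout and variable names here are hypothetical). Instead of a silent find-and-replace in a spreadsheet, the recoding step lives in a commented script that documents itself and can be re-run at any time:

```python
# Recode the condition variable: the raw data stores condition as
# numeric codes (1/2), but the analysis should use readable labels.
CONDITION_LABELS = {"1": "control", "2": "treatment"}

def recode_conditions(rows):
    """Replace numeric condition codes with descriptive labels.

    Each row is a dict (e.g., as produced by csv.DictReader).
    Unknown codes are left untouched so that typos in the raw
    data stay visible rather than being silently overwritten.
    """
    for row in rows:
        row["condition"] = CONDITION_LABELS.get(row["condition"], row["condition"])
    return rows

# In-memory rows standing in for a raw data file:
raw = [{"subject": "s01", "condition": "1"},
       {"subject": "s02", "condition": "2"}]
recoded = recode_conditions(raw)
```

Because the transformation is code rather than a manual action, it is documented in its most literal form: anyone (including future you) can see exactly what was changed and re-run it from scratch.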
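The defensive-programming principle above can likewise be sketched in a few lines. This Python example (in R, base `stopifnot()` serves a similar purpose) uses the reaction-time check described in the text: the validation runs before the analysis and halts the pipeline with an informative error if the data are impossible:

```python
def validate_reaction_times(rts):
    """Halt the pipeline if any reaction time is impossible (<= 0 ms)."""
    bad = [rt for rt in rts if rt <= 0]
    if bad:
        raise ValueError(f"{len(bad)} impossible reaction time(s) found: {bad}")
    return rts

# Passing data flows through the check unchanged...
clean = validate_reaction_times([312.5, 450.0, 287.1])

# ...while a sign error introduced upstream stops the analysis immediately,
# instead of silently propagating into the reported results.
try:
    validate_reaction_times([312.5, -450.0])
except ValueError as err:
    print(err)
```

The point of such tests is not that they catch every error, but that they turn silent data problems into loud ones at the earliest possible step.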

14.3.3 The reproducibility-collaboration trade-off

We would love to leave it here and watch you walk off into the sunset with a spring in your step and a reproducible report under your arm. Unfortunately, we have to admit that writing reproducibly can create a few practical difficulties when it comes to collaboration. A major aspect of collaboration is exchanging comments and inline text edits with co-authors. You can do this exchange with R Markdown files and Git, but these tools are not as user-friendly as, say, Word or Google Docs, and some collaborators will be completely unfamiliar with them. Most journals also expect articles to be submitted as Word documents. Outputting R Markdown files to Word can often introduce formatting issues, especially for moderately complex tables. So until more user-friendly tools are introduced, some compromise between reproducibility and collaboration may be necessary. Here are two workflow styles for you to consider.

First, the maximal reproducibility approach. If your collaborators are familiar with R Markdown and you don’t mind exchanging comments and edits via Git, then you can maintain a fully reproducible workflow for your project, at least up until you release a preprint of your work. The journal submission and publication process may still introduce some issues such as incorporating changes made by the copy editor, but at least your submitted manuscript (and the preprint you have hopefully posted) will be fully reproducible.

Second, the two worlds approach. This workflow is a bit clunky, but it facilitates collaboration and maintains reproducibility. First, write your results section in R Markdown and generate a Word document. Then, write the remainder of the manuscript in Word, including incorporating comments and changes from collaborators. When you have a final version, copy and paste the abstract, introduction, methods, and discussion into the R Markdown document.228 You can also incorporate Google Docs into this workflow – we find that cloud platforms like Docs are especially useful when gathering comments from multiple collaborators on the same document. Unfortunately, you cannot generate a Google Doc from R Markdown, so you will need to copy and paste. Integrating any changes made to the results section back into the R Markdown requires a bit more effort, either using manual checking or Word’s “compare documents” feature. The advantage of this approach is that you have a reproducible document and your collaborators have not had to deviate from their preferred workflow. Unfortunately, it requires more effort from you and is slightly more error-prone than the first method.

14.4 Writing responsibly

As a scientific writer, you have both professional and ethical responsibilities. You must communicate all relevant information about your research so as to enable proper evaluation and verification by other scientists. It is also important not to overstate your findings and carefully calibrate your conclusions to the available evidence (Hoekstra & Vazire, 2020). If errors arise in your work, you must respond promptly and correct them where appropriate. Finally, you must meet scholarly obligations with regards to authorship and citation practices.

14.4.1 Responsible disclosure and interpretation

Back in school, we all learned that getting the right answer is not enough; you need to demonstrate how you arrived at that answer in order to get full marks. The same expectation applies to research reports. Don’t just tell the reader what you found, tell them how you found it.229 It can be easy to overlook important details, especially when you reach the end of a project. Looking back at your study preregistration can be a helpful reminder. Reporting guidelines for different research designs can also provide useful checklists (Appelbaum et al., 2018). That means describing the methods in full detail, including providing the data, materials, and analysis scripts.

In a journal article, you typically have some flexibility in terms of how much detail you provide in the main body of the article and how much you relegate to the supplementary information. Readers have different needs; some may just want to know the highlights, and some will need detailed methodological information in order to replicate your study. As a rule of thumb, try to make sure there is nothing relegated to the supplementary information that might surprise the reader. You certainly should not use the supplementary information to hide important details deliberately or use it as a disorganized dumping ground – the principles of clear writing still apply!

Here are a few more guidelines for responsible writing:

  • Acknowledge limitations. No study is perfect. Even the most rigorously designed research will have limitations, simply because doing science is hard!230 Though some limitations can be avoided, most are an inherent part of doing research rather than a failing by the researcher. A limitation of a car is that it cannot fly – we do not blame the manufacturer for this, but we do expect them to be honest about their cars being flightless. For example, if your sample consisted only of university students or you only used two different stimuli, the study may have limited generalizability. Think carefully about the limitations of your study and state these limitations clearly in the discussion section.

  • Don’t overclaim. Unfortunately, scientists often feel (and often are) evaluated based on the results of their research, rather than the quality of the research. Consequently, it can be tempting to make bigger and bolder claims about your research than are really justified by the evidence. Think carefully about the limitations of your research and calibrate your conclusions to the evidence you have obtained, rather than what you wish you were able to claim. Ensure that your conclusions are appropriately stated throughout the manuscript, especially the title and abstract.

  • Discuss, don’t debate. The purpose of the discussion section is to help the reader interpret your research. Importantly, it is not a debate – don’t feel the need to argue dogmatically for a particular position or interpretation. You should discuss the strengths and weaknesses of the evidence, and the relative merits of different interpretations. For example, perhaps there is a potential confounding variable that you were unable to eliminate with your research design. The reader might be able to spot this themselves, but regardless, it’s your responsibility to highlight it. Perhaps on balance you think the confound is unlikely to explain the results – that’s fine, but you need to explain your reasoning to the reader.

  • Disclose conflicts of interest and funding. Researchers are usually personally invested in the outcomes of their research and this investment can lead to bias (for example, overclaiming or selective reporting). But sometimes the potential personal gains for a researcher rise above a particular threshold and are considered conflicts of interest. Where the threshold lies is not always completely clear. The most obvious conflicts of interest occur when you stand to benefit financially from the outcomes of your research (for example a pharmaceutical company evaluating their own drug). If in doubt, disclose it. You should also disclose any funding you received for the research, partly because this is often a requirement of the funder, and partly because it may represent a conflict of interest. To avoid ambiguity, you should also disclose when you do not have a conflict of interest or funding to declare.

  • Report transparently. In Chapter 11, you learned about the problem of selective reporting and how it can bias research results and conclusions. There are several ways to avoid this issue in your own work. First, assuming you have reported everything, include a statement in the methods section that explicitly says so. A statement suggested by Simmons et al. (2012) is “We report how we determined our sample size, all data exclusions (if any), all manipulations, and all measures in the study.” If you have preregistered your study, clearly link to the preregistration and state whether you deviated from your original plan. You can include a detailed preregistration disclosure table in the supplementary information and highlight any major deviations in the methods section. In the results section, clearly identify (e.g., with sub-headings) which analyses were pre-planned and included in the preregistration (confirmatory) and which were not planned (exploratory).

14.4.2 Responsible handling of errors

It is not your responsibility to never make mistakes. But it is your responsibility to respond to errors in a timely, transparent, and professional manner (Bishop, 2018).231 As jazz musician Miles Davis once said, “If you hit a wrong note, it’s the next note that you play that determines if it’s good or bad.” Regardless of how the error was identified (e.g., by yourself or by a reader), we recommend contacting the journal and requesting that they publish a correction statement (sometimes called an erratum). Several of us have corrected papers in the past. If the error is serious and cannot be fixed, you should consider retracting the article.

A correction/retraction statement should include the following information:

  1. Acknowledge the error. Be clear that an error has occurred.
  2. Describe the error. Readers need to know the exact nature of the error.
  3. Describe the implications of the error. Readers need to know how the error might affect their interpretation of the results.
  4. Describe how the error occurred. Knowing how the error happened may help others avoid the same error.
  5. Describe what you have done to address the error. Others may learn from solutions you’ve implemented.
  6. Acknowledge the person who identified the error. Identifying errors can require a lot of work; if the person is willing to be identified, give credit where credit is due.

In 2018, at a crucial stage of her career, Dr Julia Strand published an important study in the prestigious journal Psychonomic Bulletin & Review. She presented the work at conferences and received additional funding to do follow-up studies. But several months later, her team found that they could not replicate the result.

Puzzled, Julia began searching for the cause of the discrepant results. Eventually, she found the culprit – a programming error. As she sat staring at her computer in horror, she realized that it was unlikely anyone else would ever find the bug. Hiding the error must have seemed like the easiest thing to do.

But Julia did the right thing. She spent the next day informing her students, her co-authors, the funding officer, the department chair overseeing her tenure review, and the journal – to initiate a retraction of the article. And… it didn’t ruin her career. Everybody was understanding and appreciated that she was doing the right thing. The journal corrected the article. She didn’t lose her grant. She got tenure. And a lot of scientists, including us, admire Julia for what she did. Honest mistakes happen – it’s how you respond to them that matters (Strand, 2021).

14.4.3 Responsible citation

When you build upon prior work, you must cite it. This practice ensures that the authors of prior work receive credit for their contribution and allows readers to verify the basis of your claims. Be explicit about why you are citing a source: does it provide evidence to support your point? Is it a review paper? Or does it describe a theory you are testing? You should certainly avoid copying the work of others and presenting it as your own (see Chapter 4 for more on plagiarism).

Make sure you read articles before you cite them. Stang et al. (2018) report a cautionary tale. One of the authors had previously published a commentary criticizing a methodological tool (the Newcastle–Ottawa scale) designed to assess the quality of non-randomized trials. The commentary was highly cited – in fact, it received more citations than the article that originally introduced the tool. Stang et al. (2018) examined a random sample of 96 articles that cited the critical commentary. Despite the fact that the commentary recommended that the tool should not be used, 94 of the 96 articles cited the commentary in support of the scale! It seems that many authors had not read the paper they were citing.

Try to avoid selective or uncritical citation. Only citing prior work that supports your argument is misleading. You should provide a balanced account of prior work, including contradictory evidence. Make sure to evaluate and integrate evidence, rather than simply listing studies. Remember – every study has limitations.

14.4.4 Responsible authorship practices

It is an ethical responsibility to credit the individuals who worked on a research project – both so that they can reap the benefits if the work is influential, and so that they can take responsibility for errors or misuses.232 In 1975, physicist and mathematician Jack H. Hetherington wrote a paper he intended to submit to the journal Physical Review Letters. We’re not sure why, but Hetherington wrote the paper in the first person plural (i.e., referring to himself as “we”). He subsequently discovered that the journal would not accept this for single-authored articles. Hetherington had painstakingly tapped out the article on his typewriter, an exercise he was not keen to repeat. Instead, he opted for a less taxing solution and named his cat, a feline by the name of F.D.C. Willard, as a coauthor. The paper was accepted and published (Hetherington & Willard, 1975).

Currently in academia, the authorship model is dominant. Under this model, authorship and authorship order are important signals about researchers’ contributions to a project. It is generally expected that to qualify for authorship, an individual should have made a substantial contribution to the research (e.g., design, data collection, analysis), assisted with writing the research report, and taken joint responsibility for the research along with the other co-authors. Individuals who worked on the project but do not reach this threshold are instead mentioned in a separate acknowledgements section and are not considered authors.

Authorship order is often understood to signal the nature and extent of an author’s contribution. In psychology (and neighboring disciplines), the first author and last author are typically the project leaders. The first author is often a junior colleague who implements the project and the last author is often a senior colleague who supervises the project.

It has been argued that the authorship model should be replaced with a more inclusive contributorship model in which all individuals who worked on the project are acknowledged as ‘contributors’. Unlike the authorship model, there is no arbitrary threshold for contributorship. The actual contributions of each individual are explicitly described, rather than relying on the implicit conventions of authorship order. The contributorship model may facilitate collaboration and ensure that student assistants are properly credited. You will probably find that most journals still expect you to use the authorship model. It is usually possible – and sometimes required – to include a contributorship statement in your article that describes what everybody did.233 The CRediT taxonomy provides a structured taxonomy of research tasks, enabling uniform contributorship reporting. The tool Tenzing then allows CRediT statements to be generated automatically from standardized forms (Holcombe et al., 2020).

Because authorship is such an important signal in academia, it’s important to agree on an authorship plan with your collaborators (particularly who will be the first and last authors) as early as possible.234 If you find yourself in a situation where all authors have contributed equally, you may have to draw inspiration from historical examples and determine authorship order based on a 25-game croquet series (Hassell & May, 1974), rock, paper, scissors (Kupfer et al., 2004), or a brownie bake-off (H. J. Young & Young, 1992). Alternatively, you can adopt the method of Lakens et al. (2018) and randomize the order in R – the authors even share their code!
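Lakens et al. (2018) share R code for their randomization; the same idea can be sketched in a few lines of Python (the author names and seed below are hypothetical stand-ins, not the authors’ actual procedure):

```python
import random

# Hypothetical list of equally contributing authors
authors = ["Author A", "Author B", "Author C"]

# Fixing the seed makes the draw reproducible, so collaborators
# can verify that the order really was determined by chance
rng = random.Random(20180101)
order = authors[:]          # copy so the original list is untouched
rng.shuffle(order)
print(order)
```

Recording the seed in the paper’s contributorship statement lets readers rerun the draw and confirm the reported order.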

14.5 Chapter summary: Writing

Writing a scientific article can be a rewarding endpoint for the process of doing experimental research. But writing is a craft, and writing clearly – especially about complex and technical topics – can require substantial practice and many drafts. Further, writing about research comes with ethical and professional responsibilities that are different than the burdens of other kinds of writing. A scientific author must work to ensure the reproducibility of their findings. Further, they must report on those findings responsibly, noting limitations and weaknesses as well as strengths.

  1. Find a writing buddy and exchange feedback on a short piece of writing (the abstract of a paper in progress, a conference abstract, or even a class project proposal would be good examples). Think consciously about how to improve each other’s writing using the advice offered in this chapter.

  2. Identify a published research article with openly available data and see if you can reproduce an analysis in their paper by recovering exactly the numerical values they report. You can find support for this exercise at the Social Science Reproduction Platform (https://www.socialsciencereproduction.org) or ReproHack (https://www.reprohack.org).

  • Zinsser, W. (2006). On writing well: The classic guide to writing nonfiction [7th ed]. Harper Collins.

  • Gernsbacher, M. A. (2018). Writing empirical articles: Transparency, reproducibility, clarity, and memorability. Advances in Methods and Practices in Psychological Science, 1, 403–14. https://doi.org/10.1177/2515245918754485

References

Appelbaum, M., Cooper, H., Kline, R. B., Mayo-Wilson, E., Nezu, A. M., & Rao, S. M. (2018). Journal article reporting standards for quantitative research in psychology: The APA Publications and Communications Board task force report. American Psychologist, 73(1), 3. https://doi.org/10.1037/amp0000191
Barnett, A., & Doubleday, Z. (2020). The growth of acronyms in the scientific literature. eLife, 9, e60080. https://doi.org/10.7554/eLife.60080
Bishop, D. V. M. (2018). Fallibility in science: Responding to errors in the work of oneself and others. Advances in Methods and Practices in Psychological Science, 1(3), 432–438. https://doi.org/10.1177/2515245918776632
Cutler, A. (1994). The perception of rhythm in language.
Doumont, J.-L. (2009). Trees, maps, and theorems. Brussels: Principiae.
Gernsbacher, M. A. (2018). Writing empirical articles: Transparency, reproducibility, clarity, and memorability. Advances in Methods and Practices in Psychological Science, 1(3), 403–414. https://doi.org/10.1177/2515245918754485
Hardwicke, T. E., Mathur, M. B., MacDonald, K. E., Nilsonne, G., Banks, G. C., Kidwell, M., Mohr, A. H., Clayton, E., Yoon, E. J., Tessler, M. H., Lenne, R. L., Altman, S. K., Long, B., & Frank, M. C. (2018). Data availability, reusability, and analytic reproducibility: Evaluating the impact of a mandatory open data policy at the journal cognition.
Hassell, M. P., & May, R. M. (1974). Aggregation of predators and insect parasites and its effect on stability. Journal of Animal Ecology, 43(2), 567–594. https://doi.org/10.2307/3384
Heard, S. B. (2016). The scientist’s guide to writing: How to write more easily and effectively throughout your scientific career. Princeton University Press.
Hetherington, J. H., & Willard, F. D. C. (1975). Two-, three-, and four-atom exchange effects in bcc ³He. Physical Review Letters, 35(21), 1442–1444. https://doi.org/10.1103/PhysRevLett.35.1442
Heycke, T., & Spitzer, L. (2019). Screen recordings as a tool to document computer assisted data collection procedures. Psychologica Belgica, 59(1), 269–280. https://doi.org/10.5334/pb.490
Hoekstra, R., & Vazire, S. (2020). Intellectual humility is central to science [Preprint]. https://osf.io/edh2s
Holcombe, A. O., Kovacs, M., Aust, F., & Aczel, B. (2020). Documenting contributions to scholarly articles using CRediT and tenzing. PLOS ONE, 15(12), e0244611. https://doi.org/10.1371/journal.pone.0244611
King, S. (2000). On writing: A memoir of the craft. Scribner.
Knuth, D. E. (1992). Literate programming. Center for the Study of Language; Information.
Kupfer, J. A., Webbeking, A. L., & Franklin, S. B. (2004). Forest fragmentation affects early successional patterns on shifting cultivation fields near Indian Church, Belize. Agriculture, Ecosystems & Environment, 103(3), 509–518. https://doi.org/10.1016/j.agee.2003.11.011
Lakens, D., Scheel, A. M., & Isager, P. M. (2018). Equivalence testing for psychological research: A tutorial. Advances in Methods and Practices in Psychological Science, 1(2), 259–269. https://doi.org/10.1177/2515245918770963
Lewis, M. (2016). The undoing project: A friendship that changed the world. Penguin UK.
Nuijten, M. B., Hartgerink, C. H. J., van Assen, M. A. L. M., Epskamp, S., & Wicherts, J. M. (2016). The prevalence of statistical reporting errors in psychology (1985–2013). Behavior Research Methods, 48(4), 1205–1226.
Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2012). A 21 word solution. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.2160588
Stang, A., Jonas, S., & Poole, C. (2018). Case study in major quotation errors: A critical commentary on the Newcastle–Ottawa scale. European Journal of Epidemiology, 33(11), 1025–1031. https://doi.org/10.1007/s10654-018-0443-3
Strand, J. (2021). Error tight: Exercises for lab groups to prevent research mistakes. PsyArXiv. https://doi.org/10.31234/osf.io/rsn5y
Young, H. J., & Young, T. P. (1992). Alternative Outcomes of Natural and Experimental High Pollen Loads. Ecology, 73(2), 639–647. https://doi.org/10.2307/1940770
Zinsser, W. (2006). On writing well: The classic guide to writing nonfiction (30th anniversary ed., 7th ed., rev. and updated). HarperCollins.