HOW TRUSTWORTHY? An Exhibition on Negligence, Fraud, and Measuring Integrity
Eagle Nebula (Messier 16) // NASA, ESA / Hubble and the Hubble Heritage Team (2015) // Original picture in color, greyscale edited © HEADT Centre (2018)
The goal of this exhibition is to increase awareness about research integrity. The exhibition highlights areas where both human error and intentional manipulation have resulted in the loss of positions and damage to careers. Students, doctoral students, and early career scholars especially need to recognize the risks, but senior scholars can also be caught, sometimes for actions taken decades earlier. There is no statute of limitations for breaches of good scholarly practice.
This exhibition serves as a learning tool. It was designed in part by students in a project seminar offered in the joint master’s programme on Digital Curation between Humboldt-Universität zu Berlin and King’s College London. The exhibition has four parts. The first deals with image manipulation and falsification, ranging from art works to tests used in medical studies. The second focuses on research data, including human errors, bad choices, and complete fabrication. The third is concerned with text-based information and discusses plagiarism as well as fake journals and censorship. The last section covers detection and the nuanced analysis needed to distinguish the grey zones between minor problems, gross negligence, and deliberate fraud.
Detecting fraud is important, but it is equally important for people in decision-making roles to understand when actual long-term damage to scholarship has taken place, and when scholars have done something wrong without lasting negative consequences for science and scholarship in general. Students especially need to learn the distinction between error and intent, between negligence and gross negligence. Everyone makes mistakes, but undermining the reliability of scholarly results harms the whole scholarly community.
Images & Integrity
Images play an important role in many scholarly disciplines, in medical research as well as in art history. Images open up the invisible: they represent data and facilitate the understanding of complex contexts. Their power of persuasion is much praised, but the ability to manipulate images digitally has increased distrust in them.
Faked images can falsify reality, simulate results, or show an idealized world. This part of the exhibition shows the different ways in which images can be manipulated and how difficult it is to detect errors or to expose forgeries. Examples are drawn from medical research, astronomy, and art history, as well as from photo competitions. Look closely: what seems deceptively real is not necessarily authentic.
1. The "Greatest" Forger
2. The Dark Side of the Moon
4. Experts Against Experts
5. What Remains of the Master's Hand?
6. Disappearing Data
Text & Integrity
Text is ubiquitous in the academic world, including in the natural sciences, where some scholars characterise the descriptions in articles as mere advertising for the real results in labs or databases. Nonetheless, verbal descriptions still matter. Citations in academic papers have themselves become an industry, and some universities use citation counts as a factor in hiring, tenure, and promotion decisions. This has tempted some people to inflate their citation counts by citing themselves frequently, sometimes with the argument that doing so avoids self-plagiarism. Plagiarism itself has become a widely discussed topic, with commercial software systems for detecting it, and with people who set strict rules against copying even in cases that once represented a legitimate reuse of the standard language of a field. Actual plagiarism is an ethical and potentially a copyright issue, but even serious copying does far less damage to the fabric of scholarship than data falsification.
The drive to publish has created a market for predatory journals. These journals charge for publishing through Article Processing Charges (APCs) and attract authors by promising fast turnaround via an abbreviated or entirely fake review process. The contents of these publications are not necessarily false, but they have skipped the review process meant to catch errors and to detect untruths.
At the other end of the spectrum is censorship, which takes a variety of forms. Sometimes it involves local governments rejecting texts for libraries or schools, and sometimes it becomes a ban on works that express unwanted viewpoints or opinions. Not all restrictions are harmful, such as those against incitement to violence and hate crimes, but censorship can also suppress information about, for example, evolution, or lead to bans on novels.
10. What Counts as Plagiarism?
11. Text Recycling vs. Self-Plagiarism
Data & Integrity
One of the principles of modern scholarship is the ability to build on past research results. This is true for the humanities as well as for the social sciences, and especially for the natural sciences. Fake or unreliable data undermines any new work that tries to build on it. The cases in this section all involve some form of false data that reached public notice, but not all were intentional fraud. Contamination of samples and disagreements about the interpretation of data played a role in some. More problematic are other cases where process falsification undermined the credibility of the data, such as fake peer reviews or falsified consent forms. There are also cases where notable researchers simply grew weary of the work involved in creating real data, and knew enough about how people in the research world think to reverse-engineer the process and fabricate data that reflected the desired results.
It is hard to say how much false data is part of the current scholarly record. Replication studies are not popular with journal editors or scholars, and many experiments go untested. Scholarly data are too infrequently made available for re-analysis, and recreating the data themselves is expensive and time-consuming. Some attempts have been made to create fraud detection tools, but the tools tend to be discipline-specific and often rely on statistical expectations whose fit to a particular case must always be examined.
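One statistical expectation such tools sometimes draw on is Benford's law, which predicts that in many naturally occurring datasets the leading digit 1 appears far more often than 9. The sketch below (plain Python, with hypothetical helper names; a real tool would use a proper significance test and check the law's applicability to the data at hand) compares observed leading-digit frequencies with that expectation:

```python
import math
from collections import Counter

def benford_expected(digit: int) -> float:
    """Expected frequency of a leading digit under Benford's law."""
    return math.log10(1 + 1 / digit)

def leading_digit(x: float) -> int:
    """First significant digit of a nonzero number."""
    s = str(abs(x)).lstrip("0.")
    return int(s[0])

def benford_deviation(values) -> float:
    """Mean absolute deviation between observed leading-digit
    frequencies and Benford's expectation (0 = perfect fit)."""
    digits = [leading_digit(v) for v in values if v != 0]
    counts = Counter(digits)
    n = len(digits)
    return sum(abs(counts.get(d, 0) / n - benford_expected(d))
               for d in range(1, 10)) / 9

# Quantities produced by multiplicative growth (here: powers of 2)
# fit Benford's law closely; uniformly spread numbers do not.
grown = [2 ** k for k in range(1, 100)]
uniform = list(range(100, 1000))
print(benford_deviation(grown), benford_deviation(uniform))
```

A large deviation is not proof of fabrication, only a prompt for closer inspection, which is exactly the caveat above: the statistical expectation may simply not fit the dataset in question.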
12. Misidentified Cells
13. Fiction Science
14. Answers Before Questions
16. Monkey Business