These simple observations motivate a discussion of the fundamental question: should scientific negligence feature in definitions of scientific misconduct, and what are the ethical trade-offs involved in doing so? We argue that there is a strong rationale for including negligence provisions in scientific codes, because such provisions (together with standards of competence) are needed to justify trust in both the claims of individual scientists and the claims of the scientific community. However, we identify dangers of over-inclusion (too many negligence provisions) as well as over-enforcement, where complaint procedures against very minor infractions are pursued. The inclusion of negligence provisions in codes of conduct thus must tread carefully to minimize these dangers.
4.1 Trust and the Rationale for Negligence Provisions
In this subsection we show how particular structures of autonomy and trustworthiness characterize the ‘logic of professionalism’ [13], and how this implies that negligence provisions fulfil a crucial role for professions. Without specifying negligence as a category of misconduct, an important reason for trusting the professions is absent. We will then argue that this rationale for professional negligence provisions applies in the context of scientific research as well.
4.1.1 The Rationale for Professional Negligence Provisions
The logic of professionalism is an approach towards organizing work where the practitioner and the community of practitioners are given a large degree of autonomy to carry out the ‘work’ as they see fit [13]. Real occupations partake in this logic to varying degrees, and typically mix in two alternative logics as well: the ‘logic of bureaucracy’, where managers control what work is carried out, and the ‘logic of markets’, where clients or customers control what work is carried out. Sociologists have often taken medicine to be a paradigmatic example of an activity organized according to professionalism in the sense that there is a large degree of operational autonomy [15, 16]; however, it is clear that physicians also operate under some bureaucratic control (e.g. audits) as well as market incentives (and increasingly so in recent decades: [17]).
In the logic of professionalism, professional autonomy and trust in professionals are two sides of the same coin. Trust in professionals means that relatively few demands for control and transparency are made of professionals (and insofar as they are made, they can be linked to an erosion of trust; see also [18]). Hence trust is what allows professionals to maintain their autonomy. Conversely, professionals’ autonomous decision-making does not mean that the decision-making is voluntaristic, but rather that it is guided by what Freidson called “service ideals” [13]. For instance, physicians and medical professionals are typically oriented to the ideal of care, and this implies, inter alia, that in their decision-making they should prioritize care for the patient over self-serving incentives. In this way, because professional autonomy is grounded on service ideals, trust in autonomous professionals is justified. Professional autonomy and trust in professionals are inextricably bound.
This fundamental connection between trust and professional autonomy is crucial for understanding the key role that negligence provisions play in well-designed professional codes of conduct. Negligence is the lowest form of intent (after ‘purposely’, ‘knowingly’, and ‘recklessly’), and is the only form of intent where the undesirable outcome was not known as a possible outcome by the transgressor (for more discussion, see [9]). However, the transgressor is culpable because he or she should have known about the possibility of the bad outcome. Thus, negligence provisions express the responsibility a professional has for knowing about the possibility of such outcomes, and for taking precautions to avoid them.
Concretely, negligence provisions therefore refer to the awareness that professionals need to have of their competences, and in particular to the awareness of lack of competence. If competence is lacking, the professional must either take steps to shore up their competences (e.g., undergo additional training or education), or else refrain from undertaking services that lie outside the purview of their competence, and refer the client or patient to a colleague. How such provisions are formulated in actual codes of conduct will be discussed in the next section.
Negligent action by a professional thus is a breach of trust. One can trust a person to have good intentions, but trust in professionals is more than trust in their good intentions: it is trust in their competences, and ultimately, trust in the knowledge that underlies those competences [20]. Any client or patient is placed in a vulnerable position, and when the professional does not live up to reasonable standards, this is a sign that trust in this particular professional was misplaced. Categorizing negligence as a type of transgression of integrity norms recognizes that the professional was culpable for not being trustworthy – and not the client/patient for placing trust.
Conversely, if a professional body did not recognize negligence as a transgression, then professionals could no longer ask for the trust of their clients or patients. After all, patients or clients themselves, since they lack the requisite knowledge and training, cannot evaluate how competent a given professional is. A patient/client could then no longer expect that a professional intends to provide the best service possible, or to take all the reasonable steps that ensure that the services provided meet all reasonable standards of competence. Thus, giving (medical) professionals discretion without holding them responsible for negligence incoherently gives them rights without holding them to the naturally associated responsibilities. Or to put the same point more emphatically: if negligence is not problematized, this implies nothing less than the collapse of the logic of professionalism.
4.1.2 The Rationale for Scientific Negligence Provisions
The fact that scientific codes of conduct do not have negligence provisions would only be problematic if scientific research should adhere to a logic of professionalism. Elsewhere it has been argued that it in fact should [2], and the crux of Desmond’s argument was to point to the unavoidable role played by individual judgment in the activity of scientific research (see also [21, 22]). The day-to-day activities of formulating hypotheses and developing methodologies, overcoming challenges, analysing and interpreting data: in all of these, scientists have a large degree of operational autonomy whereby they cannot simply follow formulaic methodologies but must use their individual discretion. Moreover, many other scientific activities that support the scientific community – refereeing, supervising, editing – involve scientists following their individual judgment rather than a set of rules (as would be in a logic of bureaucracy).
This type of individual autonomy is not just desirable for scientific research; it is also inevitable. A given research project may be so specialized that it may be difficult for even scientific peers to understand, let alone non-scientists. In this sense, the degree of autonomy that a scientist has in conducting his or her work is certainly not less than the autonomy a physician receives for making diagnoses and prescribing (or at least proposing) treatments.
In sum, an important aspect of the logic of professionalism, namely individual autonomy, is part and parcel of scientific research. Note, however, that this does not imply that individual autonomy is absolute or that the decision-making of individual scientists cannot be influenced by non-scientific factors. On the contrary: the preferences of granting agencies, private companies, and governments may influence scientific decision-making (e.g., in choice of research topic) [1]. The lesson here is that individual autonomy can be curtailed – the logic of professionalism may be mixed in with other organizational logics – but that scientific research still depends on individual scientists retaining a large degree of operational autonomy comparable to other professionals.
Furthermore, part of what it means to act with scientific autonomy is to take responsibility for negligence. If negligence is not explicitly understood as a transgression of integrity norms, then trust in scientists is undermined, as will now be argued. To show this, we disambiguate individual trustworthiness from collective trustworthiness.
Individual trustworthiness. Consider the following scenario. Assume that scientist A claims to know something, for instance some claim about the degree of climate change. When can a non-scientist B trust A? One condition for trust is that B should be able to assume that A is not making the claim negligently, i.e., that A has taken all reasonable precautions to avoid being wrong. If A has not taken reasonable precautions to avoid error, and moreover, if it is common knowledge that scientists do not take such reasonable precautions, then trust in A’s assertions is not justified.
This is a fortiori the case when it comes to expert claims (e.g., concerning climate change, or COVID-19). In such scenarios, A’s claim may have far-reaching impact on social norms and thus on B’s life. For B to trust A, B should be able to assume that A has taken reasonable precautions to avoid negligence. However, note that this does not mean that B should have to assume that A has avoided error. Error as such is inherent to scientific research, and need not be a signal of a lack of trustworthiness. Only negligent error can play that role: a negligent error is one that a scientist could have avoided if he or she had taken reasonable precautions that moreover can be expected of scientists of comparable training.
In sum, given the inevitable autonomy with which research (and supporting activities) must be conducted, non-specialists often cannot evaluate claims for themselves, but must make the decision to trust or not to trust. If the specialists do not take all reasonable precautions to avoid error, then trust in individual specialists is not justified. Needless to say, such an erosion of trust would be damaging in many different ways to the scientific community and also to society as a whole. Hence safeguarding individual trustworthiness is the first reason why it is important to include clear stances on negligent research in scientific codes of conduct.
Collective Trustworthiness. More important perhaps is collective trustworthiness, i.e., the trustworthiness of a community of scientists. Here the trust of a non-scientist in the community of scientists is predicated less on the intentions and competences of individual scientists than on the value norms and standards of competence of the community.
Clear standards of competence are one of the prerequisites for being able to define what precautions are ‘reasonable’, and hence for being able to speak of negligence when such precautions are not taken. In other words, without standards of competence, it becomes impossible for a community to identify negligence in the actions of individual members. The distinction between ‘honest error’ and ‘negligent error’ becomes impossible to make in any grounded way. Thus, without standards of competence, which can only be defined at the level of the community, trust in scientists cannot reach the level of trust that clients/patients have in professionals.
The absence of collective value norms allows research cultures to arise where integrity norms are habitually transgressed. Such ‘perverse’ research cultures thus normalize transgressions of integrity norms. Examples include the exaggeration of research findings and authorship practices [24]: questionable research practices (QRPs) that do not yet constitute flagrant transgressions (FFP). Yet, strictly speaking, if there are clear integrity norms, such QRPs constitute a category of negligence, even if the scientists involved may have thought of such behaviors as ‘normal’. Here also negligence provisions become important, since the discovery of perverse research cultures by non-scientists would put a subsection of the scientific community in a bad light, and erode trustworthiness.
By contrast, when there are clear standards of competence and clear value norms, then the trustworthiness of individuals and that of the community can become dissociated to a certain extent. If an individual commits some kind of negligent misconduct, then this need only reflect on the trustworthiness of the individual rather than on that of the community.
In sum, negligence norms enhance individual and collective responsibility for competent, trustworthy research. By contrast, if negligence were not problematized by the scientific community, then trust in assertions by scientists may not be entirely unjustified, but it would not be justified to the degree that clients/patients trust professionals. This, in turn, would motivate further curtailments of scientific autonomy. Hence the rationale for problematizing scientific negligence in codes of conduct: to help safeguard justified trust in the scientific community.
4.2 Clarifying the Responsibilities for Avoiding Negligence
The rationale outlined in the previous section supports including statements in codes of conduct on the importance of avoiding negligence. However, how such statements should be formulated, i.e., what precise responsibilities such statements should entail, is a different, and more difficult, question.
In the results section we showed how, in the few cases where negligence provisions are included, this detail is lacking. What precise responsibilities do scientists have in avoiding error and transgressions of integrity norms? Such responsibilities are often cited in medical codes of conduct; we discuss here what they could be for scientific codes of conduct.
4.2.1 Standards of Competence
A finding of negligence means that a mistake was made even though a “reasonable precaution” could have avoided that mistake. Such “reasonable precautions” are defined relative to the standards of competence that can be expected of similarly certified professionals (or scientists). Hence, stipulating responsibilities to avoid negligence is only possible if there are also relatively clear standards of competence against which the work of the professional or scientist can be held.
One fundamental difficulty in stipulating ‘standards of competence’ for scientists is that – to put it simplistically, but in this context, sufficiently accurately – science is a knowledge-producing activity, whereas professional activities are knowledge-application activities. It is true that there is important overlap between professional activities and scientific research: this is why, for instance, many of the most specialized professionals in medicine (or law, or engineering) also have appointments in academia. Scientific research thus not only provides the knowledge required for professional practice, but can also be stimulated by needs and gaps in professional practice. Nonetheless, while a single person may engage in both scientific research and a professional practice, the two activities are conceptually distinct.
Thus, because science is a knowledge-producing activity, it may not always be possible to identify clear standards of competence in as much detail as it might be for a medical professional. The standards of competence are changed by the activity of scientific research itself. Some philosophers have argued that what counts as ‘competence’ (in the sense of being able to apply a valid methodology) is only relative to certain paradigms [25]; others have gone so far as to argue that there is simply no standard scientific methodology and hence no standards of competence [26].
While one must surely grant that there is no universally applicable standard scientific methodology, this does not mean one could not identify standards of competence more limited in scope. In fact, such standards are implicitly invoked in the process of peer review: without them, submitted work could not be judged to be of “good quality” or “poor quality”. One could plausibly surmise that standards of competence are tacitly known by practitioners of a sub-field: each sub-field has particular expectations on what a well-written or well-constructed article looks like, what the appropriate level of specificity of the argumentation is (if relevant), how the data are represented (if relevant), what types of literature are engaged with, and so on.
It may not be possible to formulate all such standards with great precision: some may necessarily remain more tacit, and subject to individual discretion, than others. Nonetheless, similar remarks could be made of professional standards: different specializations may have different standards, and not all standards are necessarily codified with precision. Yet that has not prevented standards of competence from being specified and negligence provisions from being included in professional codes of conduct.
The importance of including negligence provisions goes beyond whether or not standards of competence can be defined. Precisely defined standards are important for third parties to be able to identify acts of negligence: precise definitions give verifiable conditions or criteria which may or may not apply in a given situation. A loosely defined standard, with only a few such criteria, thus may not allow for findings of negligence. Yet the importance of defining standards of competence lies not in prosecuting bad research, but in safeguarding justified trust in science.
As an illustration, consider a competence like ‘critical thinking’: an example par excellence of a competence that would be difficult or impossible to define by means of verifiable criteria. This would mean that third parties could never verify whether a particular scientist or project lived up to the relevant standards of competence. Yet including negligence provisions with regard to a non-verifiable standard would not be useless, because it would help create a healthy research culture and thus help safeguard trustworthiness.
Yet some standards are eminently codifiable, especially those concerning procedures. Actions whereby such procedures were not followed, whether by conscious intention or not, would then be culpable as procedural negligence. Examples of such procedural negligence would include:
- Making easily avoidable errors when handling data, such as mislabeling samples.
- Publishing without first trying to replicate one’s results when such replications are relatively easy to do.
- Carrying out statistical analysis despite not having the requisite competences, leading to wrong statistics being applied, P-values not being interpreted correctly, etc.
Many of the support activities in the scientific community also follow relatively clear standards of competence. Institutions (i.e., administrators and the procedures they set up) have responsibilities, and when these are not met, institutions may be found to have acted negligently. Individual researchers acting in the capacities of referee, editor, or supervisor also must follow relatively definable standards, with associated forms of negligence. For instance, a referee may review an article or proposal negligently by, for instance: devoting insufficient time or energy to the task, thus not taking reasonable precautions to avoid forming wrong judgments of the article or proposal; or not refusing to review the article or proposal if lacking in competence. An editor may neglect his or her responsibilities by, for instance: accepting peer-reviews by scientists who clearly lack the necessary competences; asking peer-reviewers to make editorial judgments; or not specifying clear standards or expectations for referees. Finally, a supervisor may be guilty of negligence if, for instance, he or she does not take reasonable precautions to prevent members of the research team from taking large risks, or does not offer sufficient guidance to supervisees.[2]
In sum, the elucidation of standards of competence is not easy, and given the novelty of research it will not be possible to bring all aspects of scientific research under the purview of such “standards”; yet it is possible, since such standards are already tacitly present, especially in many of the support activities in the scientific community. Without such standards scientists could not assume the roles of referee, editor, or supervisor, where other scientists are supported in reaching certain standards of competence. These standards are currently wholly implicit; formulating them explicitly and integrating them into codes of conduct would play an important role in emphasizing the importance of avoiding negligence.
4.2.2 Reasons for Caution
We would like to close the discussion by addressing some of the dangers of negligence provisions. Introducing inappropriate standards of competence, where unwitting scientists would be found guilty of misconduct in harsh and inappropriate ways, would have a similar “chilling effect” to that which criminalizing even egregious research misconduct would likely have [27]. To avoid such outcomes, caution needs to be exercised both in defining the standards and in reflecting on the goal of sanctioning misconduct.
Ethical clarity vs. Sanctioning. First, codes of conduct aim to provide clarity about ethical principles of research (and of research integrity), and rarely if ever give detail about the sanctioning system attached to integrity transgressions. A sanctioning system is constrained by a whole host of other ethical and legal principles, such as the principle of proportionality. Hence, just because an act of negligent research would be considered a transgression against the norms of integrity does not mean that said act should be punished in a way that occasions even greater damage to science.
Here is an example: just because a researcher may exaggerate his or her research findings does not mean that a formal institutional procedure regarding negligence should be started. Such procedures are fraught with uncertainty and often involve reputation-loss for the accused, and hence can lead to future lack of trust in one’s research partners (who blew the whistle?) and/or resentment towards institutional structures. The importance of the principle of communicating one’s research findings truthfully can also be conveyed by informal questioning by colleagues. There is a balance to be struck between a culture of credit-maximization, where individuals will try to get away with anything they can, and a culture of blame and fear.
In this regard, clear principles about negligence would best be embedded in what in professional contexts is known as “just culture”, which is often described as an organizational culture where the guiding response to mistakes is not “who is to blame” but “what went wrong” [28, 29]. It is clear that if many QRPs were also classified as (negligent) misconduct, then the scientific community would need to seek to avoid a blame culture (a culture that is unfortunately suggested by the emphasis on FFP transgressions in many codes of conduct, since these almost always involve conscious intentions) and take all reasonable steps to strengthen a just culture.
Hence there seems to be no principled reason why a good and appropriate sanctioning system could not be put in place; given the success of the just culture approach in other domains, this seems plausible. Moreover, the main message of this paper is that a clear stance on negligence at the level of ethical principle is desirable for ethical-conceptual reasons: only in this way can the trustworthiness of scientists and science be justified, as well as the large degree of autonomy that scientists and science receive. How precisely negligence provisions in codes of conduct would be translated into sanctioning systems, and how such sanctioning systems would be enforced, are very different questions, and a full treatment is outside the scope of this paper. Achieving ethical clarity should be separated from these questions.
Negligence Provisions and Distrust. Not only do sanctioning systems need to be chosen with care; the standards built into negligence provisions themselves must be chosen with care as well. If too much precaution is considered ‘reasonably expected’, this may have the perverse effect of decreasing trust between scientific agents.
As an illustration, consider a situation where institutions can be held responsible for the negligent conduct of an individual scientist who is a member of the institution. Such institutions, fearful of prosecution in courts of law, would then have an incentive to exact greater degrees of bureaucratic control over individual scientists. This would be an example where too much responsibility is expected of institutions, with the perverse effect of curtailing individual autonomy. Another, similar example would be a negligence provision where supervisors are expected to double-check the work of lab members. If a lab member were then found guilty of misconduct, the supervisor could be considered culpable through negligence. This would lead to incentives to further control lab members.
The reasonableness of the standards is also crucial for the effectiveness of RI training. Such training, at least if effective, allows integrity norms to become intersubjective social norms: where I know the norm, you know that I know the norm, I know that you know that I know the norm, and so on. By coordinating expectations in this way, setting norms – even in a code of conduct – can impact patterns of behaviour even without explicit sanctioning systems. Yet, if the specified norms are unreasonable, and if it becomes intersubjective knowledge that they are unreasonable, then integrity norms lose all their normative force and become window dressing that is divorced from actual research culture.
In sum, great care should be taken when adding negligence provisions to make sure that the associated expectations are reasonable, and in particular that setting such standards would not have detrimental effects on trusting relationships between colleagues, but can instead foster a flourishing research culture.