Roughly speaking, there are two models with which most Western criminal justice systems may be said to align: the crime control model (CCM) and the due process model (DPM).
The former's built-in aversion to false negatives translates into a more conviction-prone system, one that prioritizes victims' and society's interests (as it is sometimes put); whereas the latter's ingrained aversion to false positives translates into a more acquittal-prone one, that is, a system that emphasizes the interests of those accused of a crime.
Within (liberal) legal circles the DPM is treated as the accepted, even undisputed, doctrine, meaning that the safeguards against false conviction it endorses are not to be called into question, at least on pain of being ostracized as politically incorrect.
Laudan’s Challenge
But courting political incorrectness in this sense is precisely what the acclaimed philosopher of science Larry Laudan did when he advanced his highly influential legal epistemology research agenda in his 2006 book Truth, Error, and the Criminal Law. And owing to his project's DPM-repellent aura, people with a legal background, initially myself included, approached it with suspicion and discomfort.
That discomfort was mainly rooted in Laudan's epistemological prescriptions, namely, to restrict and even eradicate practices he viewed as truth-thwarting (but that we lawyers see as complying with a set of non-negotiable rights of the accused): practices such as, to list but a few, excluding relevant evidence collected in illegal searches and seizures, excluding non-Mirandized confessions, allowing the accused to remain silent, instructing the jury to infer nothing whatsoever from that silence, prohibiting the appeal of acquittals, and requiring the State to prove its case "beyond a reasonable doubt" (BARD), a standard of proof which Laudan deemed unreasonably exacting.
The Epistemological Argument: Truth and Error
I managed to tame my initial unease because, I reckoned, Laudan was after all the expert in applied epistemology, and he just might be right: maybe we have gone too far by adhering to extreme versions of the DPM, and maybe there are rational criteria for determining when this is so.
And indeed, Laudan's reasoning behind the abovementioned prescriptions seems plausible enough. In broad strokes, it surely makes sense to state, in principle at least, 1) that the more relevant evidence the fact-finder has access to, the more likely she is to find out the truth about an alleged crime, hence Laudan's rejection of evidence-exclusionary practices; or to observe 2) that the closer the criminal standard of proof is to that of the "preponderance of the evidence" (which requires, for a hypothesis to be asserted as proven, a likelihood only slightly greater than 50%), the better we do a) at containing the exponential growth of false acquittals that stems from the policy of granting the benefit of the doubt to the defendant, and b) at reducing mistaken verdicts in total (false negatives and false positives combined) compared to the errors ever more stringent standards would produce; hence the rejection of the BARD evidential threshold, and Laudan's early suggestion to replace it with the standard of "clear and convincing evidence" (CACE), which roughly requires a 70% likelihood of guilt.
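To make the link between error costs and those percentages concrete, here is a minimal sketch in the standard decision-theoretic style (the formalization is the textbook one in this literature, not necessarily Laudan's own). If a false conviction is deemed $n$ times worse than a false acquittal, and only the costs of the two errors are counted, then convicting minimizes expected cost exactly when the probability of guilt reaches

$$p^{*} = \frac{n}{n+1}.$$

Setting $n = 1$ (the two errors are equally bad) gives $p^{*} = 0.5$, the preponderance standard; a CACE-like threshold of roughly $0.7$ corresponds to treating a false conviction as a little more than twice as bad as a false acquittal.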
More specifically, with regard to deterrence, preserving the moral integrity of courts, or preventing prejudiced convictions, the rationales frequently offered for some exclusionary rules, Laudan pointed out 1) that there are surely more efficient ways to deter the police from abusive behavior than keeping certain illegally collected but nonetheless relevant (and possibly highly relevant) evidence out of the trier of fact's reach; and 2) that courts don't seem to care much about their moral integrity when they end up admitting previously excluded evidence just because the accused decided to testify at trial. As for 3), instead of paternalistically shielding the jury from exposure to evidence with grotesque or overly graphic crime-related content for fear that jurors will automatically take it as dispositive, we should consider getting rid of that archaic institution if its evidence-assessment skills are indeed so easily compromised.
Laudan’s Later Work and Escalation
Ten years later, in 2016, came Laudan's second major (and unfortunately last) book in the field, entitled The Law's Flaws; and the alarms sounded even louder to those with DPM-sensitive ears. There Laudan purports to provide empirical support for his diagnosis of the American criminal justice system, to wit: that it suffers from a serious case of false-acquittal infestation, or, put differently, from an exacerbated insistence on covering even the most clearly guilty and dangerous recidivist defendants with (legal) innocence feathers. Dealing with such a harmful situation would require urgent measures, among others, setting the standard of proof at an even lower point than CACE, which Laudan hinted at by saying that (based on his proposed metrics) a false positive is only two times costlier than a false negative (as opposed to ten times costlier, which is the common wisdom).
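Plugging the two cost ratios into the sketch given earlier shows how consequential the shift would be (a back-of-the-envelope illustration only, not Laudan's own metric):

$$p^{*} = \tfrac{10}{11} \approx 0.91 \quad \text{(ten-to-one, the neighborhood usually associated with BARD)},$$

$$p^{*} = \tfrac{2}{3} \approx 0.67 \quad \text{(two-to-one, at or even below the roughly 0.7 of CACE)}.$$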
As most of his critics have highlighted, in both the diagnosis and the suggested treatment Laudan seems to have proceeded with haste and, more specifically from my perspective, under the effects of a confirmation bias; a bias that led him to overestimate the frequency and costs of false acquittals and to do the opposite with respect to false convictions. And all this in order to artificially corroborate a view already present in his first book: that the American criminal justice system was excessively acquittal-friendly and hence too tolerant of false negatives.
The Role of Expected Utility Theory (EUT)
To my mind, said view was the result of Laudan's commitment to correctly employing expected utility theory (EUT) as the only way to give a rational basis to the decision on how demanding the criminal standard of proof has to be. I say a commitment to "correctly employ" EUT because Laudan accused the theory of being frequently misused: the focus, he argued, had been solely on the costs of false positives and false negatives, and not on the utilities of all four possible outcomes (that is, including true convictions and true acquittals as well). According to Laudan, once the benefits of correct trial outcomes are also taken into account, such an exacting standard as BARD cannot be rationally warranted; only lower thresholds can.
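A minimal sketch of the fuller calculation Laudan has in mind might look as follows (the notation is mine and merely illustrative). Let $p$ be the fact-finder's probability that the defendant is guilty, and let $U(TC)$, $U(TA)$, $U(FC)$, and $U(FA)$ be the utilities of a true conviction, a true acquittal, a false conviction, and a false acquittal. Convicting maximizes expected utility whenever

$$p \cdot U(TC) + (1-p) \cdot U(FC) \;\geq\; p \cdot U(FA) + (1-p) \cdot U(TA),$$

which yields the threshold

$$p^{*} = \frac{U(TA) - U(FC)}{\bigl(U(TA) - U(FC)\bigr) + \bigl(U(TC) - U(FA)\bigr)}.$$

If the benefits of the two correct outcomes are set to zero, this collapses into the cost-only formula sketched earlier; Laudan's contention is that once those benefits are given plausible non-zero values, the resulting threshold falls well short of the level conventionally associated with BARD.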
But once fixed, whatever that lower standard might be, it should be regarded as incorporating all the benefit of the doubt we want to give to the defendant. With this principle, which Laudan named the "principle of indifference", he meant that no other device, procedure, or rule of evidence should be used to tilt the scales of justice in the defendant's favor. Fixing the standard of proof, in other words, ought to be the only means by which we channel into the system our desire to protect the innocent from being wrongly convicted; otherwise we lose an essential issue over which a societal consensus may arise, together with the opportunity to empirically test whether the collectively agreed-upon ratio of errors (the one underlying the settled standard of proof) is actually being obtained.
Laudan’s Epistemological Project and its Limits
With this mindset, it is not surprising that Laudan's original epistemological project of identifying the justice system's rules that increase the risk of error mutated into a one-sided or partial effort focused exclusively on the rules that help structure the evidence-admission and trial phases (leaving criminal investigation and plea bargaining out of the picture); an effort aimed at denouncing only those rules that increase the risk of a false negative. According to Laudan, this is a risk already made greater than that of a false positive precisely by the decision to set in place a standard of proof higher than the preponderance of the evidence, which is why any additional step in that direction (despite its possible status as a right of the defendant) is deemed by Laudan an epistemically dysfunctional feature of the justice system, one that, as previously stated, makes it excessively acquittal-friendly.
Nonetheless, I claim that the roots of this diagnosis go deeper and land on a set of background assumptions made by Laudan, a crucial one being that incapacitating convicted defendants with recidivist inclinations (namely, for Laudan, those with criminal records) is what most justifies the infliction of legal punishment in its incarceration modality. Therefore, Laudan would suggest, we should unleash this incapacitation potential by identifying and eliminating whatever rules are currently holding it back.
A New Avenue: The Moral Communication Aspect of Punishment
Now that we have spotted the connection between justificatory accounts of criminal punishment and legal epistemology’s research agenda, I am hopefully in a better position to tell you a bit more about my current academic interests. They have to do with exploring how emphasizing a different justifying aspect of punishment might have an impact on legal epistemology.
More specifically, I have in mind the way legal epistemology could be reshaped by highlighting the moral communication aspect of punishment, whereby a moral agent, the State, publicly blames another, the convicted defendant, for her wrongdoing. From this perspective, the latter is not treated mainly as a self-interested prudential entity that responds only to positive and negative stimuli, nor as an incorrigible beast unable to be reasoned with, a being which can only be caged so it cannot keep harming others; but as an agent capable of understanding the reasons why she is being scolded, and capable also of repentance, self-reform, and making amends. The former, for its part, would have to earn the right to publicly blame those convicted, so as to become the moral agent, speaking on behalf of the victim(s) and the community, that it purports to be. In doing so, it would have to comply with a demanding notion of fair play and of respect for suspects and defendants while going about investigating, charging, prosecuting, and convicting them.
Thus, legal epistemology's priority would be to work in tandem with the field of "wrongful convictions" in order to identify their contributing factors and the proposals to ameliorate them. And given that confirmation bias is one of the most pervasive factors leading to wrongful convictions, legal epistemology could mainly consist of a debiasing project focused at least evenly on the trial and pre-trial phases. The latter, particularly criminal investigation and plea-bargaining practices, has largely escaped epistemic attention, and unfairly so if we take notice of the fact that avoiding trial is increasingly the normal path to obtaining a conviction.
To read more about this and engage in the discussion, check out this recent book: Aguilera, Edgar, Epistemología jurídica: cuestiones, debates y propuestas actuales (Zela, 2025) [LINK]. You can also check out the author’s previous published work on the same topics. [LINK]
My work on this blog post was supported by the María de Maeztu Unit of Excellence grant CEX2021-001169-M (funded by MICIU/AEI/10.13039/501100011033).
SUGGESTED CITATION: Aguilera, Edgar, “Legal Epistemology’s Research Agenda: Exploring an Alternative Avenue”, FOLBlog, 2025/10/22, https://fol.ius.bg.ac.rs/2025/10/22/legal-epistemologys-research-agenda-exploring-an-alternative-avenue/