Carrie S. Alexander

Domains of Uncertainty: The Persistent Problem of Legal Accountability in Governance of Humans and Artificial Intelligence


Christian Perspectives on Science and Technology, New Series, Vol. 3 (2024), special edition: Artificial and Spiritual Intelligence: Proceedings of the 2023 Conference of the International Society for Science & Religion, with guest editors Marius Dorobantu and Fraser Watts.

Abstract: AI poses a challenge for current legal frameworks, referred to as an “AI liability gap.” Current legal systems based on knowledge, intent, and assumptions of moral agency evolved over hundreds of years, when it was still widely believed that human intelligence was not “natural” but “created” or, to borrow the term used for AI, “artificial.” In fact, it was the suggestion that human minds might be “natural” that provoked a cultural crisis in the late nineteenth century regarding society’s ability to govern humans untethered from divine accountability. This article looks at the way many in late-nineteenth-century England and the United States navigated this cultural crisis, which led to a breakdown in relationship that has persisted throughout the past century and a half and now undermines current efforts to construct meaningful dialogue regarding the effects of AI on humans and society. The article argues for a different approach, one that accommodates and embraces doubt and uncertainty as foundations for meaningful relationship and dialogue, which are essential prerequisites for efforts to govern AI and address the challenges it now presents.


This article explores the problem of establishing legal intent in humans and artificial intelligence (AI). First, I discuss the problem of the absence of will and intent in artificial intelligence and the AI liability gap that has formed as a result. This gap is a function of a mismatch between AI, on the one hand, and a legal system that has evolved over hundreds of years to govern humans, on the other hand. In other words, can a legal system designed to govern humans, who are assumed to be capable of something we call “intent” or moral will, govern AI technologies, which are assumed to be incapable of intent or moral will? These assumptions, which currently predominate within legal systems regarding AI and human morality, are the subject of debate among many scholars.[1] The question, for example, “can AI be held morally and legally accountable if it is ‘merely’ artificial?” can be examined in light of the historical assumption, as it came to be embedded in English and American law, that humans were also not fully “natural” but rather had spiritual or metaphysical properties that made them morally capable. If it was humans’ “non-naturalness” or the belief that they were “created” or “artificial” that supported the assumption that humans were morally capable, then why was this, and what legal or cultural purpose did humans’ “artificiality” serve? The article recasts the late-nineteenth-century cultural crisis regarding theories of evolution not as a debate over whether humans were natural or non-natural (artificial/created/metaphysical), but as an expression of profound discomfort with and disagreement over how to manage the unseen and uncertain domain of the human mind, moral will, and legal accountability.

Then, the article examines doubt and relationship as tools or skills for managing domains of uncertainty and the unseen, such as human intent and AI decision-making. It posits that those adept at using these skills or this type of spiritual intelligence—those who are more willing to doubt, to wonder, to acknowledge uncertainty, and to build and maintain relationships with those with whom they disagree, even though they have traditionally been marginalised or ostracised by Christian circles—will be best equipped to engage in the kind of interdisciplinary and creative thinking required to develop systems of law that will be able to manage humans and AI. They will be best positioned to work with society to build useful and relevant narratives for governing humans, corporations, and AI.

Domains of Uncertainty

“Artificial” vs “Natural” Intelligence and the AI Liability Gap

The idea of “artificial intelligence” implies that there is an intelligence that is not artificial, perhaps “natural.” Natural intelligence is often assumed or implied to be human intelligence. And AI is often compared with human or natural intelligence, even though many of these comparisons may be based on misunderstandings and tenuous assumptions.[2]

The idea that humans are natural has been thoroughly critiqued in various ways. For instance, many still believe, and take political positions grounded in the belief, that humans bear metaphysical origins or value.[3] Also, ecocritical and feminist scholars, such as Donna Haraway and many who followed her, have argued that humans are cyborgs by virtue of vaccines or other technologies on which humans have become dependent.[4] On the basis of these two views, many advocate granting legal value—and therefore legal rights and standing—to unborn human babies/foetuses[5] and embryos, to humans in a post- or transhuman age, and to the environment and nonhuman animal species.[6] These debates have unfolded in a context where corporations were long ago granted legal personhood despite the apparent ethical and legal problems posed.[7]

With the rapid development of AI, these debates regarding who or what counts as a “legal person” have become more complex, and have taken on a special urgency. The stakes are higher, at least when we[8] think about human rights, and what it would mean to extend to AI the rights we tend to reserve for ourselves.[9] When we think about whether AI can or should ever be granted legal status, we should probably consider that we may be talking about the legal rights of our future selves—human-based beings that may be so intertwined with AI that it may be, as it is already becoming, impossible to draw lines between where humans end and AI begins.[10]

As we think about the legal rights we may assign our future selves it will be useful to think about how we have governed our past selves. Modern Western legal systems are deeply rooted in the concept of intent, known in legal terms as scienter or mens rea.[11] Contemporary debates regarding how we will govern AI focus on the current lack of will or intent in AI, and on the fact that AI is frequently a “black box”: however AI makes its decisions, those decisions often cannot be understood by human intelligence, nor traced back to the intent of the programmers or manufacturers. The legal frameworks we currently have for attributing responsibility for harms through tort, criminal, or civil law are therefore inadequate to govern AI. AI technologies, lacking a moral will, but making decisions apart from their human designers, break the bounds of most modern legal frameworks.[12] This problem has been emerging for some time as computing technologies have grown ever more powerful. However, relative to the centuries over which our legal systems have evolved, the problem we are now facing with AI is new in that it is escalating at a scale and scope that is forcing us to confront the weaknesses that have been part of modern legal systems all along. AI is therefore a new problem riding on top of very deep and old problems that have remained unresolved—despite countless attempts and endless energy to address them.

Our legal frameworks are mismatched to AI not only because they were designed to govern humans, but because they have historically evolved with contradictions over what humans are and how law can or should govern them. Our legal frameworks, resting on assumptions of human moral agency, evolved over hundreds of years, when it was still widely believed that human intelligence was not “natural” but “created” or, to borrow the term used for AI, “artificial.” Many people in the United States, England, and throughout Europe believed, as many still do today,[13] that humans were not just a bundle of physical parts,[14] but a hybrid of physical and metaphysical elements, some of which could only be explained, it was thought, by theories of divine origin. Indeed, it was the claim that human minds were of natural origin, rather than created or “artificial,” that triggered a cultural crisis in the late nineteenth century in England and the United States. The question that rippled through society for decades was: can humans be held morally accountable if their intelligence is “merely” natural, the result of evolutionary processes? Interestingly, this is the opposite of the question asked about AI today: can AI be held morally accountable if its intelligence is “merely” artificial, the result of human ingenuity and algorithmic processes? Why is it, if we have already answered the first question, that we would have trouble reversing it and going the other way? If humans were thought, once, to be moral only because we were created, nonnatural beings, why is it that we find it such a struggle now to imagine nonnatural beings, which we create, as moral beings?

But looking at legal and cultural history since Darwin, we should first pause before assuming that we have in fact “answered the question” of how humans may still be considered or held morally accountable. Though many still believe that metaphysical endowment, for instance by a divine entity, is the only means by which a person or being can be capable of moral functioning and accountability, others believe that morality comes from many other possible origins, and indeed, these differences of opinion or belief, often fiercely held, are the root of the problem. But, as this paper will show, much of the pre-Darwinian view of humans as metaphysical beings, and the assumption that this quality makes humans moral, lives on, not only among people of faith, but also in current codified beliefs underpinning legal personhood and human rights. It would be tempting for those coming from a Christian perspective to simply gloss over this problem, just as they did in the late nineteenth century, and assert easy answers to these questions. They might insist that humans are not “merely” natural and that they possess metaphysical qualities that make them morally accountable in a way that all other beings are not. They could then easily argue that for God to create humans, who possess mortal bodies with metaphysical qualities or “souls” that are eternally accountable to God, is entirely different from humans creating artificial entities or beings that are in some sense immortal but lack these metaphysical qualities with no apparent and certainly no eternal cost for immoral behaviour. Given these assumptions, it might seem tenable to argue that humans are nonnatural but morally accountable, and AI is nonnatural but not morally accountable.[15]

However, the questions this paper poses are whether such simple assertions, when made in the nineteenth century regarding evolution, or now regarding AI, had or will ever have the effect intended by those making them, namely, of staunching the flow of law and discourse away from faith and human or ecological wellbeing, and also, whether such assertions are a true reflection of what faith actually is. If faith is more than retreating to what is “known,” again and again, then responding to complex questions with rote answers not only erodes relationships by wrongly disenfranchising those who, rightfully, find such answers to be simplistic and unsatisfying, but also does a grave disservice even to those who feel comfortable with these answers. It pretends that to have faith is to short-circuit deep questioning, honesty, and all of the profound and sometimes painful fractures of the “known” and our limitations of knowing that must and do occur as part of any authentic journey of faith. It cheapens faith and turns it into an idol, a shadow of the vibrant, dangerous, and exhilarating undertaking that it actually is.

This paper, by reviewing the historical evidence from the late nineteenth century alongside more recent legal, historical, and theological work, asks us to consider what would happen if those coming from a Christian perspective, instead of resorting to the same reassertions of certainty that many have made to the present day, stepped back from that approach and made room for something else. Instead of firmly shoring up the theological boundaries which they often perceive to be under an onslaught of threats from opposing points of view, even from within their own denominations and movements, what would happen if they attempted to embrace or at least be honest about the very things for which the Christian church purports to be a guide: living and dwelling in unseen and uncertain domains?

The “Natural” Human Mind in Nineteenth-Century Discourse

Darwin, and the many other theorists who argued that humans were “natural” beings, fought an uphill battle against the pervasive idea that humans had been created by God. According to this latter and deeply entrenched view, it was humans’ nonnatural origin (as I refer to it) that set them apart from animals and made humans capable of moral agency and accountability in ways that animals were not. This distinction was no mere theological technicality. European moral and legal frameworks were designed under the assumption that God existed, that humans had some type of spirit or soul that would endure beyond physical life, and that humans would ultimately be held accountable for beliefs, thoughts, words, and deeds within a final divine reckoning or judgment. Moreover, this belief made it possible to situate humans within a stable and dominant position over other elements of nature.

Divine creation of humans and the world conveniently and cohesively explained all four of these elements. If God had created humans with an intellect capable of reason and moral agency or “knowing good from evil” and also endowed them with an eternal soul, then this design left humans bound securely into a web of accountability that could not be easily or ultimately escaped or circumvented. They were (1) morally capable, (2) morally valuable, (3) morally transparent, with the black box of their minds known intimately by God, and therefore (4) morally accountable, if not in this life, then in the life to come. This system, though never static and far from perfect, provided a theoretical, cultural, and legal coherence on which much of English and American society had come to depend by the latter half of the nineteenth century.

Some could accept the idea that human bodies had evolved, but many drew the line at the mind because of the problem it posed for law. Many still held onto the idea that evolution could not explain the human capacity for moral judgment, or what was called “the moral faculty” or “the moral sense,” while those who accepted evolutionary principles were busy attempting to prove that it did indeed explain human morality. Some theories seemed to neatly tie together elements of Christian doctrine as well as evolutionary theory, allowing the option to hedge one’s bets and satisfy both religious and new scientific criteria. Unfortunately, most of these evolutionary explanations for morality spiralled into racism and Social Darwinism.[16]

But as scientific thought and discoveries destabilised beliefs about the origin of the human mind, it seems to have felt, to many, as though the four components of the divine-human legal paradigm were being demolished—as though they were so many legs being kicked out from under the cultural stool. James Rachels hints at this problem when he states, “In traditional morality, the doctrine of human dignity is not an arbitrary principle that hangs in logical space with no support. It is grounded in certain (alleged) facts about human nature … the claim implicit in traditional morality is that humans are morally special because they are made in the image of God, or because they are uniquely rational beings.”[17] Amid these competing views, nineteenth-century society entered a state of suspense as evolutionary theories rippled through public, scientific, and religious discourse. Society hung, suspended, on the debate, while the debate hung suspended on the question: if humans had evolved from animals, and their choices were no more than acts of instinct, then how were they to have value or moral capacity, or be held morally accountable for their actions? In this decades-long debate, figures like T. H. Huxley declared that nature could generate thought and moral agency.[18] Others were highly threatened by this view.[19]

Of course, moral and legal accountability for actions did not disappear. On the contrary, medical and legal professionals were hard at work developing new theories and interventions to diagnose and correct moral failings.[20] But worries about the potential for legal and cultural “degeneracy” and chaos were strongly expressed and debated. The election of the outspoken atheist Charles Bradlaugh to a seat in Parliament in 1880 revealed and provoked fierce opposition that flared up in lectures, from pulpits, and in print.[21] The episode was an explicit reminder that the expectation of divine judgment was quite solidly woven into the fabric of English society, and in particular law and governance. The oath, to be valid, was required to be “binding on the conscience” of the person taking the oath and, as Bradlaugh himself put it, this required that the person have a “fear of eternal punishment” if the oath was broken.[22]

It is not surprising, then, that the countless speeches, essays, and published journals that proliferated during these years debated the shortcomings of either atheism or Christian faith. Supporters of each view sharply criticised and even mocked one another’s logic and ideas. These debates pointed not just to the vague problem of whether humans had a “soul” or a “mind.” Those espousing more traditional views linked the soul’s existence to human moral capability and to the very practical matter of society’s ability to enforce law, collect debts, or forestall the moral and societal chaos they feared would ensue.

For instance, in a lecture to the Church of England Young Men’s Society, entitled, “The Attitude of the Christian Church Towards Atheism,” the speaker, identified only as William Chamberlin,[23] objected to views in favour of “atheism”—no doubt those of Bradlaugh and other Freethinkers—precisely because, if humans were only natural, there was then no God to see or to interrogate the soul. Chamberlin feared that humans would be morally accountable to no one:

After showing that the importance of morality, according to Atheistic reasoning, is confined within very narrow boundaries of space and time, that is to say, it has reference to nothing beyond this life; it is to be tested only by the aggregate amount of happiness which can be realised within these limitations. The writer goes on to say that … in the recesses of his own soul, each man is as much alone as though he were the only conscious thing in the whole universe. No one shall enquire into his inward thoughts, much less shall anyone judge him for them, and so no one except himself can be in any way answerable for them.[24]

While Barton is correct that many of these writings no doubt made liberal use of “quote mining” to offer skewed or straw-man views of their opponents, this practice merely underscores the intensity of the perceived threats to Christian theological traditions triggered by new secular beliefs in the fully natural human. It was not so much whether there was an “actual” threat as the fact that these nineteenth-century writers believed there to be one that shaped the tone and tenor of debates over the fully natural human.[25] These writings asked: if a person had no will or agency and was unable to exercise moral judgment, if humans behaved only by instinct, according to their “programming” or their “animal desires,” then how could society find fault with anyone for their behaviour? Moreover, if no God, no entity, existed that could both endow and “decode” human intent, there would be not only no way of knowing whether a human had an intent to cause harm—there would be no will at all to evaluate. Law would become meaningless. It is worth noting here that these views echo precisely the concerns raised today that AI cannot be held accountable if it has no will but merely operates as it has been programmed to do.[26]

Persistence of the Metaphysical with Adaptations in the Emergence of a Hybrid Legal System

Despite these worries, legal scholar Ngaire Naffine argues that no hollowing out of legal accountability occurred. As a result of these debates over the nature of human will, competency cases sharply increased in the late nineteenth century, wherein “defendants sought to prove that their harmful acts and omissions were unintended, involuntary, or otherwise beyond their control.”[27] Judges responded gradually and pragmatically by, in effect, developing a “default legal person” or “standard man” who was no longer strictly “rational” but could expand to contain whatever emotions or states the defendant manifested. Case by case, courts worked to shore up legal accountability against the backdrop of new theories of the natural human and a failing belief in rationality. But Naffine states that the descent toward Social Darwinism, in part fuelled by works by others such as Herbert Spencer, and the horrific pursuit of eugenics in the late nineteenth and early twentieth centuries culminating in the Jewish Holocaust, ultimately triggered a retreat from the possibilities of treating humans as fully natural under law.[28]

Humans in law have maintained the same metaphysical status they held for centuries, through the intentionally vague concept of the “sanctity” of human life, promoted in human rights movements after World War II. Between the twelfth and the nineteenth centuries, however, the legal system adapted to emerging notions of human reason in one other crucial area of law: evidence. The church and state gradually integrated the concept and procedures for evidence-gathering and analysis into the legal system. Evidence was gathered and weighed alongside the continued use of torture and oath-taking in an effort to contend with the unseen and uncertain nature of the human mind and intent.[29] A hybrid legal system thus emerged, one that blended beliefs in the older metaphysical system with newer ideas promoting reason and the natural human. It may have been the extreme slowness of this transition that gave the law and society time to adapt. It is beyond the scope of this article to recount this adaptation in greater detail. Here it is only important to note that, long before the nineteenth century, the European legal system had already laid much of the groundwork that made it possible for writers like Huxley to finally posit a complete break between human intellect and divine origin—that is, a fully natural human—without provoking a legal crisis. However, this adaptation of legal frameworks toward dependence on human reason was culturally incomplete, and churches, or those still adherent to the church’s moral and theological frameworks, were least prepared to cope.

Relocating the “Real” Divide from “Artificial” vs “Natural” to Certainty vs Uncertainty

Reviewing this history of the late-nineteenth-century problem of the human mind and morality reveals many parallels in current debates over AI governance. The anxieties expressed by nineteenth-century individuals over the impossibility of governing an entity that has no will or “intent” and whose decisions and thought processes cannot be overseen closely mirror the anxieties expressed now about AI. While it may be easy to dismiss late-nineteenth-century concerns with the assertion that those anxieties were based on beliefs in an unseen spiritual domain that was not “real,” such dismissiveness would be a mistake, for two reasons. First, the problem of governing unseen minds, whether human, corporate, or digital, was just as much a real legal and governance problem then as it is today. Distinguishing those who behaved negligently or maliciously from those whose actions were accidental, or those who tell the truth from those who lie, is an intractable issue that cannot be lightly set aside without doing damage to ideals of justice. Therefore, the facts of late-nineteenth-century attempts to accommodate newer scientific discoveries regarding humans provide not just an interesting case study in how we are handling these problems today but also form the legal foundation from which we are working. Nineteenth-century notions regarding the unseen human mind and will constrain our current legal options, setting the assumptions and bounds that AI is now destabilising. So, even, and especially, if we wish to revise these legal foundations, a firm review and grasp of their historical and cultural development will assist us in that task.

Second, regardless of whether nineteenth-century notions of an immortal soul subject to eternal judgment are real or imagined, the belief in this system, though by many accounts cruel and unethical, appealed to many because of its stabilising effect. The nineteenth-century writers who opposed evolution and secularism did so in part because they believed that their ideas regarding the human mind and soul were true, but also because they believed that, as long as most members of society believed they were true, there would be sufficient incentives to ensure social order and control. It was this narrative that these writers feared was unravelling.

We can apply these two points to current problems in governing AI. First, as with humans, we now have developed an entity that can autonomously produce various effects at scale, but without a moral conscience to inform its decisions.[30] Furthermore, even if it can be programmed to mimic or adhere to moral standards of human behaviour,[31] its decision-making eludes human oversight, as do the causal links between its decisions and the harms that may occur. In both respects, it falls short of legal thresholds for liability. This was precisely the problem faced by late nineteenth-century societies transitioning in their understanding of human minds and morality. The “creative” solution to this problem leading up to the nineteenth century had been to embrace a narrative of the immortal and accountable soul. Evidence-gathering to prove facts and intent has proven to be somewhat effective, albeit highly problematic, in governing humans. But, even so, the legal system has not found a way to wholly depart from notions of human value as rooted in metaphysical explanations, and so currently leans on both metaphysics of value and intent, as well as evidence, to function. Our problem currently is that neither of these methods will work with AI, and we have no clear device, whether narrative or legal, with which to replace them.

To be clear, liability typically depends on two things: the ability to anticipate foreseeable harms that AI technologies may cause, and the ability of other parties, i.e., victims or the state, to prove through evidence that the responsible party failed to foresee a harm that a “reasonable” person would have foreseen or, in either event, failed to take appropriate steps to prevent the harm or mitigate the risk of its occurring. Regarding the first requirement, a responsible party must be present who can do the anticipating or foreseeing and mitigating. Currently it is not clear which human, corporate, or state parties would be responsible. Assuming, hypothetically, that any or all of the developers, distributors, regulators, and users may have some part as a responsible party, the potential harms related to AI are often not something that can be reasonably anticipated, because the “ultimate purpose of [AI technology] is to function in an unpredictable manner,” that is, to continue learning or making decisions on its own.[32] Regarding the second requirement, even if those responsible for AI had foreseen the harm, they could deny it, blame one another, and frustrate attempts by victims or the state to prove in litigation that the responsible parties had foreseen, or should have foreseen, the harm but had failed to prevent it. Also, since AI has no will of its own, and merely follows its programming to behave unpredictably, AI lacks moral culpability. This dilemma potentially leaves victims and the state with uncertain and tenuous methods for holding those responsible for AI accountable for any harms AI technologies cause.[33]

Research is underway to develop or strengthen devices to address this AI liability gap, but the solutions proposed may also fall short. It is important to recall that any adequate device for managing AI governance must be narrative as well as legal. This means that to have a stabilising effect within society, a sufficient portion of the population must believe in the story that a particular legal device or set of approaches will work.

It is beyond the scope of this article to evaluate the numerous proposals being developed,[34] but one example will suffice to demonstrate the ways that AI disrupts the use of common mechanisms for managing liability, both narratively and legally: insurance. Insurance may indeed come to be used increasingly over time to manage AI risks.[35] However, there are also arguments against insurance proposals, because of the same factors that cause the AI liability gap in the first place. The difficulty in foreseeing harms associated with AI complicates the task of anticipating risk and calibrating insurance premiums to those risks. These unknowns therefore undermine profitability. Put bluntly, “The major objective of the insurance company is to reduce risk to the insurance company, i.e., the variability in its income from insurance business”[36] (emphasis added). Insurance companies attempt to accommodate higher levels of uncertainty and risk by raising premiums, limiting or terminating coverage, or using litigation or underhanded or even fraudulent methods to deny claims.[37] High premiums can, in turn, discourage individuals or entities from seeking insurance altogether, and this is especially true when they underestimate the risks of harm.[38] Given the scholarship showing that individuals already tend to trust digital technologies too readily and underestimate the risks these technologies pose,[39] findings on low-probability, high-risk environmental events are pertinent for AI as well. Given all of these points, AI insurance would be most likely to cover lower risks that can be more easily measured, proven, and contained within reasonable premiums and compensation, leaving larger-scale and less measurable harms to litigation, as has occurred, for instance, in the decades-long dispute over the meaning and extent of pollution exclusions now being amplified by PFAS claims, or in similar disputes over carve-outs and coverage for cyber-attacks.[40] But liability frameworks may break down in litigation for all of these same reasons.
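The actuarial logic at issue here can be made concrete with a stylised formula. Under what actuaries call the standard-deviation premium principle (the notation below is offered purely as an illustration and is not drawn from the sources cited in this article), an insurer sets a premium as

\text{Premium} = \mathbb{E}[L] + k \cdot \sigma(L)

where E[L] is the expected loss under the policy, σ(L) is the standard deviation of those losses, and k is a loading factor chosen to protect the insurer’s own income from variability. If the harms AI may cause cannot be foreseen, neither E[L] nor σ(L) can be estimated with any confidence, and the insurer’s remaining options are precisely those just described: raise k, cap or exclude coverage, or decline to write the policy at all.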

While many are arguing that new laws and regulations must be passed to address the potential and current harms caused by the proliferation of AI and digital technologies, the enforcement of these laws depends on liability frameworks, which, as shown, are likely to fail. But passing new regulations and laws governing the companies responsible for developing and distributing AI may itself be difficult, at least in the US. Some legal scholars argue that Supreme Court case law regarding First Amendment rights has expanded protections of speech and freedom of conscience and religion. However, they predict that this expansion of religious protections will be easily exploited by technology companies protesting regulation.[41]

First, the fraught assumption that a powerful corporation like Google can be construed in any reasonable sense as a human “individual” comparable to Jehovah’s Witness schoolchildren raises again the issue of who and what counts as a legal (and protected) person. Second, law, religious beliefs, and technologies are converging at the same point they did at the end of the nineteenth century, highlighting a larger, underlying issue in the debates over human and AI governance. The real question that seems to be at issue is not the “nature” of the mind to be governed, but that minds are more opaque than governance can bear. The distinction between “natural” and “artificial” beings—whether human, animal, corporate, or machine—may be less than helpful as we work to understand and resolve the larger problem common across each of these entities: uncertainty. It is not “naturalness” or “artificiality” that makes someone or something valuable or governable. Rather, it is uncertainty and doubt about the unknown or unseen parts of these entities, including ourselves, that make justice and governance difficult. It is the unknown, or as Bartlett states regarding the use of the ordeal in older forms of law, “situations in which certain knowledge [is] impossible but uncertainty [is] intolerable,”[42] that is the “real” divide that perplexes us. We can focus on the divide itself, but more importantly, we should examine our own responses to it.

Doubt, Faith, and Relationship as Tools for Navigating Uncertainty

Doubt and Faith as Tools for Orientation in Domains of Uncertainty

If uncertainty and the unseen characterise AI, and we are again faced with the challenge, just as in the nineteenth century, of governing a being or entity that may elude all of our prior frameworks for governance, then would those holding Christian views be better positioned now than they were then to assist with solving this problem? It is a mystery and a tragedy that many who claim to have the most experience, or expertise, in engaging with the unseen, those who might theoretically be among those most capable of assisting society in facing issues fraught with uncertainty and domains of the unseen or unknown, are least able to navigate them. There are some who possess the requisite skill set, but these may in fact be those who have been traditionally less recognised by, or even ostracised from, communities of faith: those who “doubt” or who have a tendency to probe, muse, wander from prescribed tenets, wonder, and generally ask questions that are seen as inimical to faith. But, while these have been common assumptions historically, need this be so?

Faith, while in some cases defined as a set of prescribed tenets, is rooted in an act that is irrelevant and inoperable—cannot be carried out—in contexts where too much certainty exists. Once certainty is attained, faith ceases to function.[43] Faith is only useful in spaces where certainty eludes our grasp. Faith and doubt are both active postures—living and sustained actions—adopted toward uncertainty, not substitutes for certainty or a static state or object finally reached. Because this assertion may appear controversial among many coming from Christian traditions, it may be important to look at this question in the context of several New Testament texts. For instance, Hebrews 11:1 is frequently translated as, “Faith is the assurance/substance of things hoped for, the conviction/evidence of things not seen.” The Greek word for “faith” used in this text is πίστις. This is the same word used in other cases, such as where Jesus says to those who come seeking grace or healing, “Your faith has saved/healed/made you well.”[44] This word and its usage support the notion that faith operates in the space that precedes certainty or the receiving of the object hoped for. If there were no possibility of doubt in these stories, there would be nothing remarkable about having faith, and indeed, faith would not exist.[45]

Along with his arguments opposing the idea of a fully “natural” human, William Chamberlin wrote:

[The atheist] would persuade us that the surest ground of faith is to be reached by doubting all that has gone before; that the soundest believer is he who trusts nothing but his own doubts.

The fact of a man professing disbelief in God implies that he has a control over his belief and is responsible for it. In doing away with freedom of will and moral responsibility the Atheist practically destroys all the moral elements of our life.[46]

As demonstrated in this quote, doubt has frequently been perceived within Christian history in a negative sense, as something to be overcome or defeated, as a threat to faith. But some scholars have questioned these negative framings of doubt within Christian theology. For instance, Schliesser finds that these framings were likely due to improper translation from the original Greek biblical texts, and that in most of these texts, the word διακρίνεσθαι referred not to internal conflict or wavering, or a “double mind” regarding belief (διστάζω), but to the notion of dispute or separation between self and God or self and others, generally enacted with a resolute and haughty attitude.[47] Given that Teresa Morgan has recently argued that first-century Christians used the notion of πίστις (faith) primarily to build relationships between God, Christ, his followers, and the community,[48] Schliesser’s interpretation makes all the more sense: it was not “doubt” but rather haughty “dispute” or rifts that were antithetical to faith and the relationships it was meant to support.

Others have explored less negative understandings of doubt, and demonstrated the acceptance of doubt and uncertainty as aids not only to faith, but to participation by faith communities in interdisciplinary dialogue—with those outside of their faith—regarding complex societal problems.[49] Muller encourages Christians to adopt “[d]oubt as a leading metaphor (not-knowing position)” to imagine alternative stories. He argues, “If theology can retire from the task of defending God, or rather a theistic understanding of God, and ask real research questions with the other disciplines, it can participate in a meaningful way at the interdisciplinary table.”[50] He continues:

Theologians are often perceived as the champions of certainty and belief. But the truth is that the more you dwell in the vicinity of the ultimate questions of life, which is per definition the task of the theologian, the more likely you are to become disoriented. Such disorientation, however, is a prerequisite for the reaching of re-orientation (Brueggemann). But this re-orientation is not the same as regaining old certainties. It is rather finding assurance in the creation of a new identity. This implies a new role for theologians at the interdisciplinary table—no longer as the guardians of religious tradition, but as the ones who can formulate on the one hand the value of the traditions of interpretation but at the same time express doubts about those interpretations.[51]

Even if doubt is painful, difficult, or disorienting, it remains an essential part of the process of orienting oneself in spaces of uncertainty. The pervasive and persistent discomfort with doubt and uncertainty among many who claim Christian worldviews thus impedes their ability to participate meaningfully in discussions regarding topics, such as the governance of AI, that require comfort and proficiency in accepting and navigating doubt, uncertainty, and ambiguity. Moreover, faith, by definition, cannot be coerced. Coercion, or the absence of any possibility of doubt, delegitimises faith by rendering it either moot or inauthentic. So, while doubt has often been seen as that which opposes or precludes faith, the presence of doubt may in fact enable, invigorate, and legitimise it. Faith and doubt both occupy the ground between the known and unknown.

It is therefore remarkable that many elements of the Christian church and establishment in nineteenth-century England, where Darwin’s theories unfolded, attempted to contest science on grounds of certainty, rather than uncertainty. Whereas scientists and secularists by and large claimed that their views were rooted in doubt until proven certain, Christian opponents argued that their faith was certain, and forswore doubt and uncertainty altogether, even though most of the objects of their faith resided in an entirely unseen and spiritual domain. A stronger rhetorical, logical, and even theological stance might have been to contest, or rather welcome, science on grounds of uncertainty, as a counterpart in tasks of discovery. If Christian opponents had argued that they were experts in areas of the unseen or unproven, and that faith and doubt were their principal modes for exploring uncertainty, they might have met science and secularism on more conciliatory and reasonable grounds. Not only that, but they would have been wise to recognise that the very essence and practice of the faith they professed was only possible within domains of uncertainty.

Morgan’s work, as well as commentary on it, suggests that the transition toward propositional and cognitive faith, or the interiority of faith, did not emerge until the second through fifth centuries. The nineteenth-century understanding of faith as an unwavering acceptance of certain tenets was thus an unnecessarily ossified and inflexible view that did not characterise all of Christian history up to that point.[52] Other views were possible, while still remaining well within the bounds of Christian life and theology. However, in attempting to cast the unseen or not-yet-seen as certain and to stamp out doubt, faith in any active sense of the term died. In its place, many elements of English and American Christian culture attempted—through sermons, speeches, essays, and votes—to erect an edifice to safeguard the wrong ground. Christianity had not been displaced as a guide in processes of discovery. To the extent that it was displaced, it displaced itself. By isolating itself from doubt and uncertainty, it exiled itself also from relationship with the wider world. And science and secularism, not faith, became the new guardians of wonder, of mystery, of the unknown and unseen, of worlds beyond and worlds within.

Relationship as Essential to Navigating Uncertainty

And yet, it is relationship that enables us to navigate uncertainty. While the tone and words used in nineteenth-century debates over mind, matter, and morality may appear to be tangential to the philosophical, theological, and scientific subjects they debated, in fact, the real question and indeed the answers they sought both inhered in and were lived out, or rather snuffed out, as they drew ever-deepening lines between themselves and their opponents. Like rivers cutting canyons, their biting words traced narrative lines over and over, carving chasms in the cultural landscape. That landscape is the legacy those generations left. While a significant amount of ink has been spilled over the past century and a half debating whether there is a theoretical, theological, philosophical, or historical conflict between science and faith, this question cannot be strictly or sufficiently dealt with in the abstract. The tragic fact will always remain that in the late nineteenth century, at a critical moment in history, through words, debates, purges, and power struggles, these societies constructed a conflict, a rift between relationships, where none need have existed, and where for the most part, none had existed before.[53] That rift remains.[54]

If the Christian notion of faith or πίστις first rested in its role in constructing relationships, as Morgan has argued, it is ironic and tragic that the evangelical church has handled that central task so badly. Indeed, it would seem that if relationship, not creed, is at the heart of faith, then it has failed in this respect, and indeed has come close—at least among some of the more conservative branches of the Christian church and church scholarship—to forfeiting the privileges that relationship supports: not only the privilege of being trusted by those outside its walls to listen to their views, including their doubts and criticisms, but also the privilege of being listened to. In foreclosing doubt and making faith the province of certainty, many branches of Christianity have foreclosed conversation. This social conflict, even if unnecessary and rooted in poor translations and misunderstandings, and even if it has only grown over the past century and a half, is still adversely affecting society’s attempts to develop feasible and ethical approaches to the world’s most serious challenges, such as the development, use, and governance of artificial intelligence.

If we do not wish to simply repeat, with the development and governance of AI, the same ineffective path followed in the late nineteenth and early twentieth centuries, with the potential for the same genocidal ends, then instead of redrawing lines between artificial and natural existence, and instead of attempting to locate the origins and content of “mind” or “intent” in humans or AI, we might reorient the quest for just ends around relationship, which must include making room for doubt, and for those who are good at doubting.

New technologies, such as artificial intelligence, bring with them several uncertainties. First, there is uncertainty about how humans are developing, deploying, or using AI. They may do so in ways that disproportionately harm large segments of society while benefitting others, but many of their decisions and actions cannot be definitively seen or known. Second, there is uncertainty about how AI makes decisions. Third, there are additional uncertainties regarding what constitutes consciousness, which tie together questions of who or what “counts” as a legal person (a corporation, an embryo, a foetus, an animal, an algorithm, a cyborg) with claims to rights and protections, and whether AI can or will arrive at a level of capability or consciousness that can justify its inclusion in this category. Fourth, all three of these types of uncertainties lead to uncertainty about what sort of revised legal frameworks could be devised under which AI would be legible, and how we might augment current systems for human and corporate law with a revised framework for governing other types of consciousness or intelligence.[55] There are strong reasons to see these uncertainties as a threat to governance, or to working toward ethical or moral responses in law.

However, it is also possible that these uncertainties offer an opportunity. The uncertainties inherent in science and technologies that have advanced over recent decades, including AI, provide a fresh opportunity for people of faith to reposition themselves as those who are not, as Muller called them, “guardians of religious tradition.”[56] Instead, they might follow Catherine Keller’s suggestion to “apply to theology, perversely, this antitheological mandate of Bertrand Russell: ‘To teach how to live without certainty, and yet without being paralysed by hesitation’.”[57] Her proposal of faith as “hypothesis” as well as her emphasis on relationship may be useful tools alongside doubt for orienting within the profound new spaces of uncertainty that have opened up through fields such as AI, neurotechnology, and quantum mechanics. There is a need to make a place within faith for those who doubt, not only in the more hopeful or committed sense as articulated by Keller, but doubt in all shades.

Many are “disenfranchised” from their family and faith communities by their doubt, which, incidentally, is exactly what occurred to Charles Bradlaugh. After he observed discrepancies between the Gospels and the Thirty-Nine Articles of the Anglican Church while teaching Sunday school as a young teenager, his priest suspended him from teaching, and ultimately enlisted his employers, who also employed his father, to threaten him with the loss of his job if he would not recant his doubts. Faced with this moral dilemma, the young Bradlaugh chose to stand by his “honest doubt” and left both his job and his home. These exchanges set events in motion, as Bradlaugh, who acquired “‘an almost obsessive hatred of Christianity,’ directed Secularism into a brash militant force, intent on exposing the obvious and demonstrable errors of fact in religious claims.”[58] Many late-nineteenth-century texts on connections between science and religion were fond of including the quote on “honest doubt” from Tennyson’s famous poem, as they attempted to make room for doubt by suggesting that a faith untested by doubt was less real, less strong, and not thoroughly one’s own.[59] Bradlaugh attempted initially to knit a narrative and a community where doubt and faith could coexist meaningfully within domains of uncertainty. When this vision was harshly rejected, Bradlaugh and his followers formed new narratives and communities of their own, based on doubt.[60]

As Bradlaugh’s case demonstrates, doubt can occur as a part of “coming of age,” thinking through various texts or beliefs, or tragedy or betrayal that may distort or shatter one’s worldview, specifically, the belief narratives that make sense of injustice.[61] Doubt, regardless of its origin, is a necessary part of continually building and rebuilding what one believes to be true about the world. Though doubt itself is often triggered or accompanied by loss, or may be experienced as a form of loss of belief, the loss is amplified by the further loss of disenfranchisement from one’s community at the very moment when what is needed is a community that will journey through the doubt and loss together as a path towards reconstructing a coherent narrative about the world. Doubt, as much as faith, is an invitation to relationship. And as Bradlaugh’s case further demonstrates, whether and how these invitations to relationship are received by those in the relevant community have had, and still have, intense and far-reaching consequences for society.

Although AI may appear to present a claim or promise of ever-increasing knowing at unimaginable scales, this claim conflates predictability with knowledge. That is, like faith, predictability, in a statistical sense, is generally only useful in domains of uncertainty. Where all is known, no prediction is necessary. AI guesses, but it never knows. It may predict statistical relationships between dependent and independent variables, allowing it to guess which words it should “say” or communicate to mimic human speech, or which job applicants are most likely to be of interest to an employer, or which individuals awaiting trial are more or less likely to commit new crimes if released on bail. But it will never know for sure. And in the process of guessing, it will often inflict intense harm on those to whom these statistical “guesses” are applied; that is, AI is frequently wrong in ways that privilege some while intensifying the suffering of others. AI is being increasingly used in domains of uncertainty, such as the examples above, that have opened up through the gradual erosion of human relationship and community. And the more AI is deployed into these complex social contexts, the more it erodes the relationships and communities that could help to contend with the uncertainty and harm it causes.
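The distinction between predicting and knowing can be stated in stylised statistical terms (a textbook formulation, offered here as an illustration rather than anything drawn from the sources cited). A classifier estimates a conditional probability and outputs the most probable label,

\hat{y}(x) = \arg\max_{y} \hat{P}(y \mid x),

and even a perfect estimator of these probabilities retains the irreducible “Bayes error,”

\mathbb{E}_{x}\left[ 1 - \max_{y} P(y \mid x) \right],

which is greater than zero whenever the observed features x do not fully determine the outcome. In bail, hiring, and similar settings, the available data never fully determine the outcome, so some individuals will necessarily be misjudged; the harms described above are built into the method rather than incidental to it.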

It is often assumed that the role of religious communities and people of faith in addressing technological and cultural change is to serve as a sort of ethical ballast, so that older ideals and values will not be lost. Such ideals and values are presumed, by each group promoting them, to be good. Sometimes, they may be, but this approach does not always have the influence hoped for, and sometimes creates or contributes to new problems as new technologies and possibilities unfold.[62] If by “people of faith” we mean those who adhere to a particular set of “beliefs” and insist that it must be true, then all that is left for them to do is attempt to be society’s ethical ballast, even though much of society itself does not welcome these efforts.

But what if, by “people of faith,” we mean those who are adept at navigating domains of uncertainty, at responding patiently, humbly, creatively, and honestly to the relational invitations that arise from doubt and the unknown, and who thereby forge communities that build narratives that do not break in the face of uncertainty? Like AI’s predictions, our narratives help us cope with the uncertainties of the past, present, and future. What Kirk Wegter-McNelly states regarding hypotheses applies to the larger narratives we dwell within:

We inhabit our more consequential and fundamental guesses just as animals inhabit their nests: we leverage them as places of felt order and safety from which we can venture out and attempt further understanding. In the existential arena, hypotheses shield us from the ever threatening chaos and randomness of existence.[63]

We must build new narratives to navigate the uncertainties that loom over us in the development, use, and governance of AI. Our narratives, and our relational ability to build them, must be capable of weathering the uncertainty of larger questions, for instance, about the nature of humanity. This might be a moment for softening, for “sidestepping … the grumpy certitude of various self-indulgent orthodox theologies.”[64] It might be possible, this time, to approach things—and one another—differently than nineteenth-century Christian societies did when they encountered what to many was the terrifying uncertainty of a (human) being untethered from the “soul” and with it, moral and legal structures. Those coming from a Christian perspective now might embrace doubt and faith as invitations to relationship and community, all of which are tools for imagining new narratives and devices of law and justice that can accommodate all kinds of minds.

Conclusion

It may be that the pressing issues of artificial intelligence are now forcing a reevaluation and a return to the unfinished work of updating the legal system to account for artificial entities, including our future selves, but also to the much harder work of learning how to talk to one another about these questions. The contentious and stinging divides of “creation” or “artificiality” and the “natural” world did not serve nineteenth-century societies well. Their most strenuous pronouncements against new scientific knowledge for its potential to break free of legal and moral boundaries did nothing to prevent or even slow the chilling descent into eugenics and genocide in the twentieth century. And such approaches do as little for societies today.

There are steep costs in creating and defending false dichotomies. The divide between “artificial” and “natural” is proving to be unhelpful and meaningless now as society attempts to draw boundaries between where the “human” ends and “AI” begins, and in fact, attempting to assert these boundaries and frame law and discourse within them may only boomerang to undercut principles of justice. Theoretical and social divides derail meaningful discussion and the ability to disagree well—in ways that preserve relationships rather than ruin them. These rifts will only further delay our development of more reasonable, workable, and ultimately just methods of governance. Ironically, or perhaps predictably, treating others as less than human due to the “objectionable” views they hold may in fact parallel and fuel the very dehumanisation of humanity by AI, technology, capitalism, or culture, which many hope to prevent.[65] More than living with uncertainty, we must learn to live with one another.

Acknowledgment:

The author thanks the editors, Marius Dorobantu and Fraser Watts, as well as Ryan Burnett, Doru Costache, Andrew Jackson, and Harris Wiseman for their detailed reading and discussion of the paper.

The author reports there are no competing interests to declare.

Received: 24/02/24 Accepted: 11/05/24 Published: 18/09/24



[1] See Yavar Bathaee, “The Artificial Intelligence Black Box and the Failure of Intent and Causation,” Harvard Journal of Law and Technology 31:2 (2018): 889–938; Jean-François Bonnefon et al., “The Moral Psychology of Artificial Intelligence,” Annual Review of Psychology 75:1 (2024): 653–675, https://doi.org/10.1146/annurev-psych-030123-113559; Joris Graff, “Moral Sensitivity and the Limits of Artificial Moral Agents,” Ethics and Information Technology 26:1 (2024): 13, https://doi.org/10.1007/s10676-024-09755-9; Martin Miernicki and Irene Ng, “Artificial Intelligence and Moral Rights,” AI & Society 36:1 (March 2021): 319–329, https://doi.org/10.1007/s00146-020-01027-6; Julian Savulescu and Hannah Maslen, “Moral Enhancement and Artificial Intelligence: Moral AI?” in Beyond Artificial Intelligence, ed. Jan Romportl et al., Topics in Intelligent Engineering and Informatics 9 (Cham: Springer International Publishing, 2015), 79–95, https://doi.org/10.1007/978-3-319-09668-1_6.

[2] A. Bonezzi et al., “The Human Black-Box: The Illusion of Understanding Human Better Than Algorithmic Decision-Making,” Journal of Experimental Psychology: General 151:9 (2022): 2250–2258, https://doi.org/10.1037/xge0001181.

[3] Timothy L. O’Brien and Shiri Noy, “Traditional, Modern, and Post-Secular Perspectives on Science and Religion in the United States,” American Sociological Review 80:1 (2015): 92–115, https://doi.org/10.1177/0003122414558919.

[4] See Carole M. Cusack, “The End of the Human? The Cyborg Past and Present,” Sydney Studies in Religion, special issue: The Dark Side (2004): 223–234, https://openjournals.library.sydney.edu.au/SSR/article/view/213; Chris Hables Gray, Cyborg Citizen: Politics in the Posthuman Age (London and New York: Routledge, 2014), https://doi.org/10.4324/9780203949351; N. Katherine Hayles, How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics (Chicago, IL: University of Chicago Press, 1999), https://doi.org/10.7208/chicago/9780226321394.001.0001; Jelena Guga, “Cyborg Tales: The Reinvention of the Human in the Information Age,” in Beyond Artificial Intelligence, ed. Jan Romportl et al., Topics in Intelligent Engineering and Informatics 9 (Cham: Springer International Publishing, 2015), 45–62, https://doi.org/10.1007/978-3-319-09668-1_4; Donna Haraway, Simians, Cyborgs, and Women: The Reinvention of Nature (New York: Routledge, 1990), https://doi.org/10.4324/9780203873106; Aleksandra Łukaszewicz Alcaraz, Are Cyborgs Persons? An Account of Futurist Ethics (Cham: Springer International Publishing, 2021), https://doi.org/10.1007/978-3-030-60315-1.

[5] Both terms are used here to acknowledge the contentiousness of terminology, where abortion rights advocates and abortion opponents insist on one term and eschew the other in their rhetoric.

[6] See David Schlosberg, Defining Environmental Justice: Theories, Movements, and Nature (Oxford: Oxford University Press, 2007), https://doi.org/10.1093/acprof:oso/9780199286294.001.0001; Stefan Lorenz Sorgner, We Have Always Been Cyborgs: Digital Data, Gene Technologies and an Ethics of Transhumanism (Bristol: Bristol University Press, 2021), https://doi.org/10.46692/9781529219234. See also The Cyborg Foundation: https://tinyurl.com/ypfxaax8 (accessed 15 June 2023).

[7] John C. Coffee, “‘No Soul to Damn: No Body to Kick’: An Unscandalized Inquiry into the Problem of Corporate Punishment,” Michigan Law Review 79:3 (1981): 386, https://doi.org/10.2307/1288201; Alison Cronin, Corporate Criminality and Liability for Fraud (Abingdon and New York: Routledge, 2018), https://doi.org/10.4324/9781315179605.

[8] Throughout this paper, the term “we” will refer loosely to humans—not to gloss over debates, such as those just raised, suggesting that humans are in many ways hard to define and not so human as we might imagine, nor to suggest that all humans agree on these perspectives, but, for the purposes of this discussion, to distinguish humans from more substantially or tentatively nonhuman entities or agents such as corporations and AI.

[9] See Joanna J. Bryson et al., “Of, For, and By the People: The Legal Lacuna of Synthetic Persons,” Artificial Intelligence and Law 25:3 (2017): 273–291, https://doi.org/10.1007/s10506-017-9214-9; Robert Van Den Hoven Van Genderen, “Do We Need New Legal Personhood in the Age of Robots and AI?” in Robotics, AI and the Future of Law, ed. Marcelo Corrales et al., Perspectives in Law, Business, and Innovation (Singapore: Springer Singapore, 2018), 15–55, https://doi.org/10.1007/978-981-13-2874-9_2; Lawrence Solum, “Legal Personhood for Artificial Intelligences,” North Carolina Law Review 70:4 (1992): 1231.

[10] Guga, “Cyborg Tales.”

[11] See Eugene J. Chesney, “The Concept of Mens Rea in the Criminal Law,” Journal of Criminal Law and Criminology (1931-1951) 29:5 (1939): 627, https://doi.org/10.2307/1136853; Paul Robinson, “A Brief History of Distinctions in Criminal Culpability,” Hastings Law Journal (1980), https://scholarship.law.upenn.edu/faculty_scholarship/631.

[12] See Peter M. Asaro, “A Body to Kick, but Still No Soul to Damn: Legal Perspectives on Robotics,” in Robot Ethics: The Ethical and Social Implications of Robotics (Cambridge, MA: MIT Press, 2012), 169–186; Bathaee, “The Artificial Intelligence Black Box”; Mark A. Lemley and Bryan Casey, “Remedies for Robots,” University of Chicago Law Review 86:5 (2019): 1311–1396, https://chicagounbound.uchicago.edu/uclrev/vol86/iss5/3; Omri Rachum-Twaig, “Whose Robot Is It Anyway? Liability for Artificial-Intelligence-Based Robots,” University of Illinois Law Review 2020:4 (2020): 1141–1176.

[13] O’Brien and Noy, “Traditional, Modern, and Post-Secular Perspectives.”

[14] Dylan Walsh, “Would You Sell Your Extra Kidney?” Wired, https://www.wired.com/story/kidney-donor-compensation-market/ (accessed 29 January 2023).

[15] There are other arguments that could be made from both religious and secular perspectives about the origins of morality or whether AI has, or can have, moral capability, but this article focuses on the assumption, made in the nineteenth century and now, that a being’s “naturalness,” “createdness,” or “artificiality” is an essential factor in whether that being can be morally functional or held morally accountable.

[16] See Jonathan Marks, “Why Be against Darwin? Creationism, Racism, and the Roots of Anthropology,” American Journal of Physical Anthropology 149:S55 (2012): 95–104, https://doi.org/10.1002/ajpa.22163; Heidi Rimke and Alan Hunt, “From Sinners to Degenerates: The Medicalization of Morality in the 19th Century,” History of the Human Sciences 15:1 (2002): 59–88, https://doi.org/10.1177/0952695102015001073.

[17] James Rachels, Created from Animals: The Moral Implications of Darwinism (New York, NY: Oxford University Press, 1990), https://doi.org/10.1093/oso/9780192177759.001.0001.

[18] See T. H. Huxley, Man’s Place in Nature (London: Watts and Company, 1913); George J. Romanes, Mental Evolution in Man: Origin of Human Faculty (London: Kegan Paul, Trench and Co., 1888).

[19] John H. Carter, The Voice of the Past; Written in Defence of Christianity and the Constitution of England, with Suggestions on the Probable Progress of Society, and Observations on the Resurrection of the Body; Being a Reply to the Manifesto of Mr. Robert Owen (London: S. Horsey, 1840). These questions further complicated Enlightenment debates positing “the rational man” who operated on the basis of free will as the foundation of civil society. In the period leading up to Darwin’s publications there had been an increased focus on intent and state of mind in culture and criminal law. Individuals were increasingly seen as rational subjects who were responsible for restraining their passions, not just in their behaviour, but in their minds. This increased focus on rationality and intent in some ways set the stage for a shift to seeing humans as natural. But for many in England and the US the notion of individual moral responsibility was inseparable from the belief that humans were “made in the image of God.” See Susanna L. Blumenthal, Law and the Modern Mind: Consciousness and Responsibility in American Legal Culture (Cambridge, MA: Harvard University Press, 2016), https://doi.org/10.4159/9780674495517; Martin J. Wiener, Reconstructing the Criminal: Culture, Law, and Policy in England, 1830-1914 (Cambridge: Cambridge University Press, 1990).

[20] See Blumenthal, Law and the Modern Mind; Rimke and Hunt, “From Sinners to Degenerates”; Wiener, Reconstructing the Criminal.

[21] See “House Of Commons, Monday, July 4,” Times, July 5, 1881, The Times Digital Archive; “House Of Commons, Tuesday, April 3,” Times, April 4, 1883, The Times Digital Archive.

[22] Walter L. Arnstein, “The Bradlaugh Case: A Reappraisal,” Journal of the History of Ideas 18:2 (1957): 254, https://doi.org/10.2307/2707628.

[23] This is not the American Mormon William Henry Chamberlin, who would have been only about twelve years old at the time of these writings. It is possible that this was the same William Chamberlin who lived on a local estate in Adderbury, and who was commonly referred to in small references and advertisements as though he were a man of some importance or standing. Further archival work, outside the scope of this paper, might be needed to confirm this identity or locate any other similar writings by the author. See Banbury Historical Society, Cake and Cockhorse, Autumn/Winter 2016, https://banburyhistoricalsociety.org/uploads/pdf/20/20-04.pdf; “Gun Licenses,” Times, July 4, 1879, The Times Digital Archive.

[24] William Chamberlin, “The Attitude of the Christian Church Towards Atheism: A Lecture Delivered before the Church of England Young Men’s Society,” in The Champion of the Faith Against Current Infidelity, ed. James McCann (London: Wade & Company, 1883). See Timothy Larsen, Crisis of Doubt: Honest Faith in Nineteenth-Century England (Oxford: Oxford University Press, 2006), https://doi.org/10.1093/acprof:oso/9780199287871.001.0001; Stuart Mathieson, “The Victoria Institute 1865–1932: A Case Study in the Relationship between Science and Religion” (Belfast: Queen’s University Belfast, 2018), https://tinyurl.com/2jp8e39p; J. McGrigor Allan, “Soul and Body: A Metaphysical Essay,” in The Champion of the Faith Against Current Infidelity, ed. James McCann (London: Wade & Company, 1882).

[25] Michael D. Barton, “Quote-Mining: An Old Anti-Evolutionist Strategy,” Reports of the National Center for Science Education 30:6 (2010), https://ncse.ngo/quote-mining-old-anti-evolutionist-strategy.

[26] Bathaee, “The Artificial Intelligence Black Box.”

[27] Blumenthal, Law and the Modern Mind. Whether and to what degree humans are capable of developing intent remains vigorously debated even today, with recent challenges based on claims that neuroscience has proven that intent and therefore culpability are illusions. For a good summary of the debates and an argument that challenges these views, see Stephen J. Morse, “Internal and External Challenges to Culpability,” Arizona State Law Journal 53:2 (2021): 617–654.

[28] See Ngaire Naffine, Law’s Meaning of Life: Philosophy, Religion, Darwin and the Legal Person (Oxford and Portland: Hart Publishing, 2009), 100–119 (chapter “The Divine Spark: The Principle of Human Sanctity”), https://doi.org/10.5040/9781472564658.

[29] See Talal Asad, “Notes on Body Pain and Truth in Medieval Christian Ritual,” Economy and Society 12:3 (1983): 287–327, https://doi.org/10.1080/03085148300000022; Yasha Renner, “Alien Ethics: Testing the Limits of Absolute Liability,” Liberty University Law Review 7:3 (2016), https://digitalcommons.liberty.edu/lu_law_review/vol7/iss3/5; Edward Joseph White, Legal Antiquities: A Collection of Essays Upon Ancient Laws and Customs (St. Louis, MO: F. H. Thomas Law Book Company, 1913), see chapter “Trial by Ordeal.”

[30] Again, there is much research and debate on this issue, but the assumption that humans can form intent and that AI cannot currently prevails in law.

[31] See Bonnefon et al., “The Moral Psychology of Artificial Intelligence”; Brian Christian, The Alignment Problem: Machine Learning and Human Values (New York: W. W. Norton & Company, 2020); Michael Kearns and Aaron Roth, The Ethical Algorithm: The Science of Socially Aware Algorithm Design (Oxford and New York: Oxford University Press, 2019).

[32] Rachum-Twaig, “Whose Robot Is It Anyway?”

[33] One good example of this problem is the fatal crash in Arizona, in which an Uber self-driving test vehicle with a human backup driver hit and killed a pedestrian. Lauren Smiley, “The Legal Saga of Uber’s Fatal Self-Driving Car Crash Is Over,” Wired, https://tinyurl.com/mrx5b992 (accessed 17 July 2024). Civil cases are pending over the deaths of other victims of self-driving car accidents. These cases will continue putting these liability frameworks to the test, and an investigation of Tesla led to a recent recall of more than two million vehicles. NHTSA, “Part 573 Safety Recall Report, 23V-838,” 12 December 2023, https://tinyurl.com/2mvdpw36.

[34] Emile Loza De Siles, “Soft Law for Unbiased and Nondiscriminatory Artificial Intelligence,” IEEE Technology and Society Magazine 40:4 (2021): 77–86, https://doi.org/10.1109/MTS.2021.3123729.

[35] Anat Lior, “Insuring AI: The Role of Insurance in Artificial Intelligence Regulation,” Harvard Journal of Law & Technology 35:2 (2022): 467–530.

[36] Mamata Swain, “Redesigning Crop Insurance for Coping with Climate Change,” Indian Journal of Applied Economics and Business 5:1 (2022): 107–128, https://doi.org/10.47509/IJAEB.2023.v05i01.06.

[37] Howard Kunreuther and Mark Pauly, “Neglecting Disaster: Why Don’t People Insure Against Large Losses?” Journal of Risk and Uncertainty 28:1 (2004): 5–21, https://doi.org/10.1023/B:RISK.0000009433.25126.87; Lena Kabeshita et al., “Pathways Framework Identifies Wildfire Impacts on Agriculture,” Nature Food 4:8 (2023): 664–672, https://doi.org/10.1038/s43016-023-00803-z.

[38] See Robert D. Chesler et al., “How Insurance Companies Defraud Their Policyholders, and What Courts and Legislators Should Do About It,” Journal of Emerging Issues in Litigation 3:3 (2023): 213–226; Nir Kshetri, “The Evolution of Cyber-Insurance Industry and Market: An Institutional Analysis,” Telecommunications Policy 44:8 (2020): 102007, https://doi.org/10.1016/j.telpol.2020.102007; Max Tesselaar et al., “Regional Inequalities in Flood Insurance Affordability and Uptake under Climate Change,” Sustainability 12:20 (2020): 8734, https://doi.org/10.3390/su12208734.

[39] See Nikola Banovic et al., “Being Trustworthy Is Not Enough: How Untrustworthy Artificial Intelligence (AI) Can Deceive the End-Users and Gain Their Trust,” Proceedings of the ACM on Human-Computer Interaction 7:CSCW1 (2023): 1–17, https://doi.org/10.1145/3579460; Shara Monteleone, “Addressing the ‘Failure’ of Informed Consent in Online Data Protection: Learning the Lessons from Behaviour-Aware Regulation,” Syracuse Journal of International Law and Commerce 43:1 (2015): 69–120; Nora Moran, “Illusion of Safety: How Consumers Underestimate Manipulation and Deception in Online (vs. Offline) Shopping Contexts,” Journal of Consumer Affairs 54:3 (2020): 890–911, https://doi.org/10.1111/joca.12313; Janice Tsai et al., “What’s It To You? A Survey of Online Privacy Concerns and Risks,” SSRN Electronic Journal, 2006, https://doi.org/10.2139/ssrn.941708.

[40] See Frank Cremer et al., “Cyber Exclusions: An Investigation into the Cyber Insurance Coverage Gap,” in 2022 Cyber Research Conference-Ireland (Cyber-RCI) (IEEE, 2022), 1–10, https://doi.org/10.1109/Cyber-RCI55324.2022.10032678; John N. Ellison et al., “Recent Developments in the Law Regarding the Absolute and Total Pollution Exclusions,” Environmental Claims Journal 13:4 (2001): 55–112; Kyle P. Konwlns and Olayinka Ope, “PFAS: The Impact of Forever Chemicals,” Brief 51:3 (2022); Charlie McCammon, “Insurers Will Likely Revisit ‘Nation State’ Cyber Exclusions after Court Ruling,” WTW (21 November 2023), https://tinyurl.com/3bsd5bw5; Joy Momin, “Navigating Ransomware Attacks in the United States,” TortSource 26:3 (2024): 21–23; Carla Ng et al., “Addressing Urgent Questions for PFAS in the 21st Century,” Environmental Science & Technology 55:19 (2021): 12755–12765; Gary Svirsky et al., “Current Trends in Application of the Absolute Pollution Exclusion in CGL Policies: Cross-Border Comparison between New York and Canadian Laws,” Journal of Environmental Law and Litigation 34 (2019): 97–110; Josephine Wolff, “The Role of Insurers in Shaping International Cyber-Security Norms about Cyber-War,” Contemporary Security Policy 45:1 (2024): 141–170, https://doi.org/10.1080/13523260.2023.2279033. See also Environmental Insurance Litigation: Law and Practice, Vol. 2 (2024); Superior Court of New Jersey Appellate Division, Docket No. A-1879-21, Merck and Co., Inc. & International Indemnity, Ltd. v. Ace American Insurance Company, et al. (2023).

[41] Rebecca Aviel et al., “From Gods to Google,” Yale Law Journal, Forthcoming 2024, https://doi.org/10.2139/ssrn.4742179.

[42] Edward Peters et al., “Trial by Fire and Water: The Medieval Judicial Ordeal,” The American Journal of Legal History 33:2 (1989): 158, https://doi.org/10.2307/845953.

[43] J. Kellenberger, “Three Models of Faith,” International Journal for Philosophy of Religion 12:4 (1981): 217–233, https://doi.org/10.1007/BF00137173.

[44] Matthew 9:22; Mark 5:34; Luke 7:50; Luke 18:42.

[45] Romans 8:24–25. See Kellenberger, “Three Models of Faith.”

[46] William Chamberlin, “Atheism Unpractical,” in The Shield of Faith, ed. George Sexton, vol. 7 (London: S. W. Partridge and Company, 1883).

[47] See B. Schliesser, “‘Abraham Did Not “Doubt” in Unbelief’ (Rom. 4:20): Faith, Doubt, and Dispute in Paul’s Letter to the Romans,” The Journal of Theological Studies 63:2 (2012): 492–522, https://doi.org/10.1093/jts/fls130. See also Bonnefon et al., “The Moral Psychology of Artificial Intelligence.”

[48] Teresa Morgan, Roman Faith and Christian Faith: Pistis and Fides in the Early Roman Empire and Early Churches (Oxford: Oxford University Press, 2015), https://doi.org/10.1093/acprof:oso/9780198724148.001.0001.

[49] See Hugh F. Crean, “Faith and Doubt in the Theology of Paul Tillich,” Bijdragen 36:2 (1975): 145–164, https://doi.org/10.1080/00062278.1975.10597056; Daniel Howard-Snyder and Daniel J. McKaughan, “The Problem of Faith and Reason,” in The Cambridge Handbook of Religious Epistemology, ed. Jonathan Fuqua et al. (Cambridge: Cambridge University Press, 2023), 96–114, https://doi.org/10.1017/9781009047180.009; Daniel Howard-Snyder and Daniel J. McKaughan, “Faith and Resilience,” International Journal for Philosophy of Religion 91:3 (2022): 205–241, https://doi.org/10.1007/s11153-021-09820-z; Morgan, Roman Faith and Christian Faith; Julian C. Muller, “(Practical) Theology: A Story of Doubt and Imagination,” Verbum et Ecclesia 44:1 (2023), https://doi.org/10.4102/ve.v44i1.2650; Schliesser, “‘Abraham Did Not “Doubt” in Unbelief’.”

[50] Muller, “(Practical) Theology.”

[51] Muller, “(Practical) Theology.”

[52] See Daniel J. McKaughan, “Cognitive Opacity and the Analysis of Faith: Acts of Faith Interiorized through a Glass Only Darkly,” Religious Studies 54:4 (2018): 576–585, https://doi.org/10.1017/S0034412517000440; Teresa J. Morgan, “Introduction to Roman Faith and Christian Faith,” Religious Studies 54:4 (2018): 563–568, https://doi.org/10.1017/S0034412517000427. See also Peter Harrison, The Territories of Science and Religion (Chicago, IL: University of Chicago Press, 2017).

[53] See D. Etienne De Villiers, “Do Christian and Secular Moralities Exclude One Another?” Verbum et Ecclesia 42:2 (2021), https://doi.org/10.4102/ve.v42i2.2308; Frank M. Turner, “The Victorian Conflict between Science and Religion: A Professional Dimension,” Isis 69:3 (1978): 356–376, https://doi.org/10.1086/352065.

[54] See Jeff Hardin et al., The Warfare between Science and Religion (Baltimore, MD: Johns Hopkins University Press, 2018), https://doi.org/10.56021/9781421426181; O’Brien and Noy, “Traditional, Modern, and Post-Secular Perspectives.”

[55] See Asaro, “A Body to Kick”; Bathaee, “The Artificial Intelligence Black Box”; Lemley and Casey, “Remedies for Robots”; Rachum-Twaig, “Whose Robot Is It Anyway?”

[56] Muller, “(Practical) Theology.”

[57] Catherine Keller, Cloud of the Impossible: Negative Theology and Planetary Entanglement (New York: Columbia University Press, 2014), https://doi.org/10.7312/kell17114.

[58] See Adolphe S. Headingley, The Biography of Charles Bradlaugh, 2nd ed. (London: Freethought Publishing Company, 1883); Richard Kaczynski, Friendship in Doubt: Aleister Crowley, J. F. C. Fuller, Victor B. Neuburg, and British Agnosticism (New York: Oxford University Press, 2024), https://doi.org/10.1093/oso/9780197694008.001.0001; Bryan Niblett, Dare to Stand Alone: The Story of Charles Bradlaugh (Oxford: Kramedart Press, 2011); Edward Royle, Radicals, Secularists, and Republicans: Popular Freethought in Britain, 1866-1915 (Manchester and Totowa, NJ: Manchester University Press and Rowman and Littlefield, 1980).

[59] See Robert M. Ryan, “The Genealogy of Honest Doubt: F. D. Maurice and In Memoriam,” in The Critical Spirit and the Will to Believe, ed. David Jasper and T. R. Wright (London: Palgrave Macmillan UK, 1989), 120–130, https://doi.org/10.1007/978-1-349-20122-8_8; Alfred Lord Tennyson, “In Memoriam A. H. H. OBIIT MDCCCXXXIII: 96,” in Works of Alfred Lord Tennyson, ed. Karen Hodder (Ware, Hertfordshire, UK: Wordsworth Editions, 1994), 285–364, https://wordsworth-editions.com/book/works-of-alfred-lord-tennyson/; Saverio Tomaiuolo, “Faith and Doubt: Tennyson and Other Victorian Poets,” in Twenty-First Century Perspectives on Victorian Literature, ed. Laurence W. Mazzeno (Lanham, MD: Rowman & Littlefield Publisher, 2014), https://tinyurl.com/5amm4uxw.

[60] Kaczynski, Friendship in Doubt.

[61] See Beverly Flanigan, Forgiving the Unforgivable (New York: Wiley, 1992); James B. Gould, “A Pastoral Theology of Disenfranchised Doubt and Deconversion from Restrictive Religious Groups,” Journal of Pastoral Theology 31:1 (2021): 35–53, https://doi.org/10.1080/10649867.2020.1824172.

[62] Frank Pasquale, “Two Concepts of Immortality: Reframing Public Debate on Stem-Cell Research,” Yale Journal of Law & the Humanities 14:73 (2013), https://openyls.law.yale.edu/handle/20.500.13051/7319.

[63] Kirk Wegter‐McNelly, “Religious Hypotheses and the Apophatic, Relational Theology of Catherine Keller,” Zygon 51:3 (2016): 758–764, https://doi.org/10.1111/zygo.12266.

[64] Donovan O. Schaefer, “The Fault in Us: Ethics, Infinity, and Celestial Bodies,” Zygon 51:3 (2016): 783–796, https://doi.org/10.1111/zygo.12276.

[65] Emily M. Bender, “Resisting Dehumanization in the Age of ‘AI,’” Current Directions in Psychological Science 33:2 (2024): 114–120, https://doi.org/10.1177/09637214231217286. See also Nicole Dewandre on the “relational self”: “The real culprit is not algorithms themselves, but the careless and automaton-like human implementers and managers who act along a conceptual framework according to which rationalisation and control is all that matters. More than the technologies, it is the belief that management is about control and monitoring that makes these environments properly in-human.” Nicole Dewandre, “Big Data: From Modern Fears to Enlightened and Vigilant Embrace of New Beginnings,” Big Data & Society 7:2 (2020), https://doi.org/10.1177/2053951720936708.