The Image and the Idol: A Theological Reflection on AI Bias

Bias is a rightly unpopular phenomenon in our day but, in one sense, we are all and must be “biased.” When we deliberate about an action—when we wish to select a course other than that suggested by impulse, instinct, and habit—we must capture and interpret reality; we must schematize it to make sense of and direct our action within and toward it. Even before such deliberation, interpretive reductions are part of our biological makeup, e.g. in our selective sensitivities to the impinging environment, in how our nervous system gives stimuli their felt quality, and in our uniquely rational tendency to wonder about and to conceptualize the things that we experience.

Whether inborn, habitual, or carefully considered, our reductions can yield insight. We are revolted by the smell of rot, we tell children that some berries are edible and others are “only for birds,” we discuss “race” as both real and socially constructed, and we talk about degrees of “realism” in our accounts of how we know the world. On the other hand, our perceptual and conceptual specifications are undesirable “biases” when they are not so much selective of the scope of human life and action as they are distortive within it.

That I cannot (but bees can) see ultraviolet-reflecting markings on flowers[1] or that fire pains me are examples of selectivity. A three-month-old’s greater comfort with own-race faces (an outgrowth of an innocuous preference for the primary caregiver’s face)[2] or a child’s expectation that a secretary will be female are bounds that ought to be expanded by rearing, lest they become biases governing one’s life. What is beneficial to family bonding and statistically accurate in waiting rooms is distortive when it becomes a conscious or unconscious norm shaping one’s experience of society and one’s behavior within it.

The tension between how our images of reality make it navigable and how they may distort it is markedly apparent in the phenomenon of AI bias. Contemporary machine learning techniques—especially the “deep neural networks” that dominate today’s landscape—have yielded AI systems that, despite astounding successes, are notorious for their biases.[3] Especially in the areas of race and gender, these biases range from the jarring (e.g. Barack Obama’s pixelated face is “reconstructed,” i.e. confabulated in high resolution, as Caucasian)[4] and the troubling (e.g. Amazon’s face recognition software, marketed to law enforcement agencies, was 31% less accurate at determining the gender of women of color than that of light-skinned men),[5] to the life-changing (e.g. Amazon’s experimental hiring algorithm discriminated against women)[6] and even the frightening (e.g. prison sentences are influenced by racially correlated inaccuracies in the COMPAS algorithm’s prediction of recidivism;[7] a hospital’s software for predicting care needs consistently deprioritized equally sick Black patients).[8]

Biases can arise from the dataset on which a system is trained, from the architecture of the system itself, from human interpretation of its results, and from combinations of such factors. Each etiology calls for its own solution. However, the general predicament can be illuminated as it applies to our own ethical lives when it is set within early Christian discourse on the “idol.” For this, I look especially to the North African theologian and bishop Augustine of Hippo (354–430).

An Augustinian reflection on AI bias suggests three claims: First, technology and idolatry become intertwined when humans extend technology toward an unfettered domination of the world. Second, contemporary AI’s signature success, the deep neural network, is open to idolatrous misuse at several levels: both because it is an image that can replace reality and because it somewhat echoes the character of the idolatrous mind. Third, Augustine’s solution to idolatry—God’s self-revealing incarnation as one of us—suggests that optimistic technological imaginaries of an AI-driven future are essentially inadequate to the needs of human life.

Technology and Idolatry

Tools function to extend and to facilitate the human will, working upon the world. There is nothing inherently evil in this. Evil, Augustine would tell us, is located, rather, in the potential choice to reduce all things to the role of fulfilling one’s own desires. The tool employed for this end becomes an idol.

Augustine calls this totalizing instrumentalization “pride” (Latin, superbia)—not a healthy regard for one’s accomplishments, but a preference for domination rather than self-gift. He explains that superbia originates when “the soul abandons [God]” as its highest aspiration and seeks to become “its own satisfaction,” “a kind of end to itself.”[9] Our personhood is constituted by God the creator; and in relationship with him we will find our final fulfillment. To be one’s own satisfaction, then, one must escape one’s need for relationships with others and, ultimately, with God.[10] To do this, superbia subordinates all things to oneself, judging them according to how they satisfy one’s own desires. Therefore, “more is often given for a horse than for a servant, for a jewel than for a maid,” because “the necessity of the needy or the desire of the pleasure-seeker . . . does not consider a thing’s value in itself” but rather “how it meets one’s need” or “pleasantly titillates the bodily sense.”[11]

Superbia extends this self-centered evaluation of the world into an illusory domination of it by “idolatry.” For Augustine, idolatry does not fundamentally mean offering incense before graven images, although one might do this.[12] Rather, the idolater replaces the true God with some lower reality—a reality that can be comprehended within the idolater’s own horizon of valuation and power. And since this reality is controlled by the idolater, the idolater is thus covertly set at the pinnacle of all hierarchies, accomplishing total domination by ignoring all that cannot be controlled through the idol. The Babylonians sacrificed to their statues to gain harvest-bringing storms and peace-bringing victories; Ebenezer Scrooge set money as his horizon and was self-blinded to all that money could not buy.

Both cases are idolatrous, because both attempt to deny one’s need for God by positioning oneself as master of the levers of what really defines the universe. It is human to navigate reality in light of images, schemas, and devices that are oriented to action within reality. These tools become idols when superbia clings to them in place of reality, denying all that cannot be compassed within their horizon of apprehension and control.[13]

Deep Neural Networks as Image and as Idol

The most prominent tool of contemporary artificial intelligence is the deep neural network. Elsewhere, I wrote:

Artificial neural networks receive a pattern of information through input nodes, which are connected with various strengths to layer upon layer of further nodes. At each particular node, when the sum of the incoming connections exceeds some pre-set threshold, that node will fire and its own signal will be transmitted variously to nodes on a further layer, and so on. If you put in a pattern at the beginning, it is transformed as its elements are recombined and processed until something else comes out on the final layer of the network. A network can be “trained” to produce the desired behavior by adjusting the strengths of its connections, thus adjusting the contribution made by each node to each recombination and, in due course, to the final result.

I then offered an example:

A piano offers a poor analogy but a useful image. If you have ever shouted into a piano with its sustaining pedal held down, then you have heard its tuned strings resonate with the different frequencies of your shout. One receives back a sort of echo, not of one’s words but of the tones of one’s voice. Similarly, as a neural network is tuned (i.e. as its connection strengths are adjusted), it begins to resonate with the entangled relations that are implicit in our world, including relations that cannot easily be discerned or logically represented by human investigators. By its training, however, the network does not just echo but transforms its input in order to make explicit the relations that are of interest to the trainer.
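For the technically inclined, the mechanism can be made concrete in a few lines of code. The sketch below is merely illustrative (its layer sizes, random weights, and threshold function are assumptions of mine, not those of any deployed system), but it shows the pattern-in, pattern-out transformation just described:

```python
# A minimal sketch of a feed-forward network, for illustration only.
# Layer sizes, random weights, and the threshold function are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, biases):
    # Each node sums its weighted incoming connections; the rectifier plays
    # the role of the threshold above which a node "fires."
    return np.maximum(0.0, weights @ inputs + biases)

# Connection strengths for three layers. "Training" a network means adjusting
# these numbers until the final layer produces the desired behavior.
w1, b1 = rng.normal(size=(16, 8)), np.zeros(16)
w2, b2 = rng.normal(size=(16, 16)), np.zeros(16)
w3, b3 = rng.normal(size=(2, 16)), np.zeros(2)

pattern = rng.normal(size=8)   # a pattern enters through eight input nodes
h = layer(pattern, w1, b1)     # it is recombined on the first hidden layer
h = layer(h, w2, b2)           # and recombined again on the next
output = w3 @ h + b3           # until something else comes out at the end
print(output)
```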

The deep neural network is so called because it has not two or three but dozens or more layers, with perhaps millions of connections. Such a network, when trained, becomes a sort of image. As AI researcher Kate Crawford explains, it predictively generalizes, performing an “induction” by “learning from specific examples . . . which data points to look for in new examples.” One might hope that, given a rich enough set of examples, the trained network could accommodate the messiness of the real world without reductive bias. As noted above, this has not proven to be the case. Instead, deep neural networks often manifest “systematic or consistently reproduced classification error[s] . . . when presented with new examples.”[14] This is bias. Whatever the particulars of this or that case, Augustine’s theological analysis discloses deeper principles of bias in three potentially idolatrous moments: human collection of training data; human interpretation of network activity; and the network’s echoing of the human mind.

Idolatrous Training: The Reduction of the World in Training Data

First, idolatry is seeded in the collection of training data. “Classifications,” writes Crawford, “are technologies that [both] produce and limit ways of knowing, and they are built into the logics of AI.”[15] In presuming to reduce the world accurately to the terms definable within a dataset, system designers exercise a “power to decide which differences make a difference.”[16] Often and without acknowledgment, these decisions “flatten complex social, cultural, political, and historical relations into quantifiable entities”[17] that may or may not represent them accurately. On the other hand, to avoid such choices, designers may train systems on numerous measurements gathered by countless devices across multitudinous interactions, in the hope that, discovering the deep harmonics of this “big data,” the network will have laid hold of the reality of importance. But here, too, we choose to believe that what can be measured is all that need be measured, allowing the “affordances of [our] tools [to] become the horizon of truth”[18] that determines our action in the real world.

These approaches are fraught with idolatrous potential because the supposedly context-free human is made master over the domain in question. By our focus on “big data” as a self-constituted and suitable stand-in for the world, we deny our role in choosing it; we seek independence from needing to have a point of view; and so we practice the potentially prideful illusion of mastery, with data as a proxy for our own ultimacy.[19] We deny our dependence upon and foundation in a larger reality; we deny our limitations by making ourselves arbiters of the relevant data-space.

Idolatrous Interpretation: The Computation and Output of Neural Networks

Idolatry beckons also in that the “aboutness” of machine learning depends on our intentionality. A series of voltage levels on microscopic transistors does not supply its own conceptual interpretation; we supply it by framing the device’s activities in our own conceptual space, just as we do for the smudges that we call words in the cellulose aggregates that we call books.[20] The machine learning system maps data inputs to predictive or action-selecting outputs in light of our purposes—but are those purposes honored by the input data and the way that we have trained the outputs? Is our conceptual framing of the network supported by its activity?

Neural networks are “opaque” in that their interior sensitivities are not wholly interpretable.[21] Computer scientist Peter Norvig argues that, at best, their statistical attunement “describes what does happen” but—“mak[ing] no claim to correspond to the generative process used by nature”—it “doesn’t answer the question of why[22] it happens.”

Therefore, the network cannot signal whether its apparent success actually engages the phenomenon of interest directly or instead latches onto some correlated but non-determinative variable. For example, a network that takes in prisoners’ data points and outputs their chance of recidivism may appear to reach significant accuracy across a convict population. However, the network may simply have learned to prioritize convicts from high-crime zip codes—like the system that flagged as bone fractures all x-rays that happened to come from the hospital with the highest proportion of trauma cases. Both methods, despite putatively high “accuracy” across a population, are unjust and hardly meaningful when applied to individuals.
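A toy computation can make the point vivid. In the following sketch (all rates are invented for illustration, not drawn from any real dataset), a “model” that ignores every fact about the individual and predicts from residence zone alone still scores about 75% accuracy:

```python
# A toy illustration of proxy-driven "accuracy." All rates are invented.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Suppose the outcome occurs for 70% of records from zone A and 20% from zone B.
zone_a = rng.random(n) < 0.5                          # True = zone A
outcome = rng.random(n) < np.where(zone_a, 0.7, 0.2)  # the label to be predicted

# A "model" that never looks at the individual: it predicts the outcome
# for everyone in zone A and for no one in zone B.
prediction = zone_a

accuracy = (prediction == outcome).mean()
print(f"aggregate accuracy from the proxy alone: {accuracy:.0%}")  # about 75%
```

The population-level score conceals the fact that no individual was ever assessed as an individual.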

Alternatively, the network may not be accurate at all—but recognizing its bias may take time. The system that de-prioritized Black hospital patients did so because they had incurred fewer health expenditures—but health expenditures are a useful proxy for sickliness only if sickly persons have access to healthcare, and many African Americans do not.[23] Only the long-term failure of care outcomes, and the correlation of care allocation with race, would reveal the problem.
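A toy comparison shows how the proxy fails (the figures below are invented for illustration; they are not drawn from the study in question):

```python
# Two equally sick patients; one has less access to care, hence lower past costs.
# All figures are invented for illustration.
patients = [
    {"name": "A", "chronic_conditions": 4, "access_to_care": 1.0},
    {"name": "B", "chronic_conditions": 4, "access_to_care": 0.4},
]

for p in patients:
    # Observed spending tracks access as much as sickness.
    p["past_cost"] = p["chronic_conditions"] * 5_000 * p["access_to_care"]

# Ranking "need" by past cost, as a cost-trained model implicitly does:
by_cost = sorted(patients, key=lambda p: p["past_cost"], reverse=True)
print([p["name"] for p in by_cost])
# ['A', 'B']: patient B, equally sick but underserved, is deprioritized.
```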

In the physical world, a mangled steak quickly proves the dullness of the carving knife; but a black box that seems to perform can be followed blindly because we have labeled the outputs according to our intended horizon of control, in service of which we expect the system to generate a good mapping—whether or not that mapping can actually be achieved. Unable to see immediately whether or not the emperor has clothes, we allow the network to idolatrously replace reality because it gives us the appearance of control over dynamics that we do not understand.

The Neural Network as Idolatrous Mind

Third, the trained network itself resembles Augustine’s analysis of the idolatrous mind. For Augustine—and as modern psychology and philosophy affirm—every act of understanding is shaped by one’s desires and the movements of will. For Augustine, we begin by apprehending something through the senses; then—implicitly or explicitly—we judge it as good (i.e. as real)[24] by clinging to it in some way with our approbation or love. So clinging, we conceive a conceptual understanding (verbum mentis).[25] This understanding is shaped by our attraction to some aspect of the thing known; knowledge is contoured to the knower.[26] Or, as Thomas Aquinas would later hold, quidquid recipitur ad modum recipientis recipitur; whatever is received, is received according to the measure or manner of the receiver.[27] Many such acts of understanding, according to Augustine, weave the habitual fabric upon which reality is known.

More perfectly than any particular user of the machine learning system, the network itself is an image of an idolatrous fabric of mind: its interior workings represent those facets of reality that we have chosen to represent to it, as reshaped in being mapped to the outputs that represent the willed purposes of its designers. And so, to the extent that we rely upon the network for real-world action, we carry forward into the world the stance of will—and only the stance of will—that the network implicitly represents.

The Cost of Idolatry

Is all of this really a problem? After all, it is nothing new to say that, to a hammer, every problem looks like a nail. In other words, if we use neural networks for what they are useful for—if we accurately interpret the scope of meaning of their outputs—then we will not go wrong. But this is not wholly true. The concept of idolatry points to systemic cognitive effects of how we treat AI. Too often, AI is credited with an esoteric power to penetrate reality in a way that transcends the limits of human understanding. While it certainly exceeds our powers of correlation and inference, contemporary machine learning cannot transcend the human horizon, because it maps human-selected data points to human-interpreted outputs of inference and action. If we credit it with more—if we neglect the crucial role of our own will—then, as the psalm warns concerning idols, “Those who make them become like them; so do all who trust in them” (Psalm 115:8). That is, having made our idols to manipulate some sphere, we will be bound by what they can represent.

Concretely, we may think of the case of predictive policing: PredPol and other such systems direct police units to anticipated geographical hotspots. These are selected by networks trained on past reported crime events, weather conditions, ATM and convenience store locations, and other environmental factors.[28] Patrolling these locations appears to reduce the incidence of crime, often less by arrest than by deterrence.[29] However, the apparent objectivity of PredPol’s ongoing training cannot procure some trans-human predictive insight. Crimes are reported by citizens, but also by police.

An elevated predicted crime rate will call forth greater police presence, possibly increasing the assessed crime rate as officers seek (and find) otherwise-invisible infractions. These locations will then appear more crime-ridden than they truly are, further biasing the system. Additionally, over-policing—intensive crime-seeking, more aggressive police responses in presumptively high-crime areas, and more stops of innocent persons—may break down community relations, exacerbating negative outcomes in the long term.[30] When we feign independence from the world that we seek to control—when we ignore the fact that the system feeds on our own AI-driven interventions—then we conform the world to the idol. In that the AI is a means of control and not just of response to the world, we risk deforming the world by our use of it. By policing according to prediction, we procure—or at least we believe we see—the results that were predicted.
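The feedback dynamic can be seen in a toy simulation (all quantities invented): two districts with identical true crime rates, patrols sent to whichever district reported more crime last year, and patrols generating reports of infractions that citizens would never have reported.

```python
# A toy feedback loop: identical true crime, but the patrolled district
# always reports more, so the "prediction" confirms itself. Numbers invented.
TRUE_CRIME = {"district_1": 100, "district_2": 100}
CITIZEN_REPORT_RATE = 0.5   # fraction of true crime reported by citizens
PATROL_FINDS = 20           # extra infractions uncovered by patrols

reports = {"district_1": 52, "district_2": 50}  # a small initial fluke

for year in range(1, 6):
    hot_spot = max(reports, key=reports.get)    # send patrols to the "hot spot"
    for d in reports:
        reports[d] = TRUE_CRIME[d] * CITIZEN_REPORT_RATE
        if d == hot_spot:
            reports[d] += PATROL_FINDS          # patrols find more to report
    print(year, hot_spot, reports)
# district_1's two-report head start hardens into a permanent 70-vs-50 gap,
# though the underlying rates never differed.
```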

Resolution by Incarnation

What may we do about the idolatrous use of biased AI? I do not have the time to unfold a full solution here, but I can say this much: While some address AI bias with case-by-case tweaks,[31] a theological imaginary contemplates radical solutions, asserting that the ultimate context framing all goodness and meaning—the context wherein this problem could actually be resolved—must necessarily transcend human and artificial understanding. In other words, a theological imaginary speaks of God.

The Christian tradition holds that idolatrous superbia is resolved by revelation and by love. God is the always-excessive frame within which alone reality can be understood as it is and not just according to our particular wills; and, since no knowledge is possible apart from our particular loves, a true knowledge will ultimately always love the known thing in God.

For PredPol, this means that, when officers swarm a predicted hot spot, they must arrive not as crime-seeking servants of the idol but as—yes—loving humans seeking to serve other humans. A simple solution, but one that was difficult long before machine learning. Augustine would trace this difficulty back to the distortion of our wills by superbia. We seek idols because we seek to make ourselves gods. What we need is for the fundamental and final context—God himself—to reveal himself on our terms; but to receive this revelation we must be given by God a new love by which to know him as he is, beyond the limitations of our finite wills, not as rivals to him, reducing him to the scope of our lives, but as fellows, participants in his life. Indeed, that this has happened and is happening is the message of Christianity.

Or as another ancient writer put it: God became human, that we might become gods.[32] These reflections are far from what most computer scientists, technologists, and philosophers would consider to be home territory. But that is precisely the point: We can avoid the solipsistic disorientation of the technological idol only when grounded in something quite beyond us, quite beyond our mastery and illusions thereof, and yet very much our home.


[1] Peter G. Kevan, Lars Chittka, and Adrian G. Dyer, “Limits to the Salience of Ultraviolet: Lessons from Colour Vision in Bees and Birds,” Journal of Experimental Biology 204, no. 14 (July 15, 2001): 2571–80.

[2] David J. Kelly et al., “Three-Month-Olds, but Not Newborns, Prefer Own-Race Faces,” Developmental Science 8, no. 6 (November 2005): F31–36.

[3] On the varieties of AI bias, see David Danks and Alex London, “Algorithmic Bias in Autonomous Systems,” in Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, 2017, 4691–97.

[7] Julia Angwin et al., “Machine Bias,” ProPublica, May 23, 2016; Jeff Larson et al., “How We Analyzed the COMPAS Recidivism Algorithm,” ProPublica, May 23, 2016. Compounding the problem, it remains difficult to develop a standard of fairness that would overcome this bias without introducing others; thus Sam Corbett-Davies et al., “A Computer Program Used for Bail and Sentencing Decisions Was Labeled Biased against Blacks. It’s Actually Not That Clear,” Washington Post, October 17, 2016.

[8] Ziad Obermeyer et al., “Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations,” Science 366, no. 6464 (October 25, 2019): 447–53.

[9] Augustine of Hippo, “The City of God, Against the Pagans [413–427],” in St. Augustine’s City of God and Christian Doctrine, trans. Marcus Dods, Nicene and Post-Nicene Fathers, First Series 2 (Buffalo, N.Y.: Christian Literature Publishing Co., 1887), sec. 14.13.

[10] Augustine of Hippo, “The Literal Meaning of Genesis [De Genesi Ad Litteram] [401–415],” in On Genesis, trans. Edmund Hill, WSA, I/13 (Hyde Park, N.Y.: New City Press, 2004), sec. 11.14.18–11.15.20; The Confessions [397–401], trans. Maria Boulding, 2nd ed., WSA, I/1 (Hyde Park, N.Y.: New City Press, 2012), sec. 1.1.1.

[11] Augustine of Hippo, “City of God,” sec. 11.16.

[12] Augustine of Hippo, Enarrationes in Psalmos, ed. Eligius Dekkers and J. Fraipont, CCSL 38, 39, 40 (Turnhout: Brepols, 1956), sec. 115 (Expos. 2).

[13] My account here has some parallels with the Heideggerian “standard critique” of technology as described in Alan Jacobs, “From Tech Critique to Ways of Living,” The New Atlantis, Winter 2021.

[14] Kate Crawford, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (New Haven, Conn.: Yale University Press, 2021), 134.

[15] Crawford, Atlas of AI, 147.

[16] Crawford, Atlas of AI, 132. See also 146.

[17] Crawford, Atlas of AI, 144. See also 133.

[18] Crawford, Atlas of AI, 133.

[19] On ourselves as ultimate, see Veronica Roberts Ogle, “Idolatry as the Source of Injustice in Augustine’s De Ciuitate Dei,” Studia Patristica 14 (Fall 2017): 69–78.

[20] On artifacts in general, see Amie L. Thomasson, “Artifacts and Mind-Independence: Comments on Lynne Rudder Baker’s ‘The Shrinking Difference between Artifacts and Natural Objects,’” APA Newsletter on Philosophy and Computers 8, no. 1 (2008): 25–26. On computational artifacts, I generally follow Gualtiero Piccinini, Physical Computation: A Mechanistic Account (New York: Oxford University Press, 2015). I am also deeply sympathetic with Paul Schweizer, “Computation in Physical Systems: A Normative Mapping Account,” in On the Cognitive, Ethical, and Scientific Dimensions of Artificial Intelligence: Themes from IACAP 2016, ed. Don Berkich and Matteo Vincenzo d’Alfonso, Philosophical Studies Series (Cham: Springer International Publishing, 2019), 27–47.

[21] Judea Pearl, “The Limitations of Opaque Learning Machines,” in Possible Minds: Twenty-Five Ways of Looking at AI, ed. John Brockman, 1st ed. (New York: Penguin Press, 2019), 18. See also Cameron Buckner, “Deep Learning: A Philosophical Introduction,” Philosophy Compass 14, no. 10 (2019): e12625.

[22] Peter Norvig, “On Chomsky and the Two Cultures of Statistical Learning,” 2011, http://norvig.com/chomsky.html. Emphasis original.

[23] Obermeyer et al., “Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations.”

[24] Moral evils like murder are “good” only in, say, involving voluntary motion. However, the act itself forestalls any goodness beyond the bare fact of this motion, in intentionally extinguishing the goodness of one personal life by the agent’s ugly inter-personal attempt at absolute domination.

[25] Augustine of Hippo, De Trinitate Libri XV [399–419], ed. W. J. Mountain, CCSL 50, 50A (Turnhout: Brepols, 1968), bks. 11, 14. Luigi Gioia writes: “The process of knowledge is set off by desire for the object to be known and is completed only through union with the object known through love.”

[26] Luigi Gioia writes that, for Augustine, “intellectual knowledge is not the result of an ‘infusion’ in our mind of a pre-existing reality, but the production of a new reality, a notion, which, for this reason, is compared to the inner-begetting of a word.” See Luigi Gioia, The Theological Epistemology of Augustine’s De Trinitate, Reprint ed. (New York: Oxford University Press, 2016), 200. See also John C. Cavadini, “The Quest for Truth in Augustine’s De Trinitate,” Theological Studies 58, no. 3 (September 1, 1997): 429–40.

[27] See Thomas Aquinas, Summa Theologiae [1265-1273], ed. The Aquinas Institute (Emmaus Academic, 2012), sec. I.12.4. Cognitum est in cognoscente secundum modum cognoscentis. “The thing known [by experience] is in the one knowing according to the disposition/manner of the one knowing.”

[28] John Zerilli et al., A Citizen’s Guide to Artificial Intelligence (Cambridge, Mass.: MIT Press, 2021), 48–51.

[29] Aaron Chalfin et al., “Police Force Size and Civilian Race” (National Bureau of Economic Research, December 14, 2020); “Hot Spots Policing,” The Center for Evidence-Based Crime Policy (CEBCP), accessed July 7, 2021, https://cebcp.org/evidence-based-policing/what-works-in-policing/research-evidence-review/hot-spots-policing/.

[30] Will Douglas Heaven, “Predictive Policing Is Still Racist—Whatever Data It Uses,” MIT Technology Review, February 5, 2021; Andrew Guthrie Ferguson, The Rise of Big Data Policing: Surveillance, Race, and the Future of Law Enforcement, Paperback Ed. (NYU Press, 2020), 78–83.

[31] Irene Chen, Fredrik D. Johansson, and David Sontag, “Why Is My Classifier Discriminatory?,” arXiv:1805.12002 [cs, stat], December 10, 2018.

[32] Adapted from Athanasius of Alexandria, On the Incarnation, trans. John Behr, Popular Patristics Series 44B (Yonkers, N.Y.: St. Vladimir’s Seminary Press, 2012), sec. 54.

Featured Image: Anonymous author possibly from the workshop of Bosch, Augustine Sacrificing to a Manichean Idol, c.1480; Source: Wikimedia Commons, PD-Old-100.

Author

Jordan Wales

Jordan Wales is an Assistant Professor of Theology at Hillsdale College. His research focuses on early Christian understandings of grace and the vision of God, as well as contemporary questions relating to AI. He received his M.T.S. and Ph.D. in Theology from the University of Notre Dame after earning a Diploma in Theology from Oxford and an M.Sc. in Cognitive Science and Natural Language from the University of Edinburgh.
