The Application of Bernard Lonergan's Cognitional Structure in Ethical AI

Late one afternoon in Detroit, a man named Robert Williams was arrested on his front lawn in front of his horrified family. He was hauled off to jail and held for nearly thirty hours for a theft he did not commit—all because a facial recognition AI claimed he was the culprit.[1] There was just one problem: the AI was wrong. The surveillance image was blurry, and aside from being African American men of similar build, Robert and the actual suspect looked nothing alike. “How does one explain to two little girls that a computer got it wrong, but the police listened to it anyway?” Robert later lamented. His daughters had seen their innocent father taken away in handcuffs due to a machine’s mistake, an error that left a permanent scar of injustice on the family.

This real-life incident starkly illustrates the ethical stakes of employing artificial intelligence in certain domains. As AI systems quietly permeate daily life—from the algorithms curating our news feeds to the automation guiding hiring decisions and policing—each promises efficiency and insight, but each also carries the risk of unintended consequences. A wrongful arrest, a biased loan decision, or a distorted social media bubble can upend lives and erode trust. How do we respond to such challenges? How can we ensure these powerful technologies serve justice and the common good, rather than undermine them? We need more than technical fixes; we need wisdom and moral discernment.

In search of that discernment, it helps to turn to an unexpected guide: the twentieth-century Jesuit philosopher and theologian Bernard Lonergan. Lonergan is known for articulating five “transcendental precepts” as habits of mind and heart for pursuing truth and goodness: Be attentive, be intelligent, be reasonable, be responsible, and be loving.[2] These principles were originally formulated to describe how humans authentically come to know and decide, rooted in what Lonergan called our “unrestricted desire to know”—a God-given drive toward truth and value. Though he lived long before the rise of modern AI, Lonergan’s insights provide a remarkably relevant compass for navigating the ethical complexities of the AI age.[3] Each precept offers a lens through which we can examine our interaction with AI: from what we notice, to how we think, judge, act, and ultimately love in a world increasingly shaped by algorithms.

What follows is a narrative journey through these five precepts applied to our contemporary encounter with AI. By being attentive to how AI shapes our experience, being intelligent about its capabilities and limits, being reasonable in weighing its promises against its perils, being responsible in how we design and use it, and being loving in ensuring it serves human dignity, we can chart a path that is both technologically savvy and morally sound. In doing so, we answer the urgent call to guide AI with human wisdom, rather than be guided by AI without reflection.

Lonergan’s Precepts in the AI Age

Lonergan’s five precepts form a sequence of steps for ethical discernment that can ground us amidst the rapid advances of AI.[4] They move from observation to understanding, to judgment, action, and ultimately orientation toward the good. Woven into a narrative, these principles can transform how we approach everyday encounters with intelligent machines. Let us consider each precept and its meaning for the age of AI.

Be Attentive

Pay attention to what is really there. This first precept urges us to consciously notice our experience, to actively perceive rather than drift passively. In the context of AI, being attentive means recognizing how our perception of the world is increasingly mediated by algorithms—often in subtle ways we barely notice.

Consider the simple act of reading the news. Many of us scroll through feeds on Facebook, Twitter, or Google News, trusting that we are seeing a broad slice of current events. In truth, sophisticated AI algorithms are curating what we see, selecting stories based on our past clicks, location, and profile. Over time, this personalization can wrap us in an invisible cocoon of like-minded content, a filter bubble that reinforces our biases. “If the algorithm shows you only the news that it thinks you are going to like . . . you may not know that these other perspectives even exist,” one computer scientist warns, noting that because it is all done “behind the scenes,” we scarcely notice our information diet being narrowed.[5] Without attentiveness, we can live under the sway of algorithmic preferences, mistaking a partial view for the whole truth.
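
This narrowing is not a conspiracy; it is a feedback loop, and even a toy model makes it visible. The sketch below is a deliberately simplified illustration in Python (not any platform's actual code; the topics and the reinforcement rate are invented for the example). It shows how a feed that slightly favors whatever it just showed can, within a few hundred iterations, come to be dominated by one or two topics:

```python
import random

# Toy model of click-driven personalization (illustrative only; no real
# platform works this simply). Topics start with equal weight, and every
# story shown slightly reinforces its own topic.
random.seed(42)
topics = ["politics", "sports", "science", "arts", "local news"]
weights = {t: 1.0 for t in topics}

for _ in range(300):
    # The feed picks a topic in proportion to the current weights...
    shown = random.choices(topics, weights=[weights[t] for t in topics])[0]
    # ...and each view nudges the shown topic's weight upward.
    weights[shown] += 0.1

total = sum(weights.values())
for t in sorted(topics, key=weights.get, reverse=True):
    print(f"{t:12s} {weights[t] / total:6.1%} of the feed")
```

The numbers are arbitrary, but the shape is not: small preferences, compounded silently, become a narrowed world.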

Being attentive in the AI age, then, requires a kind of digital mindfulness. We have to ask: What is my smart assistant not telling me? Which voices might be missing from my social media feed? How are AI systems shaping my perception of reality? This might mean noticing that after watching one YouTube video, the “Up Next” suggestions start pulling us down a more extreme path, or realizing that a navigation app’s “fastest route” comes in exchange for the traffic data the app quietly collects from us. It means paying attention to potential biases in AI outputs—like when an image recognition app consistently struggles with darker-skinned faces, or when an autocomplete suggestion seems tinged with prejudice. In short, Be Attentive calls us to remain awake to what AI is doing around us and to us. Only by seeing clearly can we hope to respond justly.

Be Intelligent

Next comes understanding. Being intelligent means digging into the why and how—seeking insight, making connections, gaining knowledge. Lonergan’s call to be intelligent invites us to move beyond surface impressions and ask deeper questions about AI systems: How do they work? What are their limits? What assumptions are they built on?

In practice, this might involve learning some basic facts about AI. For instance, most AI algorithms learn from historical data. That means if the data is flawed or biased, the AI’s “intelligence” will inherit those flaws. A striking example emerged a few years ago when Amazon developed an experimental AI to streamline hiring. The goal was to have a program sift through résumés and identify top candidates. But the project hit a snag: by 2015 the engineers realized the AI was systematically penalizing female candidates.[6] Why? The algorithm had trained itself on ten years’ worth of résumés, most of which came from men (reflecting the tech industry’s male dominance). In effect, the machine concluded that being male was a prerequisite for being a good engineer. It began downgrading résumés that mentioned “women’s” (as in “women’s chess club captain”) or that came from women’s colleges. The AI was doing exactly what it was taught—finding patterns in data—but it lacked the understanding to know that those patterns reflected past bias, not future merit. Amazon wisely scrapped the tool once its blind spots became apparent.
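
The mechanism is easy to see in miniature. The sketch below is a toy caricature in Python, not Amazon's model; the sample résumés and hiring labels are invented. It scores each résumé word by how often that word appeared among past hires versus past rejections, and with a male-skewed history the word "women's" comes out penalized even though it says nothing about merit:

```python
from collections import Counter

# Toy caricature of learning from biased hiring data (illustrative only).
# Each record: (résumé text, 1 if the candidate was hired, else 0).
history = [
    ("captain of chess club", 1), ("software engineer intern", 1),
    ("captain of women's chess club", 0), ("women's college graduate", 0),
    ("software engineer intern", 1), ("robotics team lead", 1),
]

hired, rejected = Counter(), Counter()
for resume, was_hired in history:
    (hired if was_hired else rejected).update(resume.split())

def word_score(word):
    # Positive if the word co-occurred with past hires, negative otherwise.
    return hired[word] - rejected[word]

for word in ["engineer", "captain", "women's"]:
    print(word, word_score(word))
# "women's" scores negative purely because of who was hired before:
# the pattern encodes past bias, not future merit.
```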

This cautionary tale underscores why being intelligent about AI is essential. We must strive to understand an AI system’s potential as well as its limits. Yes, AI can detect patterns in vast datasets far beyond human capacity, yielding helpful insights—from spotting early signs of disease in medical scans to predicting weather disasters. But AI is not magic, nor is it infallible. It does not truly “understand” context or meaning; it finds correlations, not causes. And it will faithfully amplify whatever is embedded in its training data.

In the realm of law enforcement, for example, some cities adopted predictive policing algorithms hoping to reduce crime by allocating police patrols based on crime statistics. In reality, these tools often ended up perpetuating racial bias. They sent officers to the same over-policed neighborhoods again and again, because the historical data itself was skewed by decades of disproportionate policing in minority communities.[7] The result can be a vicious cycle: more patrols yield more recorded incidents, which then justify more patrols, all the while unfairly targeting people of color.
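
A toy simulation makes the cycle concrete. In the sketch below (illustrative Python with invented numbers, not any department's actual model), two districts have identical true crime rates, patrols record only what they are present to see, and each year one patrol unit moves toward whichever district recorded more crime:

```python
# Toy model of the patrol feedback loop (illustrative assumptions: equal
# true crime in both districts; recorded crime tracks patrol presence,
# since patrols record only what they are present to see).
patrols = {"District A": 6, "District B": 4}  # A starts slightly over-policed

for year in range(1, 6):
    recorded = {d: patrols[d] * 2 for d in patrols}
    hot = max(recorded, key=recorded.get)
    cold = min(recorded, key=recorded.get)
    if patrols[cold] > 0:
        # Reallocate one unit toward the district with more recorded crime.
        patrols[hot] += 1
        patrols[cold] -= 1
    print(f"Year {year}: recorded {recorded}, next year's patrols {patrols}")
# The initial imbalance snowballs: more patrols yield more recorded
# incidents, which justify still more patrols, though the underlying
# crime rates never differed at all.
```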

An intelligent approach recognizes this pitfall. It demands transparency about how an AI makes decisions and vigilance about the quality of its data. It also means acknowledging what AI cannot do: it cannot assess value, morality, or the full uniqueness of a human situation. Being intelligent is not about having all the technical know-how, but about cultivating a prudent understanding of AI’s capabilities and weaknesses. In Lonergan’s terms, it is the difference between raw data and meaningful insight. We need that insight to use AI wisely.

Be Reasonable

Knowledge alone is not enough; we must judge what to do with it. Being reasonable means applying critical evaluation and sound judgment to the insights we have gained. It involves asking Is this true? Is this good? and weighing options in light of evidence and ethical principles. In an AI-saturated world, being reasonable requires us to sift through the hype and fear around technology and assess AI’s role with clear-eyed honesty and moral clarity.

One aspect of reasonableness is deliberating about AI’s trade-offs and consequences. Just because we can do something with AI does not automatically mean we should. For example, AI-driven content filters on social media can block extremist propaganda and hate speech—but they might also unintentionally censor legitimate discussion or artistic expression. Is the trade-off worth it? Similarly, equipping autonomous drones or weapons with AI might protect our soldiers in combat, but what are the risks of errors or the moral cost of delegating life-and-death decisions to a machine? Or imagine a self-driving car faced with an impossible accident scenario: should it swerve to avoid a pedestrian if doing so would sacrifice its passenger? There is no simple algorithm for such ethical dilemmas; they force us to clarify human values and priorities. A reasonable approach does not leave these questions to engineers alone; it insists that ethicists, philosophers, and communities have a voice at the table when weighing benefits versus harms.

Being reasonable also means critiquing the grand narratives that often accompany new technology. One such narrative is the myth of technological determinism—the idea that AI’s advance is inevitable and essentially beyond our control. This outlook, sometimes encouraged (implicitly or explicitly) by Big Tech visionaries, can lull society into a passive acceptance of whatever Silicon Valley serves up.[8] If “the AI revolution” is unstoppable, why bother questioning it? But Lonergan’s framework urges us to reject that fatalism.

Machines do not just evolve by themselves; humans design, train, and deploy them. The future of AI will be shaped by human choices and values, whether we acknowledge that or not. Reason requires that we reclaim our agency. We should scrutinize bold claims (for instance, that AI will soon surpass human intelligence in all fields) and distinguish realistic progress from sci-fi fantasy or marketing spin. We should also keep in view AI’s intrinsic limits: no matter how sophisticated, an AI lacks self-awareness, empathy, and the innate orientation toward truth and goodness that humans possess.[9]

In other words, however “smart” it seems, an AI is not a moral agent. Being reasonable guards us from attributing more wisdom or authority to AI than it deserves. It keeps us centered on the fundamental truth that we are responsible for the tools we create. This critical, truth-seeking posture is vital if we are to steer technology toward genuine progress and avoid being swept up in unexamined enthusiasm.

Be Responsible

With knowledge illumined by reason, Lonergan’s fourth precept calls us to be responsible—to act in accordance with our best judgment of what is right. Responsibility, in this context, has a twofold application: it speaks to those who design and deploy AI systems, and it speaks to all of us who use or are affected by them. It is a reminder that ethics is ultimately about concrete decisions and actions, not just abstract principles.

For AI designers and developers, being responsible means consciously building systems that align with ethical values and serve the common good. This could include following emerging best practices for “ethical AI” design: auditing algorithms for bias, ensuring transparency about how decisions are made, and including diverse voices in the development process. It also means resisting the pressure to rush a product to market without adequate safety checks. For instance, if a tech company is developing facial recognition software, a responsible approach might involve setting strict accuracy benchmarks across different demographic groups before deployment, and even choosing not to sell the product to certain end-users (like government agencies with poor human-rights records). The key is that developers acknowledge their accountability for the societal impacts of their creations.[10] They cannot simply say “the algorithm decided” as if that absolves them—programmers write the algorithm, and the data it learns from reflects human history. Thus, responsibility must be “designed in” from the start.
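
What might such a benchmark look like in practice? The sketch below is a minimal illustration in Python; the 95 percent accuracy floor, the two-point gap, and the group labels are assumptions for the example, not an industry standard. It computes a model's accuracy for each demographic group and flags any group that misses the floor or lags the best-served group:

```python
def audit(results, floor=0.95, max_gap=0.02):
    """Flag demographic groups whose accuracy misses the floor or lags
    the best-served group by more than max_gap (illustrative thresholds)."""
    by_group = {}
    for group, correct in results:
        hits, total = by_group.get(group, (0, 0))
        by_group[group] = (hits + correct, total + 1)
    accuracy = {g: hits / total for g, (hits, total) in by_group.items()}
    best = max(accuracy.values())
    failures = [g for g, acc in accuracy.items()
                if acc < floor or best - acc > max_gap]
    return accuracy, failures

# Hypothetical evaluation data: (demographic group, was the match correct?).
results = ([("group_a", True)] * 98 + [("group_a", False)] * 2
           + [("group_b", True)] * 90 + [("group_b", False)] * 10)
accuracy, failures = audit(results)
print(accuracy)                          # {'group_a': 0.98, 'group_b': 0.9}
print("hold deployment for:", failures)  # ['group_b']
```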

For everyday users and citizens, being responsible with AI means taking ownership of how we let these technologies into our lives. It is about active engagement rather than passive consumption.[11] In practical terms, this might involve simple actions and choices that, collectively, steer AI use in a better direction. For example:

  • Set personal boundaries with AI: limit your reliance on AI gadgets that might be addictive or intrusive. A responsible user might decide to turn off algorithmic autoplay on a video platform to avoid endless scrolling, or schedule regular “digital Sabbaths” to stay in control of their attention and avoid being constantly manipulated by algorithms.
  • Demand transparency and fairness: support companies and products that are open about their AI practices and that prioritize user privacy and data ethics. This could mean using a search engine that does not track your every move, or choosing not to use an app that requires invasive permissions without justification.
  • Advocate and educate: being responsible extends to the public sphere—engage in conversations and policy debates about AI. One might advocate for laws that prohibit clearly harmful uses (for instance, banning the use of biased AI in hiring or policing) and support initiatives for robust oversight of AI deployments.[12] Likewise, educating oneself and others (in our families, churches, and communities) about both the benefits and risks of AI is a responsible way to build a more informed society.

Each of these actions reflects the insight that we are not helpless spectators of the AI revolution. Rather, we are stewards of these powerful tools. In theological terms, being responsible with technology is part of our vocation to be good stewards of creation—now expanded to include the digital creations of human ingenuity. Just as we are responsible for how we treat the environment or our neighbors, we bear responsibility for how we “treat” AI (and how AI treats others). And this leads to Lonergan’s final and ultimate precept, which gives all the others their deepest meaning.

Be Loving

Lonergan often described the fifth precept as “Be in love,” by which he meant an unconditional orientation toward value, goodness, and ultimately God.[13] To be loving is to put charity at the center of all our knowing and doing. It is no coincidence that the greatest commandments in the Christian faith are to love God and love our neighbor. Any ethics that stops at responsibility without ascending to love would remain cold and incomplete. Love perfects and crowns the ethical life, directing it toward self-gift and the genuine good of others.

What does it mean to bring love into our relationship with AI? It means, first and foremost, keeping human dignity and the common good as our north star. Be Loving reminds us that every technical decision is ultimately a decision about people—about how we honor or dishonor the image of God in others. If an AI application, no matter how impressive, undermines human dignity or severs human connection, love compels us to question or even reject it. Conversely, if a technology can be harnessed to uplift people, to heal, to include, to empower the marginalized, then love urges us to pursue and support it.

On a personal level, being loving in the digital age might involve deliberate choices about technology’s role in our relationships. For instance, we might use AI in ways that foster community rather than isolation. This could be as simple as leveraging a messaging app to stay in touch with distant family (instead of doomscrolling alone), or as complex as designing church outreach programs that use data analytics to better serve the poor. It also means guarding against the ways AI can fray our social bonds. Social media algorithms often prioritize posts that provoke strong reactions, which can sow division. A loving approach would seek to counteract that—perhaps by intentionally reaching out offline to someone with whom we disagree, instead of trading barbs on a Facebook thread. In other words, love should shape the way we use technology, not the other way around. As one recent Vatican document noted, if AI is used to help people foster genuine connections, it can contribute positively to the flourishing of the person. But that requires a conscious choice to put human well-being first.

On a broader ethical level, Be Loving translates into a commitment that AI must ultimately serve what is truly good for humanity. Pope Francis has emphasized this repeatedly, urging that new technologies be employed in ways that promote human dignity and the common good.[14] Love in action means we measure AI’s success not just by profitability or efficiency, but by how it impacts the most vulnerable among us. Does a banking algorithm treat the poor fairly? Does a content recommendation system protect children from harm? Do our AI-driven conveniences also care for those who may lose jobs due to automation?

A lens of love keeps sight of the person behind the data point. It insists that people are ends, not means, and that technology should be a tool of care, not an excuse to avoid caring. In Christian terms, if we create AI systems that help “the least of these” (Matt 25:40)—such as diagnostic AIs for underserved hospitals or translation apps for refugees—then we are on the path of love. If we do the opposite, allowing technology to devalue or exploit people, we have lost the plot of our own humanity.

Practical and Theological Reflections

How might these principles take root in our communities—especially our faith communities? The task of guiding AI ethically is too large for any one group; it requires what Pope Francis called a broad dialogue between believers and non-believers on fundamental moral questions raised by technology.[15] The Church, with its two millennia of wisdom about human nature, sin, and grace, has a crucial role to play in this cultural discernment. But it must do so humbly and collaboratively, learning about the technology even as it brings theological and ethical insights to bear.

First, faith communities can lead by example in the discernment of AI’s use. Just as churches have learned to evaluate media like television or the internet critically, so too with AI. For instance, a parish might host a workshop on “Christian life in a digital age,” helping parishioners reflect on their use of smartphones, social media algorithms, and AI assistants in light of their faith. Pastors and ministry leaders could encourage mindful practices: perhaps suggesting a “tech examen” at day’s end, where people prayerfully consider how their use of digital tools that day drew them closer to or farther from God and neighbor. Such reflections ground the acts of being attentive, intelligent, reasonable, responsible, and loving in a spiritual context. They also send a message: it is not Luddite to question technology—it is wise.

Second, the Church can contribute to shaping public ethics and policies on AI. The Catholic tradition of social teaching provides a rich framework (principles like the common good, subsidiarity, human dignity, preferential option for the poor) that can fruitfully inform tech ethics. Take the principle of the common good: it urges that we consider the benefit of all people, especially the vulnerable, in any social decision. Applied to AI, this could mean advocating for regulations that protect communities from harmful AI-driven decisions or pushing for equitable access to beneficial AI (like using AI in medicine to serve impoverished areas, not just the wealthy).

In recent years, the Vatican itself has stepped into this conversation—co-sponsoring the Rome Call for AI Ethics and convening experts through events like the Minerva Dialogues, which bring tech leaders and theologians together to discuss the societal impacts of AI.[16] Such interdisciplinary conversations are vital. They break down the silos that often exist between tech developers and humanistic scholars. It is a hopeful sign to see computer scientists, philosophers, and bishops sitting at the same table, grappling with questions of machine learning and moral responsibility. The more we encourage these dialogues—in academic conferences, in government hearings, in ecumenical gatherings—the more we build a shared understanding that can guide AI toward positive ends.

Theologically, we might view the emergence of AI through the lens of human co-creativity under God. Our ability to invent complex algorithms and intelligent machines is an expression of the creativity God bestowed on humanity. In the book of Genesis, humans are tasked to “till and keep” the garden of creation (Gen 2:15); we are also called to “participate responsibly in God’s creative action” in the world.[17] Developing technology is one way we exercise that vocation. But with any such exercise comes moral responsibility. Just as using our creativity to build a bridge or a medical breakthrough carries ethical implications, so does creating AI.

The Church can remind us that technology is not morally neutral—it inherits the values of its makers and users. Therefore, building AI is not just an engineering task but also a moral task. Framing it this way can inspire Christians in tech fields to see their work as a form of stewardship or even discipleship: coding to serve Christ by serving others. It can also remind all of us that our ultimate loyalties cannot lie with technology or progress for its own sake, but with the God who entrusts us with these gifts for the good of our neighbors.

Finally, a practical step forward is education and dialogue at the grassroots. Parishes, universities, and Christian groups could foster study circles that read about AI ethics, ensuring that the conversation is not limited to experts. Such initiatives demystify AI for ordinary folks and dispel both naive optimism and excessive dread. They also equip believers to bring their voice to the public square, advocating for an AI future that reflects our values. Imagine church-based committees that, much like social justice committees or creation care teams, specifically focus on technology ethics—helping the community stay informed and engaged on issues from data privacy to deepfakes. These could partner with local tech companies or policymakers to provide feedback and guidance rooted in moral principles. When theologians, ethicists, engineers, and users come together in goodwill, we have a better chance of crafting AI systems and policies that uplift rather than harm.

Conclusion

The rapid ascent of AI presents us with choices that will shape the soul of our society. Will we drift along, allowing automation and algorithms to dictate the terms of human life? Or will we approach these tools with discernment and intentionality, ensuring they contribute to human flourishing? Bernard Lonergan’s transcendental precepts—Be Attentive, Be Intelligent, Be Reasonable, Be Responsible, Be Loving—offer a timeless framework for exactly this kind of discernment. They remind us that however novel our technology, the fundamental processes of good judgment and moral action remain the same. We must open our eyes to reality, seek understanding, judge wisely, act ethically, and center it all in love.

For people of faith, and indeed all people of goodwill, the call is to apply these habits of mind and heart to the realm of AI. This means cultivating a culture of AI discernment in our personal lives, our communities, and our institutions. It means refusing to be overawed by the shiny promises of tech, and equally refusing to demonize technology, instead taking the harder path of guided engagement. It means insisting that humanity remains at the helm, charting the course of technological progress with a firm hand on the moral compass.

There is ample reason for hope. Around the world, ethicists, engineers, and religious leaders are increasingly joining forces to ensure AI is developed responsibly. Governments are beginning to draft laws for algorithmic fairness and transparency. Tech companies, under public pressure, are talking more about ethics than ever before. Catholic universities are developing programs in media ecology and Church communications. And countless individuals are learning to navigate their smartphones and AI assistants with more awareness. These are signs that we can shape the future of AI rather than passively suffer it.

As Christians, we approach the future with hope, not because of human efforts alone, but because we trust in a God who guides history. If we bring the best of our tradition—its wisdom, its emphasis on the dignity of each person, its commandment to love—into the conversation about AI, we act as salt and light in the digital world. The ethical challenges of AI are, in the end, reflections of age-old challenges about power, pride, justice, and charity. The tools are new, but the human drama is not. In every era, the Church has been called to discern the signs of the times; today one of those signs glows in neon circuits and code. By following Lonergan’s counsel to be attentive, intelligent, reasonable, responsible, and loving, we can ensure that even as we innovate, we do not lose our humanity or our soul.

The final measure of success will not be how intelligent our machines become, but how wisely and lovingly we harness that intelligence for the betterment of all. In the words of one analysis, we must “assert the primacy of the human person” in the development and use of technology, aiming for a world where technology truly serves humanity, not the other way around.[18] With hearts grounded in love and minds guided by truth, we can embrace the tools of AI without surrendering what makes us authentically human. That is the hope and the challenge before us—and by God’s grace, it is a challenge we can meet for the sake of the common good.


[1] Victoria Burton-Harris and Philip Mayor, “Wrongfully Arrested Because Face Recognition Can’t Tell Black People Apart,” ACLU, June 24, 2020.

[2] Bernard J. F. Lonergan, Method in Theology, Collected Works of Bernard Lonergan 14, ed. Robert M. Doran and John D. Dadosky (Toronto: University of Toronto Press, 2017), 22-23.

[3] Steven Umbrello, “Navigating AI with Lonergan’s Transcendental Precepts,” Evangelization & Culture Online, April 25, 2024.

[4] Ibid.

[5] Casey Moffitt and Linsey Maughan, “Bias in the Bubble: New Research Shows News Filter Algorithms Reinforce Political Biases,” Illinois Tech, November 1, 2021.

[6] Jeffrey Dastin, “Amazon Scraps Secret AI Recruiting Tool That Showed Bias against Women,” Reuters, October 10, 2018.

[7] NAACP, Artificial Intelligence in Predictive Policing Issue Brief.

[8] Steven Umbrello, “Navigating AI with Lonergan’s Transcendental Precepts,” op. cit.

[9] Ibid.

[10] Ibid.

[11] Ibid.

[12] NAACP, Artificial Intelligence in Predictive Policing Issue Brief, op. cit.

[13] Steven Umbrello, “Navigating AI with Lonergan’s Transcendental Precepts,” op. cit.

[14] Deborah Castellano Lubov, “Pope Francis Urges Ethical Use of Artificial Intelligence,” Vatican News, March 27, 2023.

[15] Ibid.

[16] Ibid.

[17] Ibid.

[18] Steven Umbrello, “Navigating AI with Lonergan’s Transcendental Precepts,” op. cit.

Featured Image: Bernard Lonergan lecturing, date unknown, author also unknown, presumed to be Public Domain, otherwise Fair Use.

Author

Taylor Black

Taylor Black leads strategic programs in the Office of the CTO at Microsoft, where he explores the frontiers of innovation and corporate entrepreneurship. With advanced degrees in philosophy and law, he fuses intellectual rigor with real-world practicality as an instructor in the UW Foster School of Business and as a deacon candidate in the Byzantine Catholic Eparchy of Phoenix.
