What Will a Future with Androids Among Us Look Like?

What will it be like when we have robot assistants and companions, that we own, that are not human, and yet that activate all of our instinctive empathy toward humans? Shall we have to unlearn this empathy? Or will we account for that empathy by instinctively redefining humanity in terms of behavior without any sense of inner life? And what would that do to us?

We live in a wondrous time, in which artificial intelligence is increasingly and impressively a part of our daily lives. As an informed observer, I expect that contemporary techniques will eventually yield humanoid robots that—in professional interactions, casual conversations, and even shallow romantic relationships—will seem persuasively human. That is, even if they do not look quite like us, their movements, appearance, and conversation will evoke from us all the empathy that a three-year-old can bestow on a motionless toy bear and that adults habitually reserve for fellow humans. What we see now is just the beginning. Concerning a future of pervasive and persuasive robots, we must ask two questions: What is it that we will have made? And more importantly, through these artificial entities, what will we make ourselves become?

What Will We Have Made?

From among the many possible inquiries as to “what” we will have made, I should like to ask: How would it work? And what would it be? The former is easy; the latter, a little more difficult—but perhaps not so difficult as we might think.

Symbolic AI and the Quest for Artificial Reasoning

So first, how would it work? What computer scientists have called “artificial intelligence” has always reflected something of how their times have thought about human beings. Rooted in Thomas Hobbes, the dominant views of the 1950s and ’60s equated human reasoning with the capacity to identify and work with logical relations, such that a properly programmed computer, accomplishing this task, would in fact be “thinking.” This was the age of “symbolic AI,”[1] founded on the hypothesis that intelligence was rooted in the logical manipulation of symbolically represented information. Thus, for instance, in AI efforts that focused on language, to diagram a sentence and to construct a plausible response could be deemed equivalent to having understood that sentence (although not all agreed). Symbolic AI’s greatest achievement was in “expert systems”—great structures of linked rules that, when queried, would generate a list of possible answers, perhaps posing further questions to the user in order to prune the tree of possible resolutions to a problem. The most thrilling public demonstration of the symbolic, search-driven approach was IBM’s Deep Blue chess computer, which in 1997 defeated reigning world champion Garry Kasparov in a six-game match.
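To make the mechanism a little more concrete, here is a minimal sketch, in Python, of the kind of rule chaining at the heart of an expert system. The rules, facts, and names below are invented for illustration; a real system would have tens of thousands of hand-written rules, but the basic movement is the same.

```python
# Invented rules for illustration: each rule pairs a set of premises with one conclusion.
rules = [
    ({"raining", "must_stay_dry"}, "use_umbrella"),
    ({"use_umbrella"}, "hands_occupied"),
    ({"snowing"}, "wear_boots"),
]

def forward_chain(facts, rules):
    """Fire any rule whose premises are all already known facts; repeat until nothing new follows."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"raining", "must_stay_dry"}, rules))
# -> {'raining', 'must_stay_dry', 'use_umbrella', 'hands_occupied'}
```

However many rules one adds, the procedure never changes: known facts trigger rules, rules add new facts, and the cycle repeats until the system has derived all it can.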

With time, however, the symbolic AI paradigm came up against certain limits, both practical and theoretical. Far from extending toward a generalized capacity to deal with all knowledge, expert systems could break down in situations of great subtlety, where the interactions of tens of thousands of rules yielded unexpected and incorrect behaviors. Early researchers had intentionally and successfully implemented Aristotle’s theory of syllogistic reasoning and action (e.g. I wish to be dry in the rain; an umbrella will keep me dry in the rain; therefore I will use my umbrella in the rain). However, symbolic methods could not very well represent knowledge that was less precisely defined, such as, for instance, one’s sense for what is appropriate in a social situation, one’s route through a wood rather than through a hospital, or, yes, the next move in a chess game. Language, especially, turned out to be far harder to interpret or to produce than had been hoped. In the words of IBM’s Murray Campbell, human intelligence “is very pattern recognition-based and intuition-based,” unlike “search intensive” methods that may check “billions of possibilities.”

Most tellingly, symbolic techniques were insufficient for fielding embodied agents in the real world—i.e. robots. Humans can move easily from sensation to conceptual thought and thence to action. This wider field of intelligent behavior has been the subject of deep reflection from antiquity to today. Thus, Aristotle writes not only of syllogisms but also of the equally fundamental activity of “abstraction.”[2] In abstraction, something apprehended through the senses (e.g. this round, taut-skinned, tart-tasting object) comes to be understood consciously as an instance of some more general category (e.g. apple)—that is, from sensation one comes to understand some thing. Symbolic methods proved clumsy and brittle when it came to distinguishing and identifying objects captured on camera or interpreting human speech recorded through a microphone—tasks that were once expected to be easy in comparison to supposedly higher-level activities such as playing chess.

Non-symbolic AI and Neural Networks

These problems, along with immense advances in computing power and machine-learning theory, have brought contemporary prominence to so-called “non-symbolic AI,” often implemented by artificial neural networks. An artificial neural network is a computer program that mathematically simulates an interconnected set of idealized brain neurons. As an AI technique, then, it begins less from a notion of what human thought is than from an analogy with its biological aspects. That is, the goal of such networks is not human-like thinking but rather neuron-like data-processing.

Artificial neural networks receive a pattern of information through input nodes, which are connected with varying strengths to layer upon layer of further nodes. At each particular node, when the weighted sum of the incoming signals exceeds some pre-set threshold, that node will fire and its own signal will be transmitted variously to nodes on a further layer, and so on. If you put in a pattern at the beginning, it is transformed as its elements are recombined and processed until something else comes out on the final layer of the network. A network can be “trained” to produce the desired behavior by adjusting the strengths of its connections, thus adjusting the contribution made by each node to each recombination and, in due course, to the final result. A piano offers a poor analogy but a useful image. If you have ever shouted into a piano with its sustaining pedal held down, then you have heard its tuned strings resonate with the different frequencies of your shout. One receives back a sort of echo, not of one’s words but of the tones of one’s voice. Similarly, as a neural network is tuned (i.e. as its connection strengths are adjusted), it begins to resonate with the entangled relations that are implicit in our world, including relations that cannot easily be discerned or logically represented by human investigators. But by its training, the network does not just echo, but transforms its input in order to make explicit the relations that are of interest to the trainer.
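For readers who want to see the idea in miniature, the following is a toy sketch in Python (using the NumPy library). The weights and thresholds are invented numbers, and real networks use smoother activation functions and millions of connections, but the basic movement described above, weighted sums passed layer to layer and compared to thresholds, is the same; "training" would consist of nudging these numbers until the outputs match the behavior the trainer wants.

```python
import numpy as np

def layer(inputs, weights, thresholds):
    """Each row of `weights` feeds one node; a node fires (outputs 1) if its weighted sum exceeds its threshold."""
    return (weights @ inputs > thresholds).astype(float)

x = np.array([1.0, 0.0, 1.0])              # an input pattern

w_hidden = np.array([[0.6, -0.2, 0.8],     # invented connection strengths into two hidden nodes
                     [-0.5, 0.9, 0.3]])
t_hidden = np.array([0.5, 0.0])            # invented firing thresholds

w_out = np.array([[1.0, -1.0]])            # connections into a single output node
t_out = np.array([0.2])

hidden = layer(x, w_hidden, t_hidden)      # the pattern is transformed by the first layer
output = layer(hidden, w_out, t_out)       # and recombined again at the output layer
print(hidden, output)
```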

Neural networks are part of the AI that underlies self-driving cars, programs that beat world champions in the game of Go, the ever-useful Google Translate, your webmail’s autocomplete function, and of course the tempting recommendations in your Spotify, Pandora, and Netflix feeds. Such problems as bedeviled the old symbolic AI can be solved handily by a neural network because, in a manner of speaking, the network is receptive to, and imprinted by, the structure of the world as presented to it. We might say that it develops a point of view: Not a conscious experience, but something like the classical notion of the mind’s adequation to a thing[3]—although here that adequation is always constrained by the task for which the AI is trained.

What Will They Be? The Behaviorist Turn

Eventually, I do believe that powerful neural networks will make possible incredibly persuasive walking, talking androids. But what will they be?

First, I do not believe that they will have any conscious experience of the world or of themselves. They will not be subjects with a point of view. It is not just that they will not have immaterial souls. I do not believe that gorillas have immaterial souls, but—whatever Descartes may say—I see no reason to doubt that they have a conscious experience. Not only do they act in ways similar to how conscious humans act, but they also do so by means of a brain, nervous system, and embodied existence that, while less complex than our own, is nonetheless of a similar ilk.

Artificial neural networks, by contrast, are only simulations of physical biological entities. There are no physical connections, only a computer program of ones and zeros that represents the equations of the network. I could run an artificial neural network with a pencil in a notebook, even if only with agonizing slowness. But these calculations would not be conscious any more than a student’s physics homework has gravity or a flight simulator flies. In Latin medieval philosophy, the word intellectus or “understanding” denotes a mind’s subjective and intuitive grasp of some reality, of the thing-as-the-thing-that-it-is—but is the functioning of an expert system, the diagramming of a sentence, or the bit-by-bit calculation of a neural network’s activation values truly a grasp of the relevant reality?

There is an extensive literature discussing these and other questions in more precise terms. They are hard to adjudicate in part because, although we learn more every day about how human mentality and experience change in response to manipulations of brain activity, we do not know why there is an experience in the first place. That is, there is no scientific explanation for how neuronal activity produces conscious experience in animals and humans. Therefore, there is also no scientific or philosophical consensus that conscious experience is somehow exclusive to brains, or to neurons, or even to carbon (although I strongly suspect that it is). On these grounds, then, it remains difficult to define crisply why we ought to expect consciousness from vertebrates with brains but not from the complex chemical interactions and mathematical calculations that take place in a field of wheat, a thunderstorm, or the universe itself.

And yet there is another reason why we wonder about robots but not about thunderheads: We implicitly raise our expectations based on what we see of them from the outside. And this is the second reason why I would not call a robot a person: We can see the robot as a person only if we reduce personhood to behavior that we interpret as appropriate to a person.

That behavior is not the totality of human personhood or even necessarily of human intelligence was acknowledged by Alan Turing, originator of the famed Turing Test or, as he called it, the “imitation game.”[4] The imitation game sets a goal: a computer program that can converse in text such that we cannot distinguish the program from a human interlocutor. Turing’s Test is indifferent to the mechanism by which the program manages its feat—whether symbolic or non-symbolic. This is not really a test, then, of the programmed computer’s nature but of its accomplishment. Nevertheless, for Turing, such an accomplishment would warrant giving the computer the benefit of the doubt.

However, if we go further to treat this test as a definition, then our account of intelligence would edge toward behaviorism. That is, we would define intelligence without reference to any inner life but only as a tendency to exhibit certain observable behaviors under certain conditions. Like the Turing Test, behaviorism remains agnostic about the realities underlying these behaviors. And thus “intelligence” would be redefined as a capacity or tendency for intelligible conversation—especially the kind of conversation that I am expecting to have.[5] Indeed, Arthur C. Clarke merrily invoked Turing to “sidestep” the question of computer “thought,” rendering those who opposed the claim mere “splitter[s] of nonexistent hairs.”

This redefinition has become a basic (and incredibly useful) assumption of contemporary work in intelligent robotics. Computer scientists Stuart Russell and Peter Norvig (the latter a director of research at Google) define the “rational agent” as one that “acts so as to achieve the best outcome or, when there is uncertainty, the best expected outcome.” Applied to robots, rationality refers not to how some behavior comes about (much less to any interior life) but simply to the success of the behavior as interpreted by us humans. Such robots may employ symbolic methods, syllogistic inference, statistical analysis, neural networks, or preferably some appropriate combination of all of these and more—as do today’s self-driving vehicles. Here, then, with historian Yuval Noah Harari, we may (re)define “intelligence” rather thinly as “the ability to solve problems.”
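A minimal sketch may show how thin this standard is in practice. The code below (with invented actions, outcomes, probabilities, and utilities) implements the bare "best expected outcome" criterion that Russell and Norvig describe: the agent simply weighs each action by the estimated value and likelihood of its outcomes and picks the winner. Nothing about an inner life appears anywhere in the loop.

```python
def expected_utility(action, outcome_probs, utilities):
    """Probability-weighted value of an action's possible outcomes."""
    return sum(p * utilities[outcome] for outcome, p in outcome_probs[action].items())

# Invented numbers for illustration only.
utilities = {"dry": 10, "wet": -5, "hands_free": 2}

outcome_probs = {
    "take_umbrella": {"dry": 0.95, "wet": 0.05},
    "go_without":    {"dry": 0.40, "wet": 0.55, "hands_free": 0.05},
}

best = max(outcome_probs, key=lambda a: expected_utility(a, outcome_probs, utilities))
print(best)  # the "rational" choice, judged purely by expected outcome
```

Whether the numbers come from hand-written rules, statistical analysis, or a trained neural network makes no difference to the standard itself: rationality here is measured entirely by the outcome, as interpreted by us.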

This is entirely appropriate to talking about “intelligent” robots, but what happens when we turn around and begin to think of humans in this manner? In fact, a similar behaviorism is alive and well in some contemporary philosophies of mind. The “intentional systems theory” of Daniel Dennett proposes that I will tend to attribute subjective beliefs and desires to a thing when the best way in which I can reliably predict that thing’s behavior is to attribute to it the intentionality that I attribute to myself. This is why I cannot help but ascribe intentionality to other human beings.[6] This position, which somewhat echoes Turing’s “imitation game,” seems rather uncontroversial—it is also why we jump at shadows and feel empathy for robots—but Dennett wants to go farther, to say that, when we make this attribution to humans, behavior prediction is all we really mean by it in the first place. That is, our language about human subjectivity is not actually about an inner life; it really is just about the sort of outer behavior that we expect. The “self,” the intentional subject acting from beliefs and desires, is “an abstraction [that] one uses as part of a theoretical apparatus to understand, and predict, and make sense of, the behavior of some very complicated things.” The inner workings of that intentional subject, including her consciousness, in the end change our meaning not at all. Or as Dennett succinctly allows: “necessarily, if two organisms are behaviorally exactly alike, they are psychologically exactly alike.” Thus Dennett effectively makes a supremely rigorous Turing test into a definition of our language about intentionality. Appropriately-behaving robots could be called intentional subjects, with a meaning identical to that with which we apply such terms to human beings.

And yet, intuitively, can this really be what I mean when I say that I believe or desire or know this or that? For I am describing my own inner life and not just a schema by which to classify my own outward behavior. So too, when I say that I am married to a person who loves me, it really and truly matters to me what she thinks of me, and not just how she behaves toward me. Her subjective experience of me matters. It matters that my wife gives herself to our life together, that the life we share encompasses our interiority. This would be impossible with a robot. And so I cannot see robots as intentional subjects in the fullest sense of that word.

What Will We Make Ourselves Become?

Very well. What of ourselves in a world of robotic caregivers and companions? I see no reason why such a world will not eventually be ours. A character in Robert Heinlein’s novel Time Enough for Love quips: “Progress doesn’t come from early risers—progress is made by lazy men trying to find easier ways to do things.” We replace human activities with technology when those activities are hard. Current discussions surrounding smartphone and tablet use by children exist because handing a five-year-old a tablet is more convenient than setting aside time to talk to that five-year-old—especially when the baby is crying or a project is due. How much easier will it be to give the bulk of child-rearing over to robot caregivers who will seem to provide everything that we would provide, but without our foibles?

Consumers of Behavior?

But to make robots interchangeable with humans could do something very dangerous to us: it could train us to become consumers of others. What I mean is this: We will always treat androids as tools because we will (rightly) see their behaviors as products for our consumption rather than as expressions of a personal life with self-possession. But how might this affect our treatment of people in general? Consider the forces shaping the robots’ behavior. The robots among us will be manufactured because they will sell; and they will sell because they will act in the ways that consumers want a purchased assistant or companion to act. Recently, a man married a sex robot. It does not walk; it barely talks; but it does simulate certain aspects of intercourse. The sex robots of tomorrow will be domestic companions, able to read and rock climb with us as well as join in other vigorous activities. They will behave as we would hope they would behave when confronted with our emotions. They will not be seen as sex toys but as boyfriends and girlfriends, friends and spouses who will push us to new heights—heights that we will have selected from a list of options for self-improvement.

However, precisely because their behavioral simulacrum of owner-determined desirability will never fail, these companions will never force us to expand our own view of how a person might be, as real human relationships and human friendships can. They will not vex us or force us to develop our compassion, to re-evaluate who we are, nor even to think beyond how we want them to make us think. You would not buy an app to turn your domestic companion into a bedridden invalid who requires your heroic self-gift even when you feel disinclined to give it.

There is another difficulty: Even though we will treat our never-challenging android companions as consumer products, we will not instinctively differentiate between androids and humans. Dennett is right in this much: we will not be able to avoid feeling that these companions are intentional subjects as we are. And so, acting as consumers of agents whom we cannot but feel are persons, we will learn to be consumers of behavior in general—including the behavior of other human beings.

What, then, when other humans do not conform to our expectations and desires? Is it possible that we will no longer see this as a glimpse of a wider array of humanity? That we will not struggle toward a charitable response? Perhaps instead, we may come to think of these others as simply faulty human beings, viewing them with the same sort of idle dissatisfaction that we would a robot that did not deliver the set of behaviors and reactions that we wanted to consume.

Depersonalizers of the Body?

But if we do not wish to become consumers of behavior and yet are habituated to see behavers as “at our disposal,” then might we have to ignore the bodily presence of other humans in order to avoid thinking of them as we think of the androids?

Human beings are bodily, and this affects the mode of their presence to one another. That is, a body in the room cannot be willed out of existence, and when that body is a person rather than a chair, it demands some response. I have to adopt a stance toward that presence: I can receive the other person; I can (attempt to) ignore him or her; but I cannot control the fact of his or her presence. However, in order to see our android companions’ behavior as a consumable, we must see the android as a behavior-delivery machine. That is, we shall have to unlearn our instinctive tendency to count its physicality as presence if we are comfortably to take possession of it as property. But the stance that is appropriate toward the physicality of robots will come at a severe cost to our interactions with actual human beings. If the physicality of android consumables teaches us to treat all humanoid physical presence as a commodity, then we will be trained to be slaveholders.

What will happen when we are no longer able to receive another physically without simultaneously taking possession of that person? In Isaac Asimov’s book The Robots of Dawn, video telecommunication has rendered nakedness of no consequence in remote viewing, but has rendered unbearable true physical co-location, even clothed. If the wide use of androids may incline us to see the body as an instrument of delivering commodified behavior, then could it go so far that the physical presence of another human would become not unbearable but rather something that we would have to ignore in our desperate bid to resist commodifying that person?

We may learn, defensively, to consider another person’s physicality as impersonal. That is, in order to maintain the difference between another human person and a robotic entity that we own, we may learn to treat other humans’ physicality simply as someone else’s inaccessible property, something to be held at a distance, rather than a personal presence to be respected and received. Perhaps a society saturated by androids will reinforce what the sexual revolution and widespread pornography have cultivated—what Wendell Berry describes as the compensatory trepidation of “sexual self-consciousness, uncertainty, and fear” between men and women, where even “eye contact, once the very signature of our humanity, has become a danger.”

The Contemporary Scene

Is the future inevitably to be so bleak as this? I should hope not. However, our present offers both hints of such a future and indications that it need not be so.

Long before persuasive androids, mundane artificial intelligences will continue our training as consumers of the world. In the smart houses of tomorrow, my whole world and experience will obey the principle that my environment and companions (speakers, doorbells, kitchens, and phones) will deliver what I expect (or what I am trained to expect) to pay for. In such a world of easy and confirmed expectations, will I forget that my view of myself and of others is not the horizon of the possible or the good? And what of my own horizon? Amazon’s suggestions are based on my own browsing and purchase history, correlated by AI with the purchases of others whose histories are similar to mine. This does not jostle us beyond the groove into which we have settled; indeed, it smooths and trains us to fit into a groove that an aggregation of co-consumers makes for us. Education into human community is a similar process, but market-driven stats are a poor educator and a poorer community.

We set out to develop an artificial intelligence that could rise to human levels; the risk is that, as AI rises in its abilities, human subjectivity—increasingly integrated with these AI systems—will begin to be flattened, diluted to the level of what those systems are capable of representing, or of what is rewarded in them by consumers. This is too narrow a frame of reference—but it is not a frame of reference to which we must allow ourselves to be limited. Without at all needing to sound curmudgeonly, we might echo advice that is already becoming common: recommend that children play outside, that smart devices be limited within the home, that we read up on the history of the music chosen for us now and then, that we force ourselves to learn how to cook something. When self-driving vehicles arrive, we shall be faced with the same choices that we have on an airplane: we shall have to actually read a book and not simply browse Facebook or watch Netflix. And, above all, when the androids come for us, we shall also have to foster empathy for them and not become inured to them—not as a statement about their humanity but as an exercise of our own, even if one that is inevitably misplaced. And let us ask these companions to remind us to cultivate friendships with real humans. For when we live in a world that does not adapt itself to suit our expectations, we are challenged. When we meet people whose responses are not customized and tuned to us, then we grow. Character and virtue advance by living in human community. And therefore, we shall have to foster in our culture a sense of striving for these higher things, for this fuller humanity, both with and without our artificial companions. And we must foster this with urgency.

The word “artifice” means handiwork, work of skill. Artificial Intelligence is a wondrous work. And yet, if we allow that artifice to become our reality, to form our lives and our children, then we will become artifices ourselves, handiwork of our handiworks.


[1] For the sake of clarity, I silently pass over the distinctions that writers such as Russell and Norvig draw between AI as human action (e.g. the Turing Test in the 1950s), AI as human-like thought (e.g. Newell and Simon’s early work with symbolic representation in the 1960s, leading to the field of cognitive modeling), AI as rational deliberation (e.g. logicism and expert systems in the 1980s), and AI as rational agency (e.g. intelligent robots). However, when intelligent robots attain to the persuasive simulation of humanity, then we shall have re-converged with the mind-agnostic standard of human-like behavior, and whether or not the interior workings are human-like or even “rational” in any sense will no longer matter, so long as the action be interpretable as such (or at least as desirable) by human consumers. That situation is the subject of this essay. For introductory discussion of the distinctions between these sorts of AI, see Stuart Russell and Peter Norvig, “Introduction,” in Artificial Intelligence: A Modern Approach, 3rd ed. (Upper Saddle River: Pearson, 2010), 1–33.

[2] See Aristotle, An. III.4; Metaph. I.1; Phys I.1.

[3] E.g. Thomas Aquinas, Summa theologiae 1.16.3; or De Veritate 1.1. One’s apprehension of the world is not just a symbolic representation of an account of it, but is a world-conformed habit of mind from which such accounts and their representations are generated. One’s capacity for understanding is shaped by one’s experience and one’s memory, and goes with one in one’s every experience.

[4] A. M. Turing, “Computing Machinery and Intelligence,” Mind, New Series 59, no. 236 (1950): 433–60. Turing writes that, faced with the behavioral accomplishment, “the original question, ‘Can machines think?’ . . . [becomes] too meaningless to deserve discussion;” Turing, 442. As for consciousness or point of view being part of what humans are up to when they think, Turing writes that such questions ought to be no barrier to saying that a machine thinks, simply because the demand that something be proven conscious is not a demand that we impose on human interlocutors, lest we become solipsistic about it. Therefore, he carefully concludes, “I do not wish to give the impression that I think there is no mystery about consciousness. . . . But I do not think these mysteries necessarily need to be solved before we can answer the question with which we are concerned in this paper;” thus Turing, 447. What he does not say, and what I consider rather important, is that common human discourse concerning “thinking” or “intelligence” has heretofore involved an implicit reference to a conscious, subjective dimension to that thought and has not defined “thought” merely in terms of achievement.

[5] This is how behaviorism, whether applied to human beings or to robots, is also open to a creeping egocentrism. Because it relies on me being persuaded by this conversation, it defines intelligence in terms of my own expectations.

[6] It is also why children ascribe intentionality to unfamiliar natural phenomena and, Dennett argues elsewhere, why humans came to believe in God, by attributing intentionality to the flow of natural events. See Daniel C. Dennett, Breaking the Spell: Religion as a Natural Phenomenon, Reprint edition (New York, NY: Penguin Books, 2007), 118–20.

Featured Image: Chosovi, Igor Mitoraj statue in Valencia, 2006; Source: Wikimedia Commons, CC BY-SA 2.5.

Author

Jordan Wales

Jordan Wales is an Assistant Professor of Theology at Hillsdale College. His research focuses on early Christian understandings of grace and the vision of God, as well as contemporary questions relating to AI. He received his M.T.S. and Ph.D. in Theology from the University of Notre Dame after earning a Diploma in Theology from Oxford and a M.Sc. in Cognitive Science and Natural Language from the University of Edinburgh.
