It seems as though the perspective that generative AI is an innocuous, morally neutral tool could decrease in popularity, with alarming articles like this recent New York Times piece raising questions that cannot be easily dismissed. As far as I’m concerned, this shift of perspective is a welcome one. The “AI revolution” never conjured up any kind of positive feelings for me. I have heard critics of my pessimism cite the likes of Tolkien, who “was very wise, in his way, but was also extreme, criticizing basic advancements like combustion engines or industrialization,” which is odd to me. Citing Tolkien’s dire predictions in support of unfettered, Babel-like technological pursuits is like scoring a goal against your own team and then celebrating. Time has vindicated, rather than debunked, Tolkien and his antipathy toward “the modern machine.” He is right, and we have impoverished ourselves in our enrichment. I stand with the hobbits.
Of course, I’m not the first person to make the observation that any new technology comes at a cost. We are not accustomed to thinking like this, in a techno-culture wherein all of reality is mediated through our technologies. But just because we aren’t aware of it doesn’t make it any less true: new technologies are always a trade-off. They come with the ability to say, “we now get to…” and “we no longer have to…” but the fine print that we collectively ignore always includes a “we no longer get to…” The classic example of this kind of technological trade-off is Plato’s Phaedrus, which explores the costs and benefits of introducing the written word to a society.[1] In this case, I think it’s obvious that the benefits outweighed the cost, but the point is that we aren’t even running such a calculus with our new technologies in general, and AI in particular.
It is not uncommon to hear someone cite the ground-breaking medical advancements that AI will give—and already has given—to the medical community as an automatic justification for the Wild-West situation we are currently living in. But this is grossly presumptuous. The medical benefits of AI are not the end of the debate but are rather one argument in its favor; it is a single plank in an argument for allowing AI in some form, and it does not even address the potential costs of AI or the question of how, if at all, it should be incorporated into society.
What are these costs of which I speak? Where to begin? Shall I count the ways I hate thee, AI? Let me limit my reflections to areas of immediate concern to myself.
In my world—the world of theological education, populated by pastors and professors and students—generative AI has been, and continues to be, an unmitigated disaster. The engine that this world runs on is ideas: whether in terms of formation or distribution, we run on thoughts. And not just any thoughts. Thoughts that wrestle with and dwell on the Good, the True, and the Beautiful. Some might crudely categorize this objection I am here approaching as the “plagiarism problem,” but that doesn’t quite say enough. Yes, of course, we are concerned about students and teachers and preachers passing off work that isn’t theirs as if it is. Doing this is dishonest, deceptive, lazy, and dishonoring to God, regardless of whether the work in question was stolen from another human or generated by a robot.
Some will no doubt push back at even this minimal point, insisting that these matters demand we conceive of a continuum, where some work “generated” by AI is alright, while other work isn’t. Should I have AI write my sermon entirely, down to the personal illustrations? Maybe not. But should I have AI check for typos and grammatical errors? Probably. So, if those are the extremes, what do we do with the grey in-between? Can I have AI turn a manuscript into bullet points or vice versa? Can I have AI change the tone or style of my manuscript to sound more or less relatable? More or less scholarly? More or less casual? These are the tough and important questions we must wrestle with!
Except, they aren’t. Not even close. Because this continuum assumes that the primary query needs to be how much we are allowed to lean on the technology rather than the nature of the technology itself in relation to the user. And questions surrounding that topic are far more important than the mere question of excess. To outsource what we should be doing ourselves to a machine is not merely an offense against the listener or reader, it is a disservice to the user. For example, if a pastor or teacher or student or author gave me a disclaimer that he had used AI to help generate such and such content, I would still have concerns. I would not feel lied to or deceived, of course, but the kind of person who leans on AI in this way is suspect as a thinker. Is that too harsh? I don’t think so.
This point is difficult to communicate in our age, since as a people driven by pragmatism and efficiency, we are used to thinking strictly in terms of effect. If AI can help me communicate the same truth, and do it better than me, why wouldn’t I use it? But this way of thinking doesn’t take into account the kind of creatures we are. We are formed by the formation and communication of ideas. This world of which I speak, in other words, the world of theological education (and, I would say, the way of contemplation more broadly), is not a world in which the ends justify any means. The means is where the magic happens. We are after the formation of the soul, and this does not occur simply by having ideas injected into brains like a computer program. The soul is formed through intellectual and moral virtuous habits. The whole self must be engaged in the enterprise of contemplation, and reliance on generative AI invites, if possible, even less involvement than our post-Enlightenment context has demanded in recent history (which is already too minimal, since it insists that one’s thought life can be abstracted entirely from one’s lived experience).[2]
And this point highlights the error of another argument in favor of generative AI. This is the argument that depicts AI as an army of research assistants. What’s wrong with this argument? Well, in the scenario wherein professors and authors work with research assistants, we are talking about a phenomenon of mutual human enrichment. The professor is enriched and formed by the human intellectual labors of his assistant, and the assistant is enriched and formed by the research he performs for his professor.
This dynamic can be malformed, of course, like when a research professor isn’t benefiting from his assistant in a humane way, and instead functions as a tyrant. Some published researchers are frauds, riding the coattails of unacknowledged students. However, this is the perversion of a mentor-mentee relationship that, in its natural state, enlivens and positively forms everyone involved. But something like the inverse is the “natural state” for AI-as-personal-researcher. The medium itself leans in the direction of stripping the user of the kinds of thought-work that should form him in his pursuit of the Good, the True, and the Beautiful. In other words, the natural condition of AI in the world of writing and research and thinking is to relieve the user of intellectual formation, rather than to stimulate it in human-on-human edification. AI is better, no doubt, at being efficient. But “efficiency” is only one criterion for determining the usefulness of an endeavor, and a rather thin one at that. In sum, the increasing use of AI is not humane. It is, in a very real sense, dehumanizing.
This came home to me in a tragically humorous way the other day when I watched two back-to-back ads: one advertising an AI-powered program that will turn your basic idea into a full-fledged book, the other advertising an AI-powered program that will read and summarize an entire book in a 10-minute-long audio summary. I couldn’t help but imagine these two products interacting with one another: a blur of technological koinonia with no human thread, save the initial prompt to write the book and the initial prompt to summarize said book. Machines talking to machines, with only the most minor passive involvement from mankind. And suddenly, the premise of Fahrenheit 451 doesn’t seem so ridiculous.
As such, the “AI is here; fighting is futile, we need to learn to make this work to our advantage” take is one I find altogether uninspiring. I would much rather lose a fight worth fighting than roll over and let the fight end before it begins. Widespread intellectual malpractice doesn’t cease to be just that—malpractice—because it is popular. As it turns out, this kind of “practical realism” about “making peace” with the AI revolution is what lies behind the conclusion, drawn by some, that we should get people acclimated to AI as early as possible so that they will know how to work with it. Such a suggestion suffers from the same fatal flaw as parents who give their small children (and pre-teens, and maybe even teens) smartphones in the name of “preparing them for the real world.” In both cases, the problem users will face is not—and will never be—the problem of ignorance about how to use the technology. The trick will never be figuring out how to get people acclimated to these technologies; the problem will always be figuring out how to get them to stop. Overuse is the persistent danger. Parents will not set their kids back a single minute, in terms of knowing how the technology works, by keeping them from getting access to smartphones for as long as possible. This is because smartphones—and generative AI platforms—are designed in such a way as to get users hooked as quickly as possible. They are “user friendly” (which has to be one of the starkest euphemisms of our day, though it is seldom recognized as such; these technologies are “user friendly” in the same way that barbed fishing hooks are “trout friendly”).
Now, keep in mind that none of what I’ve said so far has given any weight to the fever dreams of techno-idolaters (or the nightmares of those in dread of them). Such fantasies would include the annihilation of the human race, posthumanism, and the “singularity”—that moment where AI becomes sentient and enters into a truly “I-thou” relation with humans, ultimately leading beyond this relation to a monism where humanity is swallowed up into the machine. These things will not happen. For Christians, this is obvious because we have an eschatology. We know how the human story wraps up, in broad strokes, and it doesn’t wrap up with the plot of the Matrix movies being lived out.
But even apart from this stark fact, these delusions are predicated on a shallow and incoherent metaphysic, and a total ignorance of what makes man man, and what makes his rational intellect and will a rational intellect and will. The idea of the “singularity” is not simply a technological impossibility; it’s an ontological one. Lonely individuals who turn to AI “companions” are no different, in this regard, than the confused and deceived people caught up in transgenderism. Both noxious behaviors constitute ontological rebellion to tragic effect. Treating AI bots as rational beings and calling oneself the opposite gender each have this in common: they resent and resist nature in general, and human nature in particular, and are thus an exercise in self-delusion. They declare war on gravity whilst jumping off a building. So, no, I am no more afraid of the moment of “singularity” than I am convinced that “Elliot” Page is a man.
And besides, fretting about such hypothetical future scenarios seems to take for granted the wrong calculus for weighing the usefulness of new technologies. Namely, the survival calculus: if this technology will lead to human annihilation, we should resist it. But what if the stakes aren’t that high? Can I take for granted that these utopian (or dystopian) promises are a pipe dream, and still insist that widespread use of generative AI will be bad for society?
Now, lest I put my reader at too much ease, let me hasten to add that while I am not afraid of these doomsday predictions, I am fully convinced that much of the AI shenanigans we have been witnessing has a far more nefarious source. No, I am not about to make the crudest expression of the “demons are in the AI” argument, but if I’m honest, my own position is not that far from it. Does this suggestion contradict what I just said about AI lacking a rational intellect or will, in the truest sense? Not at all. I’m not prepared to say that AI are demons. But there’s nothing at all in my theology or philosophy to prevent me from concluding that those who commune with AI bots are not merely delusional. In addition to being confused and self-deceived, calling a dead and lifeless idol a living, breathing god, they are also thereby communing with demons.
The fact is, in any given phenomenon, we can extrapolate natural causes. And we might even be right to do so. But does that mean that these natural causes preclude supernatural ones as well? I can’t imagine why we would have to conclude that they do. This kind of causation is not zero-sum, after all, and the precise relationship between the seen and unseen realms is strange—beyond our current contemplations.
This is a good truth to remind ourselves of when we are reading Holy Scripture. We take it on faith, for example, that the demonic possessions we read about in the Gospels were supernatural encounters, despite the fact that modern people would analyze these episodes phenomenologically and attribute natural causes to them. Because Scripture calls these episodes “demonic possession,” we accept it, and so we should. But does this mean that the natural causes are incorrect? I’m not convinced that conclusion would necessarily follow. Again, supernatural causation and natural causation need not compete in every instance.
But many Christians today uncritically adopt a kind of chronological naturalism. They are not entirely naturalistic in their outlook. They read in their Bibles experiences described as “demonic possession,” and they accept this as a satisfactory explanation for why these people are behaving the way they do in the New Testament. But jump forward two thousand years to the same kind of behavior described in Holy Scripture, and the default assumption is, “This person is crazy” or “This person must be on drugs.” But why can’t they be all of the above? The basic assumption seems to be something like: these kinds of things happened in Bible times, but not today. This, however, is a bad assumption. We live in the same world as the biblical authors—the world didn’t somehow grow into a naturalistically governed one with time. Nature and supernature—the seen and unseen realm; the material and immaterial; the physical and the spiritual—these realities have always been more porous and mutually impactful than we tend to assume.
Consider this. If you could go back in time and observe Corinthian idolaters in the first century offering tribute to little stone images, what would you conclude? One option is to view their strange behavior as embarrassingly pitiful. You might say, “These people are so delusional—they think they are doing something meaningful, but that little idol is totally lifeless.” Or, on the other hand, you might say, “They are communing with demons.” As it turns out, Paul would have said—and, in fact, did say—both. He can say, simultaneously, that an idol is “nothing” (1 Cor 8:4), and yet that when one joins in idolatrous feast-gatherings, one is “participating with demons” (1 Cor 10:19-20). It’s not the raw material of the wood or stone that makes an idol a means of demonic communion—the same material has been used to build churches, for example. But idols can be mediums and channels for communion with real nefarious intelligences. What makes us think that isn’t or can’t be the case today? Having just returned from a brief visit to India, where actual physical idols are unabashedly worshipped today, I can confidently say that what Paul spoke of to the Corinthians still occurs. I am suggesting that what the Corinthian pagans did and what Hindus do—that is, commune with demons through their worship of dead and lifeless idols—many do today with AI.
Of course, we know that the earth and its fullness belong to God (Psalm 24), and in that sense, nothing in this world is intrinsically evil. This is as much true for digital realities as it is for physical ones. But that need not prevent us from suggesting that some corners of the earth are more susceptible to corruption than others. Generative AI—and in particular, AI chatbots—seems to be a uniquely open channel for direct demonic influence, similar to what we might say about psychedelics.
Which is all to say, I’m not at all convinced that reports of transcendent or spiritual or emotionally fulfilling experiences with AI should be met with a dismissive handwave. We can explain these experiences by pointing to their natural causes, sure, but does that preclude any kind of supernatural causation? Must they be reduced entirely to their natural causes? When people purport to have extra-terrestrial experiences related to UFOs, or communication with interdimensional beings through psychedelics, or the same through AI relationships with bots, we might be tempted to explain these experiences away by appealing to any number of natural causes. And these explanations may even be partially or completely correct at the material level. But this may not be our best way of dealing with the problem at hand. It might be better, and more robustly accurate, to say, “Sure, it’s possible that you did have these experiences. But when you say ‘interdimensional being,’ I think it’s pronounced ‘dee-mon.’”
[1] I think I got all this from Andy Crouch somewhere, but I can’t remember where.

[2] For more on the errors of this kind of intellectual abstractionism, see my article, “Staring at the Sun.”