
Going Peopleless Underestimates the Unique Superiority of Human Intelligence, Part 1

Gregory E. Reynolds

Where is the wisdom we have lost in knowledge?
Where is the knowledge we have lost in information?
The cycles of Heaven in twenty centuries
Bring us farther from God and nearer to the Dust.
(T. S. Eliot[1])

I praise you, for I am fearfully and wonderfully made. Wonderful are your works; my soul knows it very well. (Psalm 139:14)

Your hands have made and fashioned me; give me understanding that I may learn your commandments. (Psalm 119:73)

As you do not know the way the spirit comes to the bones in the womb of a woman with child, so you do not know the work of God who makes everything. (Ecclesiastes 11:5)

Introduction

Last year was the twenty-fifth anniversary of a very popular movie that I never saw when it came out in 1999—The Matrix.[2] So, recently I decided that, given frequent references to the movie in literature, it was significant enough to watch, which I did in June 2024. I realized that I took the famous red pill in 1990 when I began to study Marshall McLuhan and Neil Postman and the discipline that they initiated—media ecology. In The Matrix the red pill enabled a person to experience reality by escaping the matrix, which was an illusory, deceptive reality created by an evil cyber-intelligence. The red pill of media ecology revealed to me the electric matrix into which I was born—the electronic environment. It was an eye-opening experience.[3] Like Plato in his allegory of the cave, I made it my mission to get the cave dwellers out into the light of day to understand the pervasive and subtle influences of the electronic environment on human life and culture. McLuhan referred to the cave dwellers as “technological idiots.” The term “idiot” is only apparently uncharitable. The original Greek word (ἰδιώτης, idiōtēs, 1 Cor. 14:24 “unlearned,” 2 Cor. 11:6 “untrained in speech”) indicated ignorance of a particular language and thus its culture. The point is that, as a culture, we are largely ignorant of what we are doing with media, or more precisely, what the media are doing to us. That too was McLuhan’s point—technological ignorance—something that needs to be overcome.[4]

Zion, in The Matrix, is the place of refuge from the deceptive matrix to which some humans escape after the machines enslave the rest of humanity and harvest their bioelectric power. Zion is the place of exile, an underground refuge, for free humans. It reminds me of the church; we are a people in exile awaiting the return of our Savior, the ultimate escape from the matrix of this present evil age. In Christ’s church we become fully human as we are transformed, imitating the Second Adam, extricated from the killing grasp of the matrix of this world.  

I believe that STEM and the humanities should have a symbiotic relationship. A critical analysis of the electronic environment, along with all the inventions (extensions) of man, must be embedded in the humanities, which include Christian theology, as well as in technological understanding. I intend to bring the discipline of media ecology (ME) to bear on a critical analysis of artificial intelligence (AI). Artificial intelligence is the capacity of computers to exhibit or simulate intelligent human behavior, especially by applying machine learning techniques to large collections of data.[5] I will expand on this below.

ME was conceived in the intellectual environment of the liberal arts, reflected in the Judeo-Christian sensibilities of its founders, such as Marshall McLuhan (Roman Catholic), Neil Postman (liberal Jew), Jacques Ellul (Neo-Orthodox Protestant), and Walter Ong (Jesuit). I am enlisting Christian anthropology and the biblical idea of idolatry as critical tools to bring to bear on my consideration of this modern (not actually new) technology. AI is inextricably connected with the entire digital revolution. But the most important questions about it are not technical but anthropological.

The very name “artificial intelligence” reduces human intelligence to something like information retrieval and organization. Commercial answering services regularly greet customers with, “I am your virtual assistant. I can understand complete sentences,” that is, until I ask a question that did not appear in the website FAQs; then suddenly it sends me off to a human agent. But there is no “I” behind these words; this is actually a lie that creates the illusion that a computer program is human.

Recently my wife and I were on a ski trip in the White Mountains of New Hampshire. We stayed at the Woodstock Inn and Brewery for three nights. As winter would usually have it, a large snowstorm was predicted for our day to travel to Bridgton, Maine, to ski with family. Google Maps had recommended traveling over the Kancamagus Highway, a route that was closed in winter until 1968. There was time to take that route in the morning to avoid the storm. I knew a slightly longer route that was well traveled at all times of year and did not rise to as high an altitude as the Kancamagus Pass (2,855'); it rises only to about nineteen hundred feet. I figured that if we could travel ahead of the storm, the Kancamagus Highway would be fine. When I asked a local, she advised me never to take the Kank in the winter due to severe frost heaves, which make for a miserable drive. Google has never driven that road in winter and usually chooses only the fastest route—efficiency is one of technology’s chief gods. This also reminded me of the reality of human intelligence functioning in an embodied existence.

My argument in this article is that human intelligence (HI) can never be replaced by computer systems for three reasons: (1) computer data and their algorithms are created by human intelligence; (2) human intelligence is embodied intelligence that requires physical presence in the world of ideas, matter, and people; and (3) human intelligence involves consciousness, cognition, and conscience, none of which can be reduced to material or digital realities. Humans are creatures of God before they are intelligent thinkers. This creatureliness involves more than reasoning power.

Artificial Intelligence: A Brief History

Discussion of AI has broken like a tidal wave over all media outlets as if it were something new. But it was conceived almost seventy-five years ago with Alan Turing’s now famous paper “Computing Machinery and Intelligence.”[6] Computers at the time needed a great deal of development in order to attempt to answer Turing’s question about computing, “Can machines think?”[7] Turing anticipated the development of computing in the direction of simulating human intelligence, but he simply posited a test to compare the two.[8] Large amounts of computer memory, especially, needed to be developed. Five years later the first computer program designed to mimic the problem-solving skill of a human, Logic Theorist, was created by Allen Newell, Cliff Shaw, and Herbert Simon. A year later, in 1956, a historic conference attempted to inspire a collaborative effort among top researchers to develop AI. The Dartmouth Summer Research Project on Artificial Intelligence resulted in a scientific consensus that AI would be achievable. This was where the term “artificial intelligence” was coined. The conference was the catalyst for the next two decades of AI research.[9]

Computer scientist Joseph Weizenbaum (1923–2008), creator of ELIZA (named after the Pygmalion character), an early conversational AI program, was surprised to see “the enormously exaggerated attributions an even well-educated audience is capable of making, even strives to make, to a technology it does not understand.”[10] In 1970 cognitive and computer scientist Marvin Minsky (1927–2016) told Life magazine that in less than a decade there would be a machine that equaled the intelligence of an average person.[11] Such was the optimism of the philosophical materialist, who also overestimated the progress of computer technology.

In the 1980s the concept of “deep learning” was promoted by John Hopfield and David Rumelhart, who developed “techniques which allowed computers to learn using experience.”[12] In 1997 reigning world chess champion Garry Kasparov was defeated by IBM’s Deep Blue computer program. In the same year Dragon Systems developed speech recognition software. Finally, the growth of computer memory and processing power, which according to Moore’s Law roughly doubles every two years, along with the availability of huge amounts of data from various sources, allowed for great advancement in the field of AI.[13] The author Rockwell Anyoha, in his 2017 “History of Artificial Intelligence,” accurately predicted that “AI language is looking like the next big thing. In fact, it’s already underway.”[14] On November 30, 2022, OpenAI’s ChatGPT became publicly available, and the conversation about its benefits and liabilities began.

In the spring of 2023, Christopher Mims, technology columnist for The Wall Street Journal, observed,

What’s unique about this moment is that new systems like text-generating AIs, such as ChatGPT . . . are the first consumer applications of AI. . . . What characterizes this time is that computers—rather than humans—are now building the models that machines use to accomplish a task.[15]

In 2024 Cal Newport, professor of computer science at Georgetown University, observed, “If 2023 was the year when we learned that language models could do more than simply mix and match existing text, then this year might be when we learn that the power of linguistic AI is nonetheless still limited.”[16] In the summer of 2024, researcher Dean Ball observed a new stage in AI development,

Today’s language models operate in a broadly similar way, except that they predict the next word rather than the next character. (Actually, they predict a sub-word linguistic unit called a “token,” but “word” suffices for our purposes.) The basic theory behind scaling language models further — and spending hundreds of millions, even billions, of dollars to do so — was that, with more data and larger neural networks [neuron-like adjustable connections between computational units], models would learn increasingly sophisticated heuristics and patterns that mirror human intelligence.[17]

Before ChatGPT, most language models truly were next-word predictors. To prompt those models, one needed to give them a starting sentence and ask them to finish it: “Once upon a time, a brave hero . . .” These earlier models could be fine-tuned to make them more conversational, but they had a tendency to exhibit toxic behavior or gradually veer off into mirroring the tone of a Reddit commenter rather than a helpful AI assistant. What made ChatGPT a breakthrough consumer technology was a new step in the model’s training process: reinforcement learning from human feedback (RLHF).[18]
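To make the next-word idea in Ball’s description concrete, here is a minimal sketch in Python of the prediction loop: look at the words so far, pick the most probable continuation, append it, and repeat. The tiny probability table is invented purely for illustration; it is not taken from any real model.

# A toy next-word predictor (hypothetical probabilities, not real model data).
toy_model = {
    ("a", "brave"): {"hero": 0.7, "knight": 0.2, "mouse": 0.1},
    ("brave", "hero"): {"set": 0.6, "slept": 0.4},
    ("hero", "set"): {"out": 0.9, "sail": 0.1},
    ("set", "out"): {"to": 0.8, "alone": 0.2},
}

def next_word(words):
    """Return the most probable next word given the last two words."""
    options = toy_model.get(tuple(words[-2:]), {})
    return max(options, key=options.get) if options else None

words = "Once upon a time a brave".split()   # the prompt
for _ in range(4):                           # predict, append, repeat
    word = next_word(words)
    if word is None:
        break
    words.append(word)

print(" ".join(words))   # "Once upon a time a brave hero set out to"

Real systems differ in scale and detail: they work on tokens rather than whole words, they estimate the probabilities with neural networks containing billions of learned values, and, as Ball notes, they are further shaped by reinforcement learning from human feedback. But the generate-one-unit-at-a-time loop has the same shape.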

The recent surge in AI development has led to much speculation about the dystopian and utopian possibilities of AI. Both make the materialist mistake of underestimating human intelligence.[19]

What Is Artificial Intelligence?

Artificial intelligence (AI) is a general category referring to the larger field of computerized simulation of human intelligence. Artificial general intelligence (AGI) aims at, but has not yet achieved, human-level general intelligence and possibly beyond, including reasoning, thinking abstractly, learning from experience, and adapting to unfamiliar situations, much like a human. Artificial special intelligence (ASI) is narrow AI applied to specific tasks, like dimming headlights for oncoming cars or powering Siri and Alexa.

Generative Artificial Intelligence (Generative AI) is the form of AI we are especially concerned with, since it presently exists and through research and development is continually improving. It “refers to a category of AI models designed to create new content, such as text, images, audio, video, and more. Unlike traditional AI models that primarily analyze or classify data, generative AI can produce original outputs based on the patterns it has learned from training data.” It “mimics human creativity.”[20]

The GPT in ChatGPT stands for Generative Pre-Trained Transformer, which is a commercial type of Generative AI. It is a word pattern recognition machine that is capable of “learning” from human input. Jonathan Barlow, associate director of the Data Science Program and an assistant teaching professor at Mississippi State University, defines it this way: “Generative AI is a species of AI designed to produce new content indistinguishable from human output.”[21]

Large language models (LLMs) are the “intelligence” behind AI programs. They are focused on text generation and power a wide range of applications, including chatbots and virtual assistants such as ChatGPT, Google Bard, and other conversational agents. They can generate articles, stories, code, or even poetry.

When thinking about poetry, Nikolas Prassas defines LLMs as “essentially machines for choosing combinations of words.” He summarizes Stephen Wolfram, physicist and creator of the Mathematica computing system,

These models are designed to solve one simple problem: predicting what the next word in a sentence is. They do this by breaking down sub-semantic units known as tokens. The tokens are converted into vectors, which give us an array of numbers, which in turn can be used to estimate the probabilities associated with each word as the machine moves along its sentence. Through judicious use of engineering voodoo, the system is then directed to find what Wolfram calls “the rational continuation” of the sentence. The system thus generates meaningful text by simply asking over and over what the next “expected word” is. And thus Peter Lax’s dictum is shown once more to be right: “If you can reduce a mathematical problem to a problem in linear algebra, you can most likely solve it.”[22]

So mathematical magic is behind ChatGPT. Cal Newport describes the process,

None of this jargon is needed, however, to grasp the basics of what’s happening inside systems like ChatGPT. A user types a prompt into a chat interface; this prompt is transformed into a big collection of numbers, which are then multiplied against the billions of numerical values that define the program’s constituent neural networks, creating a cascade of frenetic math directed toward the humble goal of predicting useful words to output next. The result of these efforts might very well be jaw-dropping in its nuance and accuracy, but behind the scenes its generation lacks majesty. The system’s brilliance turns out to be the result less of a ghost in the machine than of the relentless churning of endless multiplications.[23]
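A minimal sketch, using Python and a handful of invented numbers, can illustrate the arithmetic that Wolfram and Newport describe: the prompt is encoded as numbers, those numbers are multiplied through a table of weights, and the results are converted into a probability for each candidate next word. Every value below is made up for illustration; real models use billions of learned weights arranged in many layers.

import math

# The prompt, already encoded as a short list of numbers (invented values).
prompt_vector = [0.2, -1.0, 0.5]

# One tiny "layer" of weights: one row of numbers per candidate next word.
vocabulary = ["hero", "dragon", "castle", "the"]
weights = [
    [0.8, -0.3, 0.1],
    [0.1, 0.9, -0.4],
    [-0.5, 0.2, 0.7],
    [0.3, 0.3, 0.3],
]

# Multiply the prompt numbers against the weights: one score per word.
scores = [sum(w * x for w, x in zip(row, prompt_vector)) for row in weights]

# Turn the scores into probabilities that sum to one (the "softmax" step).
exps = [math.exp(s) for s in scores]
probabilities = [e / sum(exps) for e in exps]

for word, p in sorted(zip(vocabulary, probabilities), key=lambda pair: -pair[1]):
    print(f"{word:>8}: {p:.2f}")
# The highest-probability word is offered as the next word of the reply.

The “cascade of frenetic math” Newport mentions is this same multiply-and-normalize step repeated across billions of values and many layers.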

Is ChatGPT truly intelligent? No, it is based on human intelligence. Humans create and gather the data used by AI; and even more importantly, tremendous human intelligence is required for the programming of large language models. But intelligence is far more than any LLM or computer program. I have a quarrel with the label “artificial intelligence.” Machine learning? That’s half right, but learning is rooted in intelligence, so no, it is a very sophisticated computer program, but not intelligent. Intelligence is something uniquely human, especially when we remember that our intelligence is embodied. So, as I have said above, artificial general intelligence does not exist, and most developers understand that the work they are doing on generative AI is focused largely on specific applications. Transhumanists and others who believe that artificial general intelligence is achievable are in a different category. Former CEO of Google and executive chairman of its successor Alphabet, Eric Schmidt, opines, “A key marker of the shift to AGI will be AI’s ability to produce knowledge based on its own findings, not merely retrieval and recombination of human-generated information.” Schmidt goes on to hint that AGI may be possible to achieve, but he admits the limitations of AI when he says that two areas, mathematics and programming, are ripe for advancement, but “Unlike biology and other fields that require real-world experimentation, these disciplines are largely self-contained.” He concludes, “Superintelligent systems will face inherent constraints. . . . Just as human cognition is bounded by physical and biological limits, AI will remain subject to the limits of the physical world.”[24] These limits are precisely what differentiates human from artificial intelligence.

Weizenbaum was shocked by the response to his primitive AI program ELIZA because it was based on the idea of a psychiatrist who believed that therapy is simply a matter of decision making based on information processing. This reminded Weizenbaum of physical chemist and philosopher of science Michael Polanyi’s discovery that many modern scientists and educators believed in a “mechanical conception of man.”[25] That mechanical conception is the only explanation for people thinking of ELIZA as human.

Christians need to be engaged in the name game. The ways in which we describe and name things are means of understanding and control.

Yale computer scientist David Gelernter recently observed,

The biggest challenge AI faces today is to understand the human mind. The mind is capable of maneuvers that no AI system I know of has ever achieved. Future AI must learn to understand real human speech and writing (Watson made a strong start), which requires a thorough understanding of humor, irony, metaphor and many other twists on speaking and writing. Most important, AI must learn to understand emotion and emotion’s central role in human thought.

Some thinkers hold that software will never actually understand anything, because it will never be conscious. They’re right. Consciousness is a product of certain organic systems, not of electronic circuits. But software can act as if it understands—and that’s what matters in practice.

In time, AI will be able to chat with the lonely and sick around the clock. Will it replace humans who care? Of course not. But it will be better than nothing. Many of us will come to rely on AI-based digital assistants, small devices that murmur schedule notes or parking tips in our ears, patch in phone calls, read us emails and otherwise act like superhuman secretaries. Our understanding of the mind will be the basis of software models that will be widely (sometimes dangerously) used to predict a person’s future behavior.[26]

Recently I experimented with the latest free version of ChatGPT, asking the question: “In which Whit Stillman movie is Ecclesiastes mentioned?” It responded with “Metropolitan.” But I responded that it was not in that movie, as I had just rewatched it, although there was a philosophical discussion with references that might sound similar to Ecclesiastes. Then the bot responded, “You’re correct—there is no explicit reference to Ecclesiastes in Metropolitan. I misspoke . . .” I then queried about the same content’s presence in the film Barcelona. In that film Ted Boynton reads the Bible and finds Proverbs and Ecclesiastes most helpful in dealing with romantic matters. My correction may help ChatGPT answer my first question correctly in the future should anyone else make the same inquiry, but even then it is simply adding information, not truly learning, which is a distinctly human activity. All the vast information available to it (not “I”) was first humanly produced. It is also deceptive to perpetuate the idea that a chatbot is a person, using words related to human intelligence, even though its producers know it is not. The voice recognition used by my TV tells me that it is “thinking” and “listening” when I make a query. It is not; it is simply a very convenient and complex program retrieval system based on word recognition and probability theory. If Weizenbaum’s primitive ELIZA seduced people into making this mistake, so much more do highly sophisticated programs promulgate what we might call the ELIZA syndrome today.

AI does not have a mind. Cal Newport concludes that despite the apparently startling “intelligence” of ChatGPT, its well-crafted prose is not the product of a mind. His New Yorker article “What Kind of Mind Does ChatGPT Have?” makes this crystal clear. He makes an important distinction: “It doesn’t create, it imitates.”[27] Besides not being embodied beings, chatbots lack the invisible aspects of human intelligence: consciousness, cognition, and conscience, what we might call the trinity of human intelligence.

Furthermore, scientists like Michael Polanyi have pointed out the importance of imagination in scientific research, an importance shared by poetry and science alike. Michael Polanyi’s student, Nobel Prize winner Dudley Herschbach,[28] assigns his students doing frontier science to write poems. In other words, exploration is not about getting something right but about seeking what is new. As Albert Einstein famously observed,

Imagination is more important than knowledge. For while knowledge defines all we currently know and understand, imagination points to all we might yet discover and create. I rarely think in words at all. A thought comes, and I may try to express it in words afterwards.[29]

I do not believe that this mysterious human process of exploration can ever be achieved by AI because it involves the invisible trinity of human intelligence.

The Literary Limits of Artificial Intelligence

ChatGPT is capable of doggerel, a simple rearrangement of information, but is never going to be a T. S. Eliot or a Robert Frost. As literary scholar Nikolas Prassas observes, “To date, I have yet to see ChatGPT ‘generate’ any masterpieces, only doggerel of truly awesome inanity.” What poets do is the opposite of the process by which ChatGPT operates. Also, poetry “differs from other ways of using language.”[30] Philosopher and author Simone Weil’s (1909–1943) complex criteria for poetic composition require human perception and sensibilities, condition and values, and embodied lives that no AI system can ever duplicate.[31]

The essayist Joseph Epstein, who formerly taught a course in Advanced Prose Composition, reflects, “I feel fortunate in retiring from teaching when I did, in 2002, just before the current technotyranny set in.” He used to have students identify fifteen cultural figures or events, such as “the 1913 Armory Show, Reynaldo Hahn, the Spanish Civil War and Robert Graves.”[32] Few students, even before smartphones, could identify these. The electronic distraction and its tendency to move our thinking over thin surfaces were already at work. With cell phones they would now simply google the answers, but their knowledge would be superficial at best. Epstein quotes the author John Warner, whose book More than Words he was reviewing, “No, we do not and may never fully understand the mechanisms of the full range of our cognition, but this doesn’t stop us from recognizing that human thought is distinct from algorithm-produced syntax.”[33] Epstein observes,

A composition rendered by AI is certain to be grammatically and otherwise formally correct, lacking only in originality and an interesting point of view. Machines, Mr. Warner holds, cannot teach writing. “Only humans can read. Only humans can write. Don’t let anyone tell you otherwise.”[34]

Epstein concludes with an epigrammatic statement: “In learning how to write well, nothing can replace thoughtful reading, careful practice and an interesting point of view. Anything else is artificial all right, but a good deal less than intelligent.”[35] Ever the punster, Epstein happily sounds like a Luddite, “ChatGPT and other bots are ubiquitous, giving a whole new meaning to botulism.”[36]

In Part 2 we will explore the uniqueness of human intelligence; in Part 3 we will explore the benefits and liabilities of AI.

Endnotes

[1] T. S. Eliot, “Choruses from the ‘Rock’ I,” in Collected Poems 1909–1962 (New York: Harcourt, Brace & World, Inc. 1963), 147.

[2] For those who have not seen The Matrix see the Summaries at IMDb, https://www.imdb.com/title/tt0133093/plotsummary/?ref_=tt_stry_pl. Better still, watch the movie.

[3] As I was matriculating in the doctor of ministry program in 1990 at Westminster Seminary in California I took a course by Joel Nederhood, “Effective Preaching in a Media Age,” which introduced me to the discipline of media ecology pioneered by scholars like Marshall McLuhan and Neil Postman. Postman’s 1985 book Amusing Ourselves to Death: Public Discourse in the Age of Show Business popularized media ecology. It was eye-opening.

[4] Marshall McLuhan, Understanding Media: The Extensions of Man (McGraw-Hill, 1964), 18.

[5] See Oxford English Dictionary under “artificial intelligence” (revised December 2023).

[6] A. M. Turing, “Computing Machinery and Intelligence,” Mind 59 (1950): 433–60.

[7] Turing is especially known for his “Turing Test,” which tested a machine’s (computer’s) ability to simulate human intelligence, answering his question, “Can machines think?” See https://en.wikipedia.org/wiki/Turing_test.

[8] I owe this and other clarifications to my friend Gregory Tarsa, software engineering manager.

[9] Rockwell Anyoha, “A History of Artificial Intelligence,” accessed April 11, 2023. https://sitn.hms.harvard.edu/flash/2017/history-artificial-intelligence/.

[10] Joseph Weizenbaum, Computer Power and Human Reason: From Judgment to Calculation (W. H. Freeman, 1976), 7.

[11] Anyoha, “A History of Artificial Intelligence.”

[12] Anyoha, “A History of Artificial Intelligence.”

[13] Anyoha, “A History of Artificial Intelligence.”

[14] Anyoha, “A History of Artificial Intelligence.”

[15] Christopher Mims, “The Secret History of AI, and a Hint at What’s Next,” The Wall Street Journal (April 22, 2023) https://www.wsj.com/articles/the-secret-history-of-ai-and-a-hint-at-whats-next-428905de.

[16] Cal Newport, “Can an A.I. Make Plans?” The New Yorker (March 15, 2024), https://www.newyorker.com/science/annals-of-artificial-intelligence/can-an-ai-make-plans.

[17] Dean W. Ball, “The Era of Predictive AI Is Almost Over,” The New Atlantis (Summer 2024): 28. The definition of neural networks in brackets is from Arlie Coles, “Demystifying AI,” The American Mind (June 19, 2023), https://americanmind.org/features/the-exterior-darkness/demystifying-ai/.

[18] Ball, “The Era of Predictive AI Is Almost Over,” 30.

[19] Cf. Andrea Vacchiano, “Artificial Intelligence ‘Godfather’ on AI Possibly Wiping Out Humanity: ‘It’s Not Inconceivable,’” Fox News (March 25, 2023), https://www.foxnews.com/tech/artificial-intelligence-godfather-ai-possibly-wiping-humanity-not-inconceivable; Sam Schechner and Deepa Seetharaman, “Tech Leaders Are Divided on AI's Threat to Humanity,” The Wall Street Journal (Sept. 5, 2023), https://www.wsj.com/tech/ai/how-worried-should-we-be-about-ais-threat-to-humanity-even-tech-leaders-cant-agree-46c664b6; Marc Andreessen, “Why AI Will Save the World,” The Free Press (July 13, 2023), https://www.thefp.com/p/why-ai-will-save-the-world.

[20] ChatGPT, accessed February 28, 2025, asking for definitions of “artificial intelligence, artificial general intelligence, generative artificial intelligence, and artificial special intelligence.”

[21] Jonathan Barlow, “Artificial Intelligence: Towards a Christian Perspective,” By Faith (May 29, 2024), https://byfaithonline.com/artificial-intelligence-towards-a-christian-perspective/.

[22] Nikolas Prassas, “Large Language Poetry,” First Things (March 2025): 9.

[23] Cal Newport, “What Kind of Mind Does ChatGPT Have?” The New Yorker (April 23, 2023), https://www.newyorker.com/science/annals-of-artificial-intelligence/what-kind-of-mind-does-chatgpt-have

[24] Eric Schmidt, “AI Could Usher in a New Renaissance,” The Wall Street Journal (March 1, 2025): A15.

[25] Weizenbaum, Computer Power and Human Reason, 1, 6.

[26] David Gelernter, “Artificial Intelligence Wasn’t Born Yesterday,” The Wall Street Journal (Sept. 19, 2024): A15.

[27] Newport, “What Kind of Mind Does ChatGPT Have?”

[28] Dudley Robert Herschbach is an American chemist at Harvard University. He won the 1986 Nobel Prize in Chemistry jointly with Yuan T. Lee and John C. Polanyi “for their contributions concerning the dynamics of chemical elementary processes.” Wikipedia https://en.wikipedia.org/wiki/Dudley_R._Herschbach.

[29] Walter Isaacson, Einstein: His Life and Universe (New York: Simon and Schuster, 2007), 7, 9.

[30] Prassas, “Large Language Poetry,” 9.

[31] Prassas, “Large Language Poetry,” 10.

[32] Joseph Epstein, “Teaching Writing in an AI Age,” The Wall Street Journal (Feb. 22, 2025): C7.

[33] Epstein, “Teaching Writing in an AI Age,” C7.

[34] Epstein, “Teaching Writing in an AI Age,” C9.

[35] Epstein, “Teaching Writing in an AI Age,” C9.

[36] Epstein, “Teaching Writing in an AI Age,” C7.

Gregory E. Reynolds is pastor emeritus of Amoskeag Presbyterian Church (OPC) in Manchester, New Hampshire, and is the editor of Ordained Servant. Ordained Servant Online, March 2025.
