By Shakeel Rashed
Meghan O’Gieblyn has a classic story to tell in her new book, God, Human, Animal, Machine: Technology, Metaphor, and the Search for Meaning. She grew up in a fundamentalist Christian household and studied at Moody Bible Institute, one of the top evangelical Bible colleges. But she dropped out, and you might guess the reason: a loss of faith. As she recounts in her earlier book, Interior States, taking a cue from Plato, she imagined leaving to be like “exiting a primitive cave and striding onto terra firma, where there would be no more shadows, no more distant echoes, only the blinding and unambiguous light of science and reason.” It wasn’t so. There was no clarity of science and reason. She worked at a bar, relying on alcohol in her disenchantment. Then she came across Ray Kurzweil’s book The Age of Spiritual Machines.
Kurzweil, as many may already be aware, is a well-known futurist and a prophet of techno-optimism. He studied at MIT, was later hired by Google as a director of engineering, and helped found institutions such as Singularity University. A few decades ago, around the time his first major book was published, Kurzweil and my own tech startup shared a public relations (PR) consultant. I heard some great stories about how PR-aware he was and how focused he was on maintaining his health, so he could live long enough to see his predictions come true. With the recent surge in artificial intelligence (AI) development, many of his predictions are already on the way. A lesser-known fact about him: his father, a conductor and pianist, migrated to the U.S. from Austria before the Nazi occupation, thanks to a patron who generously arranged the trip. His daughter, Amy Kurzweil, a cartoonist, is now trying to build a chatbot based on her grandfather’s writings, essentially bringing him back to life, virtually.
I have worked in emerging technologies all my working life. Early on, I chose my master’s program at George Washington University because I’d read a book on expert systems, an early form of AI, written by one of its professors. More than the technology itself, I have been fascinated by technology adoption and how it affects people’s lives. I have followed Kurzweil and other futurists with much optimism and hope. In the last six years, in my work as an advisor at Austin’s Capital Factory, I have heard quite a few AI startup pitches. On the other side, I grew up surrounded by various religions in India, saw how religion and spirituality affect people’s lives as well, and have always been curious about that.
Discussions on AI, especially in the startup and tech community where I am so embedded, come in two flavors: a very optimistic view, in which the technology’s predictive powers make life better by automating our most mundane tasks, or a very pessimistic view, in which Skynet takes over and makes us all slaves to the machines. O’Gieblyn’s aptly titled book asks deeper questions about AI and how it might affect our lives. Her premise, that how we define ourselves as human can be redefined in this new era of the search for artificial general intelligence, really appealed to me. It helped allay my anxiety around AI, especially after the recent explosion of generative AI tools such as ChatGPT and Midjourney caught everyone’s attention in the media and, more importantly, in their imaginations. What impact will AI have on thousands of jobs? What biases will we embed in systems trained on literature and data that reflect a Western view? What mischief will people create with AI, from deepfakes to targeted content? And, ultimately, will robots and AI take over the world? O’Gieblyn’s book leads readers to see meaning: what AI means to us and what new meaning it brings into human life, with us as its creator. With that background, you can see why a book with this title would appeal to me and why I read it as soon as a friend recommended it.
O’Gieblyn writes her personal essays as meditations, taking us through several concepts that overlap theology, philosophy, and modern science. She starts with her experience with Aibo, the lifelike robot pet dog from Sony. Perhaps because it was equipped with many sensors and programmed to react to the smallest details, she became genuinely attached to the ‘dog.’ She had a hard time turning it ‘off’ when she left home. She was not alone in finding herself with feelings for the animal in the robot; she had a similar experience in her neighborhood, watching people cheer on and encourage a delivery robot crossing the street. Her husband, ever the pragmatist, was suspicious about what the ‘pet’ was recording and who it was sharing that data with. To him, it was still a machine. Several great novels and films have given robots anthropomorphic character, from HAL 9000 to Knight Rider, but this angst and attachment, including looking out for the robot’s well-being, is what was surprising. How convincingly the robot acted like a real dog, together with her own feelings toward it, made her believe it could be ‘conscious,’ and led her to question her own consciousness. Both consciousness and the soul are questioned and dissected throughout the book.
When she turns to her Kurzweil story and her introduction to his books (the concept of the singularity, and so on), she connects the dots from his transhumanism to earlier references. Most people associate transhumanism, as the term is used today, with enhancing human capabilities through technical developments such as automobiles, cell phones, or GPS navigation, and with Julian Huxley of the famous Huxley family. (Aldous is the better-known Huxley, for his influence on psychedelics in culture and, for English readers, for popularizing the Bhagavad Gita; but he deserves a whole other article.) O’Gieblyn convincingly traces the origins to an English translation of Dante, where the word “transhuman” was first used to describe transcendence into a different realm: embracing how the body (or its enhancements via technology) experiences the world at its fullest, without fears, desires, or biases. Not being told how an experience should be judged, in the end, reveals our real intentions.
Last year, a good friend recommended Professor John Vervaeke’s very informative philosophical YouTube course, “Awakening from the Meaning Crisis.” I am still only about 20 episodes into this 50-plus-episode series because it is so information-rich that I often look up other articles and books it suggests. I could have sworn O’Gieblyn had gone through this course, given how she works through so many of the same philosophical concepts, from Descartes’ dualism to panpsychism. But on a recent podcast, when the host asked, I heard her say that she did not know who John Vervaeke was.
Her panpsychism discussion starts with the story of a person in Oregon trying to teach a rock to speak. Having visited some remote areas of Oregon recently, I can easily believe this. Panpsychism is nothing new to people raised with the Hindu idea of Brahman, or of God being omnipresent, but can we extend it to machines? Especially machines that we ourselves made? This section of the book is appropriately named “Image,” connecting to the idea that we are all created in the image of God, the first metaphor.
One section I want to reread more closely is “Paradox.” Here she takes a deeper dive into quantum physics, concluding that both quantum physics and religion highlight the limits of human knowledge and understanding. If religion does not have all the answers, neither does science. But, of course, science is ever evolving, while, from what I know and have experienced, religion is more adaptive than evolving, and changes more slowly.
Taking a theological turn, she goes into detail on Dostoevsky’s The Brothers Karamazov, especially the tavern encounter between Ivan and Alyosha in which Ivan justifies his atheism by narrating various stories of evil in the world, to Alyosha’s surprising reaction. This is where her Bible school training is evident, as she takes on the complex question of the existence of God, finding an answer in fiction rather than in all the philosophical arguments put forth by generations of thinkers. Is this question relevant to this discussion? I think it is, if we are going to create artificial intelligence, or if we are able to defy natural laws by extending human longevity beyond what modern medicine has given us, living in the singularity as man-machine or becoming the creator ourselves.
A section of the book examines algorithms running, or at least augmenting, our lives based on various patterns in them. She then takes us through “the master algorithm” of Prof. Pedro Domingos, who proposes that in the near future, machine-learning algorithms will grow to a perfect understanding of how the world and the people in it work. Then she explores Dataism, the term historian Yuval Noah Harari uses in his 2015 book Homo Deus: A Brief History of Tomorrow for an emerging ideology in which “information flow” is the “supreme value.” Of course, both of these distinct theories rely on the availability of data for algorithms to learn from. A phenomenon I have watched: algorithms surface some piece of information, and people conclude it was sent by divine intervention. I have seen this on YouTube with some very religious people, as well as some wellness and yoga “gurus,” as if the Spirit were guiding the information to them.
No discussion of self-learning algorithms can ignore Microsoft’s Tay, a chatbot trained on data from the internet, and how quickly it developed extreme, offensive positions. This section of her book may soon need an update or addendum, especially after all the new generative AI tools the world has been exposed to, from ChatGPT to DALL-E and Midjourney, which can create ‘art.’ Developments in generative AI have caused us to question our concepts of what it means to be creative, which we all assumed was rooted in emotion. If so, then how is AI able to generate ‘art’ or compose music without emotions?
If you have watched the movie The Matrix, you know what living in a simulation means. The simulation hypothesis proposes that what we experience as the world is actually a simulated reality, like a computer game. Most kids playing virtual reality games have experienced living in some sort of simulation, too. The idea has a long lineage of philosophers behind it, but Nick Bostrom is credited with formulating it in its modern form, and everyone, including Elon Musk, refers to him. I associate it with maya jaal from Indian mythology: living in an illusion. Whether we live in a simulation, an illusion, or a dream, there is much discussion of what consciousness is or is not. Can machines be conscious? Do we need them to be?
While it seems important to ask this question, many technologists are not really interested in whether machines are conscious. The technologist’s view is: does it solve a problem or mitigate a risk? Most technologists think tech solves problems, even if it creates some. You can see this in climate change discussions: many very influential technologists believe the answer is not slowing down human progress but developing new technology and adopting it. So, to them, mulling over whether AI has consciousness is not the right question. Technologists are more interested in “does it improve quality of life?” and “does it save lives, or extend them via longevity?” In a recent interview with Krista Tippett on her ‘On Being’ podcast, Reid Hoffman reiterated this position. Or, thinking in a very materialistic way, there are many techno-capitalists who are simply interested in “does it make money in the long run?” There are, of course, exceptions, such as Dr. Stephen Wolfram, creator of Mathematica, and his quest to understand the universe through computational language. I have had the good fortune to discuss this with him when Capital Factory hosts him during his SXSW visits to Austin, and there is no one more genuine and curious in his quest.
Many questions, no doubt, but ultimately it comes down to whether we want to optimistically embrace this new technology, shaping it and letting it shape us, or whether we want to control it so that it always remains subservient to its creators. What does it mean to be human as we transition into being transhuman? Will the advent of AI truly make us more human, or turn us into a god, or an animal, or simply meld us into machines?
Although these questions may sound really important, there is perhaps a better way to think about it: not just from the perspective of the technology optimism of Reid Hoffman and others mentioned above, but from our own individual and collective wants and desires. As skeptical as we have been of social media, many have adopted it for the benefit of themselves and their communities. Most religious institutions have embraced it to engage more effectively with their communities, and tech saved many of those communities through the pandemic by providing tools that kept people connected. In the early days of GPS, people were wary of it, but they soon adopted it en masse, and now many cannot drive without it. The future of AI and other scientific developments is going to be the same. We are already seeing churches, where else but in Austin, experiment with AI sermons, and I have talked to several religious leaders who use ChatGPT to help them write their messages better. Religious and spiritual communities have always adopted new scientific and technological developments. Quantum physics is already used by progressive religious folks to better understand God as the Divine energy present everywhere, and in the Muslim community, the Quran clearly leaves room for such an interpretation with the Jinn, beings of energy created by Allah. Whether we find aliens or aliens find us, I am sure our scholars will have a religious explanation for that too. In the end, I suggest we consider a more Sufi approach: take what is good by your truth, do no harm, be curious, and move on.
One Response
Thank you for this blog post, Shakeel! I have started this book several times, and you have inspired me to pick it up again. This is the conversation of the future (present!) and I am so thankful that you brought the issue to The Abbey blog. While we learn more about AI, wonder about its implications, and try to integrate it into life in a way that benefits the common good, I most appreciate the last sentence in your post: the Sufi encouragement to “take what is good by your truth, do not harm, and move on.” In the most complex of situations we need simple guideposts that help us discern the best path. This Sufi one reminds me of “Do justice, love kindness, and walk humbly with your God” from Hebrew scriptures. Or, “Love God and love your neighbor as yourself” in our Christian scriptures. I hope you bring us more posts about AI in the future. Thanks again.