Is AGI a Faith-Based Belief?
Join me for an interview with John C. Havens as we examine whether belief in AGI functions as a faith-based belief, its ethical implications, and how it shapes narratives around technological progress.
John C. Havens, author of Heartificial Intelligence.
I’ve had the privilege of knowing John C. Havens for slightly over a year. Initially, we connected over a thought-provoking post he’d authored detailing how only half our humanity is catered for in a lot of “progressive” AI spaces. That kind of nuanced, philosophical, and openly heart-centered rhetoric was unlike anything I’d seen in the AI ethics space. He introduced me to his work on loneliness and human connection through a live workshop, Togetherland, where I spoke on navigating loneliness as an international student moving from a communal culture to the West’s more individualistic one.
John is the author of Heartificial Intelligence, a cutting-edge, future-facing book on AI ethics, technological progress, and the human experience, and his writing consistently demonstrates thought leadership and strategic foresight in the field. In his article, AGI is a Faith-Based Belief, John posits that if there is no universally agreed-upon definition of what Artificial General Intelligence (AGI) is, then arguably any person or organization’s unique definition is a faith-based belief. The article’s boldness, strength, and clarity of thought stood out to me. Traditionally, matters of faith and science are deemed to sit at opposite ends of the spectrum, with few daring to connect, conflate, or even relate the two.
Heartificial Intelligence, cover image obtained from Amazon
Below is our conversation, centered on his LinkedIn article:
T: You argue that AGI is a faith-based belief because it is treated as an inevitable reality despite not existing yet. Could you expand on how you define 'faith-based belief' in this context and how your background training to be a Minister in the Christian tradition influences this?
J: I’m not actually a minister. I did go to college thinking I’d be one, though. I’m a big fan of history and majored in theatre and history.
You asked what I mean by a faith-based belief. There are formal religious traditions associated with “faith,” where there’s some sort of cultural institution, like a church, a temple, or a mosque, linked to the belief system. But I believe someone who could be called an atheist or agnostic could still have faith: it means they choose to believe in something based on their life’s research.
Calling AGI a faith-based belief isn’t a negative thing. When people want to believe in AGI, however, they should define it. Depending on how they state those beliefs, it’s the same as me saying I believe in Jesus or someone saying they’re agnostic. The concern, in the article and in general, is when anyone proselytizes, which essentially means saying you have to believe what I believe.
T: Thanks for the clarification, that it’s not negative but rather that the qualm comes from proselytization. In religion, proselytization is often framed as a way to “shield people from harm.” An apt example is colonialism and the missionaries in settler colonies. There was an emphasis on preaching “the good news” to the native people so that they could be saved and not be subject to eternal damnation. The way I understand proselytization, there’s an innate desire to keep people away from detriment. In your opinion, do you see a parallel in how AGI is being promoted as an inevitable development?
J: There are a lot of parallels. I prefer to use the word “system” when discussing AI because, just as a car cannot run without energy, whether electricity or gasoline, AI systems also require something to power them: data and human input. Algorithms and AI systems are driven by human data that we often offer up in ways we do not understand. Through surveillance capitalism, it’s taken from us; we don’t feel like we have genuine consent. In that sense, it’s data colonialism. I first learned that term from a Māori friend who said it is data colonialism for Silicon Valley’s binary-based, Western rationalist code to be used for whatever systems.
There is generally a complex relationship between colonialism, surveillance, and the Māori people. Settler colonialists established surveillance systems to perpetuate and exert control over racialized social divisions. Extrapolating this to today’s world, these racialized surveillance practices persist in modern data collection methods, often extracting data without consent. My friend explained that in his tradition, when Māori people enter their sacred spaces such as the marae, they follow a custom of not recording those moments. The logic behind this is that some knowledge and experiences are meant to remain within those spaces, rather than being captured or stored. This perspective resonated with me because it highlights that just because someone isn’t explicitly trying to colonize or impose their beliefs, it doesn’t mean they aren't engaging in an extractive practice.
As a young person of faith in high school, someone who believed in Jesus, I saw faith as something you shared openly, had a good, fervent conversation about, and then allowed someone to make their own choice. If I share what I believe and someone else disagrees, our relationship can still grow through mutual understanding. It’s like dating: people make their own choices; you can’t force them into a relationship.
When it comes to AGI and data, true consent is often absent. This is a form of data colonialism, though I want to be mindful of using that term since "colonization" itself is tied to a white, Western historical framework of slavery and oppression. However, in digital spaces, taking control of someone’s data means taking control of their identity, making it a similar form of domination. Maybe we need different terms to distinguish these issues without unintentionally conflating them in a way that could be offensive.
T: I do agree with you, deeply, even with the word “colonization.” I think it’s important to call things out as they are. Maybe we could term it a form of neocolonialism: rather than colonization, which occurs within geographical bounds, neocolonialism is about power imbalances and the powers that be. Something you spoke about in the article that I’d love to delve into is the aspect of free will. You mention that the current narrative around AGI is causing harm. Could you elaborate on what specific harms you see arising from this narrative?
J: There are many harms. For one, there’s no formal, universally agreed-upon definition of AGI. When there’s no definition of something, that’s harmful in and of itself. So when people say, “AGI is gonna be here in two years,” what does that mean?
I’ve respected Ray Kurzweil for many years, mainly for the organ he made for Stevie Wonder. The organ was incredible, and anyone who can impress Stevie Wonder, you gotta give them credit. But how does someone of his notoriety adjust a narrative multiple times? He is incredibly smart and a lot of his predictions have come true, but he has changed his AGI predictions several times. The point is not to discredit him but rather to highlight the ambiguity.
Here’s another part of the harm: most of the time, AGI narratives are couched in war rhetoric, where it’s literally “we have to beat China” and “beat this.” On top of that, AGI is almost always defined as a point where humans will be “beaten” in a certain way, either in cognition or in tasks. And I keep thinking, why would anybody be excited about any form of technology that could potentially cause us as humans so much harm? It denies the relational side of us.
There’s this beautiful paper, I think I mentioned it earlier and we talk about it all the time: a friend of mine named Sabelo, from South Africa, wrote a beautiful article in 2020 called “From Rationality to Relationality.” It’s a paper on AI governance and colonialism grounded in Ubuntu ethics, a virtue ethics coming from Africa. If we are rational as humans, cool, but we also have relationality. When someone connects with someone else, it’s not just connection. Some may call it spirit, some may call it vibe; where that is denied in any way, we deny a large part of our humanity.
Also, I think the media narrative, in one sense, is a first step towards denying free will. Oftentimes, what’s not talked about is that most media systems are designed from an advertising standpoint. We are kept from a full sense of our data so that we will be inspired, or manipulated in the absence of genuine consent, to do things: usually give our data away or get nudged into buying stuff. Defining everything within the culture of consumerism can be boring to talk about, but it’s absolutely true.
I’ve worked in different settings, and if you can get a human to click on a certain button, then what does free will mean?
A screenshot of John and me during our conversation
T: That’s a very philosophical question. Do we have free will in this media-driven world? Something else you mentioned is the Ubuntu paper, which segues into our next question: In your article, you mention that AI and AGI are often seen as markers of “societal progress,” especially when they surpass human attributes or abilities. How do you see this definition of progress shaping our values and priorities as a society? Do you believe this focus on surpassing human abilities is the right measure of innovation and progress?
J: AGI as a marker of societal progress, for me, is absolutely illusory, and, I guess, malevolent, if that’s not too strong a word. AI systems, especially LLMs, have been harming the planet, with data centers using more energy than other manifestations of technology have. I’ve been studying a lot about water. There’s a respected organization called the Uptime Institute that does surveys and is one of the main associations for data centers. In its 2022 survey of roughly 800 data center operators, only 39% said they measure their water usage. That’s only 800 of them, and I’m unsure how many data centers there are in the world. But think about what that means: it wasn’t that they measured it and just didn’t report the numbers to Uptime; less than half of them measure water at all. And most data centers use water-based air conditioning.
I say I might be missing something, but anytime you read anybody saying anything about LLMs and AGI, and how in a few years we’ll be okay and we’ll use AI to save the planet, that’s not just illusory or harmful; it’s premeditated, knowing, absolutely mendacious, a horrible lack of anything positive. When companies expand into countries like Chile, one of the major locations for new data centers, they often build in drought-prone areas. Should we hold them accountable? My answer is yes!
How can we call it societal progress when water isn’t accessible to everyone, Indigenous land is taken, and animals are deprived of resources, except where there’s profit, like cattle farming in the Amazon? On top of that, local communities often lack information about their own water supply. What do we mean by societal progress?
To turn it positive, since you asked about other metrics: there’s a term called flourishing, which isn’t short-term happiness per se. Short-term happiness is hedonic happiness, which is what most advertising works on; advertising works on “I want that.” Hedonic happiness is part of who we are: we all struggle with eating too much, buying too much. But flourishing is also the recognition that sometimes life happens, a loved one passes away and it’s better if you can grieve, floods happen, whatever it is.
With flourishing, there are metrics like the SDGs, some ESGs, and mental health. For me, I believe that when you build any form of AI system, accountable flourishing metrics for people and the planet should be embedded at the beginning of the design process, while you are still building, not when the systems are released. The question to ask is: are we improving those two things, human and planetary flourishing? If the answer is “no,” then keep testing. If the answer is “we’re harming in a way that points to those metrics,” then we stop. Once those two things are considered, we will be going in the right direction and we have a potentially profitable business. Sustainability is not “I’m green, I’m part of this political party.” Your grandkids, most of whom will suffer these climate issues, won’t get to continue in any positive state, let alone flourish, if there isn’t a societal progress metric that says, “Wait a second, these tools are very exciting and cool, but they anthropomorphize by design, they are bad architecture, they hallucinate, they aren’t improving legal or medical work, and they keep making mistakes. They are keeping people from creativity.”
I complained to my wife when I was watching the Olympics. I saw an ad where a guy was saying how his daughter loved an athlete, and he told her to write the athlete a letter using Google Gemini. That’s like saying, “And then my daughter wanted to be an athlete, so I told her to use performance-enhancing drugs and taught her to replace her legs with bionics. I told her she didn’t have to train at all if she could figure out how to make herself faster using the right drugs to be able to run.”
If that’s deemed progress, I don’t want that progress.
If societal progress really boils down to what it normally does, which is speed, productivity, and justifying all those things, then we’ll never be able to take a step back, take a deep breath, and go, “Hold on, can you test these tools?”
California has this big bill right now where they say the testing of these tools is gonna be hard. My question is, why does something being hard mean you have to feel threatened?
There’s an economist named Esther Duflo who has said that the 3,000 richest people should be taxed on their wealth, not their income. If Duflo’s numbers hold, taxing those 3,000 people just 2% of their wealth would make 250 billion dollars available annually to go to Caribbean countries like Barbados and others living in a constant state of climate emergency.
From my understanding, if you’re that rich, 2% is gonna be a huge hit in one sense, but you have your stock market person put the money in the market and you’ll make it back in six months. For them, it’s become somewhat meaningless. The 2% could mean the rest of the world has a better chance at biodiversity, sovereign data for humans, just existence. And when one studies systems, climate migration and other things, even if you hate people, hate the planet, and don’t want your money to go away, a good money adviser would tell you to pay the 2% so that things remain exactly as they are.
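[A quick back-of-the-envelope check of those figures, assuming the $250 billion estimate refers to a 2% annual levy on the same 3,000 people: raising $250 billion at 2% implies a combined taxable wealth of $250 billion ÷ 0.02 = $12.5 trillion, or roughly $4 billion per person on average, which is broadly consistent with a list of the world’s richest individuals.]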
T: What a nuanced take. At such a critical time as this, it’s a moral imperative to look at the world through a systems-thinking lens. Everything is interconnected: the acceleration towards superhuman technology and the rising food prices driven by arid weather conditions. Thank you so much for your time, John. Always a pleasure speaking with you.
J: Thanks Tunu. This was a wonderful conversation.