At Spokeo, we believe in the power of inquiry — not just into public records, but into the deeper questions that shape our world. That’s why we’re proud to support students who challenge boundaries and think critically across disciplines. This year’s Spokeo Scholarship winner, Michael Salvatore Politz, exemplifies that spirit.
Hailing from Baton Rouge, Louisiana, Michael has charted an unconventional academic path — from biological engineering to religious studies and now to a PhD in philosophy at the University of Saint Thomas in Houston. He describes his calling as building a bridge between STEM and the humanities — two worlds often seen as separate but which, in his view, are essential to one another.
Michael’s current work explores one of today’s most pressing and complex issues: artificial intelligence. Drawing on his diverse background, he brings both technical understanding and philosophical depth to the conversation. His winning essay, shared below, critiques the dominant assumptions of material reductionism in AI research and argues for a more holistic understanding of intelligence—one that accounts for both the material and experiential dimensions of human thought.
The Limits of Reductive Materialism in AI Development
By Michael Salvatore Politz, University of Saint Thomas—Houston
Though “Artificial Intelligence” has become an umbrella term for everything from self-driving cars to large language models, all instances of AI are united by this artificial aspect: AI is an artificial attempt by humans to replicate aspects of our minds. This pursuit to replicate artificially what occurs naturally in the human person is nothing new in history. We have long sought to externalize our internal processes. From the abacus, which gave physical form to the counting process, to the systematic representation of signs in written language, we continually seek to produce tools that replicate the human mind in order to make our lives easier. What makes the current pursuit so novel within the field of AI is its methodology. It is also within this methodology that the problems of achieving AGI and the technological singularity lie, and why we are still a long way from reaching these milestones.
Indicative of the modern field of AI is an underlying reductive materialism. On this view, the human mind is a group of higher-order mental processes that emerge given sufficient biological (and specifically, neurological) complexity. The assumption follows that once we can synthesize artificial architecture of sufficient complexity, we will see the same higher-order processes found in the human mind emerge there as well. But do the current methods within the field of AI allow for the accomplishment of this lofty goal? I argue that they do not, and cannot without major revisions to the underlying thought paradigm that currently rules the field.
The fullness of the human mind and its capabilities cannot be realized under the governing reductive materialist model. Thus, any attempt to replicate the mind under this paradigm will lack certain crucial aspects. Whether this feat is possible ontologically is answered immediately in the negative: from the term “artificial intelligence” itself, we can see that AI is an attempt at a synthetic version of something naturally occurring. As such, AGI and the human mind would forever be two very different things—one artificial and one natural. What remains to be seen is whether we could create, with appropriate artificial architecture, a program of sufficient complexity to mimic the human mind.
Our ability to create something equal or comparable to the human mind through programming does not require that the human mind be directly reducible to complex programming, but it does require that a program of equal or comparable capabilities be creatable. Given that ChatGPT, Google Gemini, and the like have already fooled some professors into accepting their output as “original essays from an honest student,” we seem to have already created AI that can functionally mimic a university student. Just how far this functional mimicry can go remains to be seen, but I would like to point out two major problems in creating AI that can functionally mimic the human mind.
The first of these problems is the irreducibility of the human mind to its material correlates alone. The prevailing consensus is that once AI reaches sufficient complexity, it will achieve AGI, and we will be well on our way to the technological singularity. This rests on an emergent view of consciousness within the human mind: as humans, we have achieved sufficient neurological complexity and thereby gained a new, higher-order means of understanding the world in generalized or abstract terms. However, the abstract thought of the human mind is due not to internal complexity but to external dynamism, our engagement with the world around us. It is through phenomenological means, not material reductionism, that we find the answer to the uniqueness of the human mind.
Take, for instance, the problem of qualia that currently plagues the philosophy of mind. How is it that the entirety of a subjective state is reducible to material correlates? How do you explain the sensation of walking on the beach on a sunny day entirely by bioelectrical activity within the brain? It seems you cannot. Additionally, how are the abstract thought processes that are a hallmark of the human mind explainable in material terms when they seem utterly impossible within strict material bounds? For example, when we hold the concept of “dog” in our minds, we hold a specific image (say, of our own pet). Yet the essence of “dog,” which stands simultaneously for the specific dog we are picturing and for the knowable essence of the entire species, is held in our minds without our having to physically stuff either that one dog or every dog in existence into our skulls. All this is to say that we are still unraveling the mysteries of the human mind, but it seems increasingly clear that its operations are not entirely reducible to their material correlates.
Moving on to the second issue: the human mind and its powers emerge within a human person over the course of a lifetime. Unlike a bird’s wings or a spider’s silk, which can be isolated from the organism and synthetically mimicked (as airplane wings and synthetic silk demonstrate), mental powers cannot be separated from the person within whom they develop. Most of the fundamental mental operations that AGI seeks to mimic develop within a human being over a lifetime and through a complex series of experiences. For instance, something like “self-awareness” (long heralded as a marker of AGI’s achievement) is not something that simply erupts within the human mind once the prefrontal cortex has developed enough. Rather, it comes about through interacting with others; it is through engagement with the world and with others that we come to know ourselves. Similarly, the ability to conceptualize and articulate complex theories (like that of “artificial intelligence” in the first place) is not innate to every human mind; it must be learned through years of strenuous education. Common to all of these is the innately human way of knowing: learning. We are fundamentally curious creatures who want to know the world around us and our place within it, and we accomplish this by growing our mental abilities and knowledge through a life lived within the intersubjective medium of the world around us. We relentlessly seek novelty and insight through engagement with that world. This way of learning is fundamentally transcendental, as we must transcend our current intellectual bounds to learn something new. Whether complexity in programming could ever mimic this transcendental feature of human knowing remains to be seen, and it is doubtful.
Overall, attempts to create something like AGI and achieve the technological singularity using the materially reductive methodology prevalent in the field will run into both the ontological and phenomenological problems outlined in this essay. The human way of thinking, which exercises our freedom to choose and to learn, is fundamentally non-deterministic, yet we try to replicate it in AI within deterministic bounds. The closest such methods can come to the functional mimicry of the human mind sought in AI is ever greater complexity in programming. For AGI to properly mimic the human way of knowing, the field must find some way to transcend the underlying thought paradigm of material reductionism that currently defines it. Otherwise, milestones like AGI and the technological singularity will remain a distant horizon.