Read an Excerpt from “An Artificial History of Natural Intelligence” by David W. Bates

UChicagoPress
8 min read · Apr 30, 2024

In An Artificial History of Natural Intelligence, David W. Bates offers a new history of human intelligence that argues that humans know themselves by knowing their machines. In this excerpt from the book, David describes a philosophical malaise in our approach to AI and argues that we have reached a point where we must fundamentally rethink what it means to be human.

The historical evolution and development of artificial intelligence (AI) have long been tied to the consolidation of cognitive science and the neurosciences. There has been, from the start of the digital age, a complex and mutually constitutive mirroring of the brain, the mind, and the computer. If to be a thinking person in the contemporary moment is to be a “brain,” it is also true that the brain is, in the dominant paradigms of current neuroscientific practice, essentially a computer, a processor of information. Or, just as easily, the computer itself can become a brain, as the development of neuromorphic chip designs and the emergence of “cognitive computing” align with the deep learning era of AI, where neural networks interpret and predict the world on the basis of vast quantities of data. These disciplines and technologies align as well with a dominant strand of evolutionary theory that explains the emergence of human intelligence as the production of various neural functions and apparatuses.

In a way, this is a strange moment, when two powerful philosophies of the human coexist despite their radical divergence. For in the world of social science theory, science and technology studies, and the critical humanities, the dominant framework of analysis has emphasized the historicity and cultural plurality of the “human,” and has, over the past few decades, moved more and more to a consensus that humans are just one part of distributed, historically structured networks and systems that subject individuals to various forms of control and development. We are, that is, functions in systems (political, economic, social, moral, environmental, etc.) that seem so familiar and almost natural but can be relentlessly critiqued and historicized.

On the other hand, we have a conceptual and disciplinary line that has increasingly understood human beings as essentially driven by unconscious and automatic neural processes that can be modeled in terms of information processing of various kinds, with the brain as the most complex network mediating these processes. The result, to borrow the title of a cognitive science paper, is a new condition, namely, “the unbearable automaticity of being.” For the cognitive scientist, the human will is demonstrably an illusion, appearing milliseconds after the brain, under controlled experimental conditions, has already decided. Consciousness, while still a philosophical problem, is understood as just another evolutionary function, linked now to attention mechanisms that can prompt responses from the unconscious space of operations. Whether we are thinking “fast” or “slow,” to use Daniel Kahneman’s terms, the system of human cognition as a whole is encompassed by the brain as the automatic (and autonomous) technology of thinking. What else could thought be in the contemporary scientific moment? As one psychologist observed a while ago, “Any scientific theory of the mind has to treat it as an automaton.” If the mind “works” at all, it has to work on known principles, which means, essentially, the principles of a materially embodied process of neural processing. Steven Pinker, whom humanists love to hate (often for good reason), has put it bluntly: “Beliefs are a kind of information, thinking a kind of computation, and emotions, motives, and desires are a kind of feedback mechanism.” However crude the formulation, the overarching principle at work here is important. Cognitive science and neuroscience, along with myriad AI and robotic models related to these disciplines, cannot introduce what might be called a spiritual or transcendental element into their conceptualizations. Even consciousness, however troubling it may be, can be effectively displaced, marked as something that will eventually be understood as a result of physiological organization but that in the meantime can be studied like any other aspect of the mind. As the philosopher Andy Clark claims, a key contemporary philosophical issue is automaticity: “The zombie challenge is based on an amazing wealth of findings in recent cognitive science that demonstrate the surprising ways in which our everyday behavior is controlled by automatic processes that unfold in the complete absence of consciousness.”

Much as we may not want to admit it, Yuval Harari, of Sapiens fame, is probably right about the current moment, in at least one crucial way. As he says, “we” (cognitive scientists, that is) have now “hacked” humans, have found out why they behave the way they do, and have replicated (and in the process vastly improved) these cognitive behaviors in various artificial technologies.

In the last few decades research in areas such as neuroscience and behavioural economics allowed scientists to hack humans, and in particular to gain a much better understanding of how humans make decisions. It turned out that our choices of everything from food to mates result not from some mysterious free will, but rather from billions of neurons calculating probabilities within a split second. Vaunted “human intuition” is in reality “pattern recognition.”

While we (rightly) rail against the substitution of algorithms for human decision making in judicial, financial, or other contexts, according to the new sciences of decision there is nothing more going on in the human brain; and, to be fair, humans were hardly free of bias before the age of AI. As we know, the development of algorithmic sentencing, for example, was motivated by the desire to avoid the subjectivity and variability of human judgments.

In any case, we have to recognize that Harari is channeling the mainstream of science and technology on the question of the human: since we are, so to speak, “no more than biochemical algorithms, there is no reason why computers cannot decipher these algorithms, and do so far better than any Homo sapiens.” Hence the appearance of recent books with such horrifying titles as Algorithms to Live By: The Computer Science of Human Decisions, which helpfully introduces readers to concepts from computing that can improve their day-to-day lives, and Noise: A Flaw in Human Judgment, which advises us humans to imitate the process of clear, algorithmic objectivity. But my main point is that Harari reveals the contemporary crisis very clearly: it is a crisis of decision. “Computer algorithms,” unlike human neural ones, “have not been shaped by natural selection, and they have neither emotions nor gut instincts. Hence in moments of crisis they could follow ethical guidelines much better than humans, provided we find a way to code ethics in precise numbers and statistics.” Computers will make better and more consistent decisions because their decisions are not decisions in crisis but applications of the rule to the situation, objectively considered.

The backlash against this vision of AI, however well intentioned, has often been driven by just the kind of platitudes about the “human” that humanists and social science scholars have been dismantling for decades (if not centuries). New centers for moral or ethical or human-compatible AI and robotics assert a natural “human” meaning or capacity that the technology must serve, usually couched in the new language of “inclusion,” “equity,” and “fairness,” as if those concepts had not emerged in historically specific ways or had not been contested in deadly conflicts (in civil wars, for example). As the home page for Stanford’s Institute for Human-Centered Artificial Intelligence proclaims, “Artificial Intelligence has the potential to help us realize our shared dream of a better future for all of humanity.” As we might respond: so did communism and Western neoliberal democratic capitalism.

But what do the critical scholars have to offer? At the moment, it seems that there is a loose collaboration that is hardly viable for the long term. One can critique technical systems and their political and ideological currents pretty effectively, and in recent years much brilliant work on media and technology has defamiliarized the image of “tech” and its easy “solutionism” with research on the labor, material infrastructures, environmental effects, and political undercurrents of our digital age.

And yet: What can we say in any substantial or positive sense about what can oppose the “new human” of our automatic age? What will ground a new organization or animate a new decision on the future? “Inclusion,” for example, is not a political term; or maybe, more accurately, it is only a political, that is, polemical, term. The challenge, obviously, is that the consensus among critical thinkers of the academy is that there is no “one true” human, or one way of organizing a society, a polity, or a global configuration. However, lurking in much contemporary critique is a kind of latent trust in an “automatic” harmony that will emerge once critique has ended, a version of Saint-Just’s legitimation of terror in the French Revolution.

We are facing, then, a crisis of decision that must paradoxically be decided, but the ground of decision has been dismantled; every decision is just an expression of the system that produces it, whether that is a brain system, a computer network, or a Foucauldian disciplinary matrix. Is it even possible to imagine an actor-network system “deciding” anything? When we have undercut the privilege of the human, where is the point of beginning for a new command of technology, one that isn’t just a vacuous affirmation of “multiplicity” or diversity against the Singularity? Or a defense of human “values” against technical determination?

I want to suggest that the current crisis demands a rethinking of the human in this context, the evolution of two philosophies that seek to dissolve the priority of decision itself. This cannot be a regressive move, to recuperate human freedom or institutions that cultivate that freedom. We must, I think, pay attention to the singular nature of automaticity as it now appears in the present era, across the two philosophies. The goal of this project has been to rethink automaticity, to recuperate what we can call autonomy from within the historical and philosophical and scientific establishment of the automatic age. What I offer here is not a history of automaticity, or a history of AI, or a history of anything. There is no history of AI, although there are many histories that could be constructed to explain or track the current configuration of technologies that come under that umbrella. But this is also not a “history of the present,” or a genealogy, that tries to defamiliarize the present moment to produce a critical examination of its supposed necessity through an analysis of its contingent development. There has been much good work in this area, but at the same time, the conceptual or methodological principle is hardly surprising. We (critical humanists) always know in advance that the historical unraveling will reveal, say, the importance of money, or political and institutional support, or exclusions in establishing what is always contingent.

David W. Bates is professor of rhetoric at the University of California, Berkeley. He is the author of three books, including Enlightenment Aberrations: Error and Revolution in France.

An Artificial History of Natural Intelligence: Thinking with Machines from Descartes to the Digital Age is available now on our website or wherever good books are sold.

Originally published at https://pressblog.uchicago.edu on April 30, 2024.

UChicagoPress

One of the oldest and largest university presses in the United States and a distinguished publisher of trade and scholarly books and journals.