Illustration by Andrei Pacea

Artificial? Naturally!

Alec Bălășescu

Abstract

This article invites the reader to follow seemingly unrelated paths towards the same goal: making sense of what it means to be human in a world that casually blends discourses on nature, technology, and biology, with the ideas of progress, optimisation, and their capitalisation at their centre. Within this type of thinking, the challenges posed by climate change could be addressed technologically, the dream of ecological capitalism could continue ad infinitum, and Artificial Intelligence would be instrumental in fulfilling this promise. A closer look at the politics of optimisation, within and beyond managerial perspectives, may teach us otherwise: one of the main sources of our repeated failures related to governance and climate change lies not in the intrinsic qualities of the tools we use, but in the underlying assumptions with which we design them and the purposes for which we use them. Caught between the rock of technology and the hard place of nature, humanity needs to find a new way of relating to both in order to avoid being squashed. That is, we need to revise the implicit assumptions on which we build our tools, to critique our thinking about our relationships with them, and to reassess their use, in order to move away from the illusion that the effects of climate change can simply be mitigated, and from the real danger of perpetuating the status quo of capitalist extractivism under the guise of an ecological one.

Alec Bălășescu

Author

Alec Bălășescu is an anthropologist (PhD, University of California, Irvine) and the author of “Paris Chic, Tehran Thrills. Aesthetic Bodies, Political Subjects” (2007), “Voioasa expunere a Ordinii Mondiale” (“The Joyous Display of World Order”, 2010), and “Într-o zi, orașul. Adoptă un canadian” (“The City, One Day. Adopt a Canadian”, 2019). He teaches at Royal Roads University in Victoria, Canada. He is a co-founder of Nature, Art and Habitat, Italy (nahr.it), Moving Matters Travelling Workshop (UC Riverside, California), and Re-genera (Canada and Romania – re-genera.eu). He prefers the practical side of the anthropological method.

Andrei Pacea

Illustrator

Born and raised in Codlea, Romania, Andrei Pacea is a third-year illustration student at the Willem de Kooning Academie in Rotterdam, the Netherlands. His work is constantly developing, but his current focus is on better blending text with image. He draws inspiration from architecture, nature, and pop culture, and creates both digital and traditional art. He has collaborated with, among others, Decât o Revistă, Super Film Festival, and Forum Apulum.

Daniel Popa

Actor

Daniel decided to become an actor so that he could experience feelings and events that otherwise wouldn’t fit into one lifetime. He has collaborated with the Bulandra Theatre and the Monday Theatre @ Green Hours and has attended many national and international festivals. Since 2013, he has been performing in projects written, translated, or directed by himself and produced by the Doctor’s Studio Cultural Association, which he also founded. Daniel doesn’t know whether this is the way to approach new forms of artistic expression; what is certain is that he distances himself from the old ones.

This article invites the reader to re-evaluate how we relate to nature, culture, and technology in a society organised around “extractivism”: extracting resources of all kinds and using them for unlimited growth. Looking at the triad of nature, culture, and technology, the article argues in favour of harmonising these elements to provide a stable foundation for the continuation of life on Earth. The triad can just as easily be a recipe for catastrophe: the choice is ours. Historically, the technologies we have designed to meet the challenges facing humanity have, in turn, remade us as humans. If we look carefully at the politics of optimisation within and beyond managerial perspectives, we will find that one of the main sources of our repeated failures related to governance and climate change lies not intrinsically with the technology, but with the purpose for which we design it and the way we use it.

The conclusion will follow naturally: the mental tools used to construct our instruments need to be re-examined, critiqued, and re-evaluated in order to move away from the real danger of perpetuating the status quo of extractivism.

Two seemingly opposing trends are shaping the world today: on the one hand, the universal appeal of technology as both ultimate saviour and existential threat; on the other, the centrality of a return to nature in meeting the challenges of climate change.

With the promise of automation called Artificial Intelligence (AI), we are ready to imagine a world that transcends, for better or worse, humanity. This image of technology reminds us of the magical thinking around natural phenomena—except this time the object of worship is itself man-made. We seem to be slowly walking towards technological animism. At the same time, climate change—a clear existential threat—is being addressed mainly as a “problem to be solved” with “targets to be met,” via a managerial approach with undertones firmly rooted in the extractivist worldview.

When AI is used as a tool for understanding and mitigating climate change, we must first understand how we conceptualise “nature” and how this conceptualisation shapes the way we apply AI models to the goals at hand. When AI is presented as a solution for creating a harmonious and peaceful society with widespread wealth and zero risk, we are not only one step away from dystopia, but also heading vertiginously towards a state of total surveillance. The dilemma is this: how can we address the use and abuse of AI in a contextual manner?

Matter matters

The Fourth Industrial Revolution has brought a false sense that our world—at least the part of it that matters economically—is virtual. AI amplifies this impression, creating the belief that everything can sooner or later be dematerialised, from financial markets to life itself. The virtual seems to be the new real, from cocktails and sex to education and spirituality. Misleading metaphors such as “the cloud” perpetuate this perception. We often neglect the fact that every byte of data is material: it needs infrastructure to move, occupies space to be stored, and consumes energy when processed. Without the material support of servers and cables, the workers who build and install them, the technicians who care for them, and the vast quantities of water and energy that cool them—and without constant reference to this material world—the virtual one would not exist.

The COVID-19 pandemic is the latest reminder of the importance of this materiality. Beyond remote working and its idiosyncrasies, the economy grinds to a halt if the actual movement of goods and people stops. Only three years into the pandemic can we see how differently its effects are distributed across the various levels of society. Digitisation hides the materiality of the virtual world and shifts its costs to the global South, fuelling the narrative of “clean, green capitalism.” By maintaining the illusion of immateriality, AI reinforces it.

Language is of crucial importance in this type of knowledge based on the abstraction of reality. For example, one of the major discussions in AI modelling concerns “drift” (data or concept drift)—terms used when the model underlying an AI system, despite continuous learning, no longer corresponds to an ever-changing reality, causing the system to malfunction. When we say that the data or the reality has “drifted,” we unwittingly create the expectation that reality should constantly behave according to the model. The language itself betrays a distorted, even inverted, view of “reality” as something that ought to correspond to the patterns we create in order to make it predictable and therefore exploitable.
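To make the point concrete, here is a minimal sketch of how such drift is typically monitored in practice—assuming, purely for illustration, a model trained on historical temperature readings. It uses a two-sample Kolmogorov–Smirnov test to ask whether incoming data still resembles the distribution the model was built on; note how even the code frames the question as reality deviating from the model, never the reverse. All names, numbers, and thresholds here are hypothetical.

```python
import numpy as np
from scipy.stats import ks_2samp

# Hypothetical illustration: the distribution a model was trained on
# versus the "reality" it now encounters.
rng = np.random.default_rng(42)
training_temps = rng.normal(loc=15.0, scale=5.0, size=10_000)  # historical readings
live_temps = rng.normal(loc=17.5, scale=6.0, size=1_000)       # warmer, more volatile present

# Two-sample Kolmogorov–Smirnov test: has the live distribution
# "drifted" away from the one the model assumes?
statistic, p_value = ks_2samp(training_temps, live_temps)

if p_value < 0.01:
    print(f"Drift detected (KS = {statistic:.3f}): reality no longer fits the model.")
else:
    print("No significant drift: the model's assumptions still appear to hold.")
```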

This way of knowing, which has come to seem natural throughout modernity, performs a fundamental break in the flow of things, so to speak. Essentially, in order to measure anything, we first re-create it according to our interests and separate it from the rest of the system. In other words, this kind of thinking extracts a fragment of a phenomenon, renders it measurable, and re-presents it as the truth about that phenomenon.

Body matters

AI and its effects on perceptions of the human body are both ubiquitous and almost never analysed. The body, however, has a twofold, reciprocal relationship with AI systems: the various constructs of the human body, from the medical to the racial, are a driving force behind the design and development of AI, while the human body itself, and how we perceive it, is altered as a consequence of the widespread use of AI.

How do different conceptions of the ideal human body interact in the design and application of AI? And how does the use and misuse of AI reflect on our understanding of the human body? Our bodies become an aggregate of data about our daily calorie intake, daily steps, minutes of electronically enhanced attention, muscle mass or heart rate while, say, exercising on a treadmill.

This perception is shaped by, and in turn shapes, the standards embedded in a wide range of applications, from wellness to the healthcare industry. AI is reordering and reshaping our relationship with our bodies. On the one hand, bodies are becoming data generation hubs for the digital economy that brings the health and wellness industries under its umbrella. On the other hand, we are beginning to perceive our bodies as vehicles for constant improvement; resources to be tapped into in our quest for “growth,” a better job, a better partner or, self-reflexively, a higher state of well-being. The physiology of the body becomes an economy of constant self-improvement.

This economy comes to a sudden halt, however, when faced with death. AI and biotechnology whisper that it doesn’t have to be that way. The perception of finitude as a problem is a cultural peculiarity of Western modernity. Throughout the short history of modern thought and scientific practice, this perception has generated endless efforts to overcome death. The most recent attempt at immortality lies in AI’s promise to transcend the human body and upload consciousness into a machine, as in Greg Egan’s science fiction novels, while biotechnology promises the possibility of eternally rejuvenating that same body. The seduction of immortality eventually creeps into our portable devices through the apps we compulsively download, obsessively use, and rarely delete, even when we have forgotten about them.

Knowing nature

AI applications come not only with the promise of supremacy over our bodies, but also with the promise of total control over nature itself. We want to believe that AI can somehow give us the power to master nature simply by measuring and managing it. But how have we come to believe that nature is separate from us, measurable, and manageable? And how is this belief reflected in the way we generate knowledge about and understand nature today? What do we consciously or unconsciously leave out in order to build the models and algorithms whose predictions are regularly proven wrong?

We rely on collecting massive amounts of data and imagine that this is equivalent to knowing what is happening in nature. Measuring wind speed and rainfall in a storm leads us to create complex classifications that allow us to characterise the phenomenon and assign numbers to it: a Category 3 hurricane or a Category 4 typhoon—signs that carry the mirage of management and control. They almost become lullabies; we know what to expect. This is a very particular kind of knowledge, handed down to us from the tradition of modern Western thought and based on the presumption of predictability and complex probabilistic causal chains. Since the Age of Enlightenment, numbers, rationality, and nature have been inextricably linked in Western scientific thought. Numbers have a history, and they came to mean “truth” at a very specific moment—the Renaissance, with the adoption of Arabic numerals—when data, facts, and reality were brought together under the roof of natural laws with the simple statement “this is how nature works.” The history and core of this belief have rarely been questioned, despite numerous calls for both an awareness of the biases inherent in algorithm design and a more ethical use of AI.
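As a concrete illustration of this classificatory gesture, here is a minimal sketch that reduces a storm to a single number using the publicly documented Saffir–Simpson wind-scale thresholds. Everything the code ignores—the storm’s path, its timing, the lives and places in its way—is precisely what the paragraph above says gets left out.

```python
# Saffir–Simpson hurricane wind scale: minimum 1-minute sustained
# wind speed (mph) required for each category.
SAFFIR_SIMPSON = [(74, 1), (96, 2), (111, 3), (130, 4), (157, 5)]

def hurricane_category(sustained_wind_mph: float) -> int:
    """Return the storm's category, or 0 if below hurricane strength."""
    category = 0
    for threshold, cat in SAFFIR_SIMPSON:
        if sustained_wind_mph >= threshold:
            category = cat
    return category

# A storm with 120 mph sustained winds becomes "a Category 3 hurricane"--
# a sign that carries the mirage of management and control.
print(hurricane_category(120))  # -> 3
```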

What happens when we relate to nature as a measurable entity in a quasi-digitised world? The same thing that happens to our bodies when they disintegrate into datasets displayed on our portable screens. Fundamentally, our first relationship with nature is our relationship with the perception of our own body.

And in this techno-scientific paradigm, both nature and our bodies deceptively become manageable resources to be exploited for other things (usually profit, pleasure, and power over others and ourselves). What we lose sight of is that, beyond measurement, our bodies are ourselves and we are nature.

Surveil and reward

Where does nature meet “human nature”?

Today’s humanity is the by-product of the modern invention of nature through a system of surveillance, measurement, classification, and monitoring. We have repositioned this newly found humanity between the anvil of nature as the healing realm par excellence and the threatening hammer of that very same nature. We want our products to be organic, bio; we want to grow our own food, but we want to do it in an urban garden. We claim that we want a return to nature, but a nature cleansed of its wild side. We fear nature because it is a constant reminder of our own ephemerality, so we try to erase its traces.

On the one hand, we celebrate the return to nature as a remedy for the general malaise of our modern lifestyle; on the other, we feel the need to analyse and control any aspect of our behaviour that would throw us back into our “evil nature.” But what is it about our nature that frightens us? What is this beast that needs taming? The immediate answer would be the “crafty” behaviour to which some claim we are naturally prone. From the fear of “letting go” of bodies that would naturally gain weight or decay, to the fear of letting loose the predatory, individualistic instincts that supposedly drive many of us automatically into delinquency, the struggle against this nature is pervasive. Our devices analyse our behaviour, internal and external, in the same way we analyse our natural environment with justified anxiety about the effects of climate change. We seem to think that this whole system will keep us all normal and ensure the perpetuation of life within our current parameters.

The other possible answer is that deviation from the imposed norm actually scares us because it is only a symptom, an indicator of the possibility of death. Deviation from the norm of capitalism is death itself: death as the finitude of the human body, death as planetary extinction caused by climate change, death as the end point in the complex system of debt and finance.

The current consensus is that, globally, we are living in the era of surveillance capitalism, which takes a variety of forms and names. Unlike the classical view, which associates surveillance with punishment, in today’s capitalism instant gratification or the promise of reward replaces punishment.

In techno-capitalism, the reward system is designed both to generate data and to domesticate through constant surveillance, and is in fact a reflection of our relationship with the concept of nature—both human and non-human.

If nature has historically been a product of observation and classification, our aspiration to build a perfect society returns politics to nature as we have defined it—the famous natural order. Unfortunately, human classifications are not neutral; on the contrary, inequality and hierarchy are embedded in them.

Just as nature is surveilled, measured, and supposedly saved from climate change by data-generating sensor networks, so we are enrolled in surveillance systems in order to be saved from our own nature—and, ultimately, to be made immortal. The argument here is that the surveillance-and-reward system in today’s governance of populations has the ultimate goal of erasing finitude through oblivion. In the process, we recreate and operate with highly problematic classifications of human beings and re-naturalise them with technology. Furthermore, the planetary system is a complex and open system that cannot be approached with the tools appropriate for closed, complicated systems, as we do now. Which leads to a paradox: if we kill death, we kill nature.

Where to, now?

Climate change cannot be managed within an extractivist paradigm, using AI technologies to regulate the natural system. This idea is a mirage generated by the illusion of complete knowledge—if only we extracted as much data as possible—combined with the belief that models overlap perfectly with reality and that by managing the model we will manage reality. The truth is that sooner or later reality will deviate from the model. As the statistician George Box put it: “All models are wrong, but some are useful.” The secret is to discern which models are useful and where they apply. For now, in the face of AI, we behave like a toddler with a hammer: every object becomes a nail. Moreover, we blame the hammer, looking for the spirit, the ghost in the machine, because language already calls it intelligent (a misnomer), so by default it should have a consciousness. The news is this: one, we will have to look hard to tell nails apart from fragile objects in front of the AI hammer; and two, we cannot let the hammer identify the nails, nor AI identify its own purpose. That is because the ghost in the machine is us. If we want AI to form its own consciousness, we will first have to develop our own.

We would like to thank Ioana Miruna Voiculescu for her useful proofreading and suggestions to ensure style consistency and improve readability across the texts published in English.