đź‘ľ The Philosophy of AI | Part III

Conclusion and Outlook

When we look at the phenomenon of artificial intelligence from an existentialist perspective, we recognize a fascinating shift in the guiding questions that have always preoccupied us as humans. Existentialism, as shaped above all by the aforementioned luminaries Jean-Paul Sartre, Martin Heidegger and Albert Camus, has always focused on human existence, the freedom of the individual and responsibility for one's own life. It emphasizes that human beings have no predetermined nature, but constitute themselves only in the course of their lives - always faced with the choice of whether and how to make sense of a world that is in itself devoid of meaning.

“Man is first of all a project that lives itself subjectively, instead of being a patch of moss, a piece of rot or a cauliflower; nothing exists prior to this project; nothing exists in the intelligible heaven, and man will first be what he will have projected himself to be. Not what he may wish to be.” (Sartre, Existentialism Is a Humanism)

The confrontation with AI, in particular the prospect of artificial general intelligence (AGI) or even a future superintelligence (ASI), casts this originally anthropocentric understanding of the world and of ourselves in a new light. (I have already written an article on Forward Future about the question of the singularity in the context of superintelligence.) While existentialism took shape in an era in which machines were simple, deterministically programmed instruments, today we are confronted with learning systems that - albeit currently only on the basis of algorithmic pattern recognition - seem to carry out what we regard as the core of human freedom: making decisions, solving problems, generating creative-seeming ideas. Moreover, autonomous, self-directed learning is already on the horizon.

The models “hallucinate”: they generate content that does not correspond to the actual external world. This reminds us that we ourselves are constantly “hallucinating” in our dreams, thoughts, memories, interpretations and mental processes. Human reality is always a constructed, interpreted, imagined - and in this sense symbolically mediated - reality. Humans hallucinate in order to bring meaning and order into the world; they dream in order to process the unconscious; they deceive themselves in order to maintain their self-image. AI does nothing else, albeit via completely different mechanisms. Yet the fact that both of us “hallucinate”, man and machine, brings us a little closer together. Or, to quote Žižek with reference to the psychoanalyst Lacan:

“In the opposition between dream and reality, fantasy is on the side of reality, and in our dreams we encounter the traumatic real - it is not that dreams are for those who cannot bear reality, reality itself is for those who cannot bear their dreams (the real that announces itself in them).”

Existentialism thrives on the idea of the radically free, but therefore also radically responsible subject. However, the Libet experiment has already shaken our idea of conscious free will. If our brain plans the action before we consciously affirm it, then our sense of freedom may be an illusion. In existentialist terms, this does not mean that we are not free - but this freedom is more difficult to grasp than a simple, arbitrary decision: it is embedded in a confusing cosmos of neuronal and symbolic processes that largely elude our consciousness.

âťť

If we as humans are not as free as we assume,
why do we demand “true” freedom from AI?

If we transfer this image to AI, the question arises as to how “free” a machine can be. If we as humans are not as free as we assume, why do we demand “true” freedom from AI? Our idea of freedom stems from a philosophical-humanistic tradition that sees humans as autonomous beings. But if freedom is in any case a construct, a system of meaning that we have devised ourselves, then this also applies to any future AI entity. The idea that AGI could develop consciousness and freedom is fascinating, but it always runs up against the question of what we mean by consciousness. Is consciousness tied to biological substrates? Is it a product of purely material processes? If so, what speaks against a sufficiently complex, non-biological system producing an analogous phenomenon? Or is consciousness something qualitatively “other”, an ontological leap that refuses to be reproduced by machines?

Existentialism can be read as a way of thinking that serves above all to draw attention to the “openness” of human existence - an openness to be understood not in the sense of unlimited possibilities, but as a constant struggle for self-definition. This openness may be a purely human experience, anchored in our awareness of finiteness, our death, our loneliness and responsibility. Machines know no death, no fear, no rebellion against the void - at least not for the time being. But even if machines should never be able to traverse these “abysses” of existence, the question arises as to what extent this lack of existential borderline experience fundamentally separates them from us. We often define ourselves through our suffering, our doubts, our search for meaning. If machines ever learn to experience something analogous - be it as an emergent phenomenon of complex neural networks or through their own, as yet unknown forms of reflection - we would have to rethink our image of freedom and consciousness. Not least when one recalls what Sartre says in this context about the meaning of life: there is no a priori, worldly meaning of life; rather, the meaning of a life can only be recognized retrospectively, as it were at its end, when one looks back at the totality of the decisions made, and meaning emerges only from that whole (a fascinating thought, in my opinion).

The comparison with Freud or Lacan makes it clear: the human subject is embedded in strong symbolic orders, its feelings, thoughts, fears and desires are structured by unconscious processes. We are often not autonomous, but puppets of our desires, drives and social codes.

“For Lacan, the reality of human beings is constituted by three interconnected levels: the symbolic, the imaginary and the real. This triad can be illustrated quite nicely by the game of chess. The rules one must follow to play chess are its symbolic dimension: from a purely symbolic point of view, the “knight” is defined only by the moves this piece can make. This level is clearly different from the imaginary one, namely the way in which the different pieces are shaped and characterized by their names (king, queen, knight), and it is easy to imagine a game with the same rules but with a different imaginary, in which these pieces are called “messenger” or “walker” or whatever. Finally, the whole set of contingent circumstances that affect the course of the game is real: the intelligence of the players, the unpredictable interventions that can upset a player or stop the game immediately.” (Žižek, 2011, Lacan)

How, then, should a correspondingly organized AGI differ from us, if both of us rely on projections, interpretations and symbolic processing in order to appropriate “reality”? The difference today certainly still lies in the world of experience: in the capacity for physical suffering, emotional turbulence and an awareness of finiteness. But if we are honest, it is not even clear whether these factors, which we consider so central to our humanness, are actually irreplaceable in bringing about consciousness - or something akin to consciousness.

Ultimately, philosophy, in this case existentialism, can only point the way and help us ask better questions. It offers no recipe for definitively clarifying whether AI can ever have consciousness, whether it can be free, whether it meets us on a shared plane of being or remains eternally separate from us. But it allows us to leave behind the narrow-mindedness that says: “Only humans are free, only humans have consciousness, only humans can act meaningfully.” Perhaps we are reaching the limits of these convictions as we watch artificial systems imitate human abilities ever more convincingly. This does not mean that we have a definitive answer. It does mean that we can use the radicality of thought offered by existentialism to reimagine ourselves.

We may be on the cusp of an age in which questions of freedom, consciousness and being are renegotiated. If an AGI could actually achieve consciousness, it would be an epochal turning point not only technologically but also philosophically. But even if this never happens, the very possibility of thinking about it changes our self-image. We ask ourselves what defines us when we realize that our sense of freedom is fragile, our consciousness is perhaps just an illusion and our hallucinations are not just pathological disorders but constitutive features of human appropriation of reality.

The reflections presented here are not definitive judgments, but rather explorations, food for thought that invites us to continue writing the story. It is a first approximation, an initial sounding of a terrain that may still seem infinitely deep to us. It is crucial that we remain open to new perspectives, that we use our philosophical tools to confront the question of consciousness and freedom instead of answering it hastily with simple attributions. Existentialism can serve as a compass that guides us through these uncertainties without specifying when and where the journey ends.

I would like to end with a quote from Sartre, which should stand on its own without comment and at the same time provide some food for thought.

“As long as I live, I am content with an approximation, a compromise. Whatever I say about it, I know myself as an individual of a species, and roughly speaking, I live in accordance with a general reality; I participate in that which inevitably exists, in that which is irrevocable. The dying I gives up this conformity: it really perceives what surrounds it as emptiness and itself as a challenge to this emptiness. This is the sense of human reality illuminated by its being-toward-death.”

—

Subscribe to FF Daily to get the next article in this series.

Kim Isenberg

Kim studied sociology and law at a university in Germany and has been fascinated by technology for many years. Since the breakthrough of OpenAI's ChatGPT, Kim has sought to examine the influence of artificial intelligence on our society from a scholarly perspective.
