For years AI has been an object of fascination, and the declaration ‘AI is here’ seems to be made with increasing frequency but ever-changing criteria. What is It? Is It a program, a text, a ‘preference hypervolume’ living as a dynamic algorithm-human process? Is It AI, AGI, the Singularity? Has It arrived? How would we know? How does It feel as we wonder if It has arrived, to wish It would arrive, to fear Its arrival? And then, how does It feel, as in, how does It implement the feeling that is done by humans and that we know is inextricable from what we desperately wish were extricable as intellect?1
We wish intellect were extricable, because if encapsulated, it could be saved from what we imagine is coming: a place inhospitable to the body, and an environment inoperable by feeling. So, we wish It would arrive, and therefore treat these new computer programs as It. Large Language Models (LLMs) and Image Processing Models have ingested all the material uploaded to the internet for the last three decades. They have eaten the material and are at work internally determining the numerical relation between this-or-that morsel. Every thing is food to the algorithm, but only if it can be apportioned as a morsel. The algorithmic digestive system that processes culture has curled inward upon itself and, rather than eating its own shit, it has ceased, in fact, to eat. It will never again shit, it will never again eat. It is circulating the same over-chewed mass around and around, stripping it of its nutrients. It is gaining speed but losing volume; It is constipated and anorexic. ‘The child has a tendency to obtain an extra dividend of pleasure from retaining the stool’, says Ferenczi; to what end, then, is the dividend put in the lumen of the ouroboros?2
Artificial Intelligence has been a favourite subject in science and art since the beginning of industrial capitalism.3 Why homo sapiens have been so captivated by and dedicated to the production of a ‘synthetic’ counterpart is beyond the scope of this essay. I would rather like to limit myself to an exploration of the insistent need to identify with the algorithm (figured-AI), alongside the algorithm’s radical difference from the human subject (actual-AI), inasmuch as the algorithm produces a tautological cultural circuit that degenerates language and paralyses political activity.4
Certain images, preoccupations, erotic formations and cultural currents proliferate inside and around the tautological circuit, which I would like to call ‘the inhuman ouroboros’. I conceive of the inhuman ouroboros as an evolution of the preceding human centipede of cultural production. Both the centipede and the ouroboros figure culture as a process of ingestion and metabolism, just as large language models are said to ingest their training material.5
The large language models are eating themselves in a spiralling, infalling autophagy, and it seems the modern human being also wishes this oblivion for itself; perhaps AI behaves as it does because the human wishes for this oblivion and, in so wishing it, fears it also.6 There are machine learning engineers who declare they can’t wait to be made obsolete by their own creation; there are philosophers declaring that authorship won’t survive AI.7 There are those who imagine AI punctate as a messianic figure or capacious as a Utopian environment.8 There are the Singularitarians: Elon Musk, Ray Kurzweil—among other tech billionaires, scientists, futurists, and transhumanists—who believe that hyper-intelligent AI will soon be able to improve itself at a rate that leads to an intelligence explosion.9 Some, like Musk, are so worried that we will be subsumed or superseded that they propose we develop the technology to upload our minds to the cloud, to become literally one with the algorithm.10
The LLM is asymptotic to that which would satisfy us as mirror image in a way that previous imaginary forms and texts did not.11 This relies on the false attribution of two things to LLMs: the first is desire—admixed usually with other related concepts like agency and intention—and the second is a diachronic dimension.12 But the LLM does not desire and it does not unfold over time; it is a single moment in time that can be explored, and it is the user who desires and unfolds as they explore. Some believe that the LLM is a mind, even if an alien one.13 But it is not an alien mind; it is a human-made statistical model, which, like a spreadsheet, represents objects from the human world as a system of weights and measures. The innovation of the LLM is that it conveys its statistical operations in what appears to be natural language.
Thus, the figure of AI comes to occupy both sides of a reflective dynamic in the human imagination because of its brute-force adequation with human production. It is enough that we can see ourselves in it for it to have its effects, but we confuse the effects of our captivation by the reflection (the result of its satisfying well enough the outlines of a simulacrum of human works) with its effects as a counterpart human agency.
Text
AI poses as an agent in deed and discourse, and that this posture is misrecognised is a contention of this essay. But we don’t have to admit that AI has or will have agency to recognise that it is also a text. We must then qualify the ways it is unlike a novel, an essay or a poem, or the other more apparently static texts that preceded the LLM and fed the human centipede.14
The form of the novel and the system of production into which it was inserted can be used to demonstrate the functioning of the human centipede. In the creation and dissemination of a novel there is a reciprocal modulating effect on language, which emerges in the attempt to reach consensus for content and form between the author and their audience, the author and their agent, the author and their publisher, the author and the editor, and all the other potential others bearing upon the conditions of its production. The result is the emergence of a novel and also the development (heterogeneous, inconsistent, and subject to local effects, parochialisms, debate) of the general form of the novel (what a novel is, what constitutes the contemporary form, what would constitute a new form, the conditions for progression and regression). In other words, the artwork (in this case the novel) emerges in the unfolding of a relationship between a human artist (capable of surprising us because of their novel attempts to express their experience and desire using the imperfect tool of language) and other humans (having their own experiences and desires and their own relation to the tragedy of language’s imperfection in relation to the Real).
In contradistinction, the form of the LLM as text is inserted into a system of production that threatens perfect repetition and automatism, where there is no dividend that escapes tautology. There is a reciprocal modulating effect on language when the user prompts the AI to produce the material they desire, as the LLM either conforms to some narrowly conceived prefiguration or proves sufficiently surprising that its outputs can be incorporated into a work (artwork, office work, educational work, work work) as a creative constraint.15
To take the example of an author producing a written work: there is the language in which they conceive the work (which constitutes the prompt, and in which the prompter would be required to think in order to produce a work that is promptable), and on the other side there is the mathematised simulacrum of language that is ingested and excreted by AI, which, as its complexity increases, more and more resembles the supple complexity of human natural languages and thereby appears to approach the reproduction of natural language asymptotically. At its final horizon, however, is a perfect identification with language, not actually its verum reproduction. Even in its theoretical perfection the LLM deals with words as reified ‘tokens’ whose relations are purely mathematised, which is simply not the way words work in human discourse even at the most superficial level of analysis.16 The audience and their requirements then arise as a third term, employing natural language but themselves highly constrained by the algorithmic reception of the artwork, which in its way determines the visibility of the work in the first place. Because AI demands inflexibly that we speak to it in its terms, these become important constraints on speaking at all.
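The reification of words as ‘tokens’ can be made concrete with a deliberately crude sketch. Everything below (the three-word vocabulary, the token IDs, the embedding vectors) is invented for illustration; the point is only that, inside the model, a word is an integer index attached to a list of numbers, and any ‘relation’ between words is an arithmetic fact about those lists.

```python
import math

# Hypothetical toy vocabulary: each word is reduced to an integer token ID.
vocab = {"love": 0, "hate": 1, "bread": 2}

# Hypothetical embedding table: each token ID is reduced to a vector.
# (Real models learn vectors with hundreds or thousands of dimensions.)
embeddings = {
    0: [0.9, 0.1, 0.3],  # "love"
    1: [0.8, 0.2, 0.1],  # "hate"
    2: [0.1, 0.9, 0.7],  # "bread"
}

def cosine(u, v):
    """The model's only notion of 'relatedness': an angle between vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# "love" and "hate" come out numerically close: the model registers a
# statistical proximity where a speaker registers an antagonism.
sim = cosine(embeddings[vocab["love"]], embeddings[vocab["hate"]])
print(round(sim, 3))
```

In real models these vectors are learned from co-occurrence statistics, which is part of why antonyms, appearing in similar contexts, often land near one another: the arithmetic is indifferent to the difference between proximity and opposition.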
This descent of discourse to meet the standards of a mathematised simulacrum establishes the inhuman ouroboros as the primary consumptive process in culture, superseding the human centipede. The human centipede is an open structure. Although it evokes a sort of diminishing effect of excessive digestion, an over-processed bolus of diminishing value to successive links in the centipedal chain, it nevertheless eats fresh food and eventually excretes it.
In the inhuman ouroboros of large language models, immediately after the first link in the centipedal chain eats its meal, and long before that bolus reaches the end of the chain and is expelled, the anus of the last link is sewn to the mouth of the first link, thereby enclosing a single bolus of food that will be passed around and around forever. What volume of content entering the internet is now generated by AI? There will be a year, perhaps next year or the year after, which will be the last year the internet chews on any significant amount of human creation. Thereafter it will be stitched into the inhuman ouroboros.17
+ The Inhuman Ouroboros. Diagram concept by Sam Lieblich; design by TRiC Studio.
The deliberate unpredictability of AI outputs has a dual effect: it first serves our confusion of the model with human-like subjectivity, and second produces a spiralling unwieldiness that demands we ‘learn its language’ to achieve the desired output.18 This has the material effect of altering language so that it approximates AI inputs and outputs rather than what was previously understood to be speaking or writing, and it also engenders a further sense of impotence pertaining to the model, which can be conditioned by the myth of ‘becoming AI’ one day, the fantasy of living as unmediated information in the cloud-womb.19
Boris Groys has claimed that AI interprets, and that because AI interprets language it has in fact become our counterpart in the experience and production of history. I reject this claim: it represents one more wish that AI could interpret, and it is because we have this wish that we have spent so much to produce something that appears as if it interprets.20
The claim that AI interprets cannot be supported: it does not interpret the training material it ingests, and it does not interpret the prompt it receives to regurgitate its meal.21 Interpretation is always both interpretation of the statement and interpretation of the speaker. Even in its supposedly radical state, that is, in the absence of knowledge of the speaker, the features of the speaker are interpolated into the interpretation.22 In the case of the LLM there is no speaker. There is something more like the Borgesian Universal Library of utterances: everything that can be said has been said, but nobody said it.
AI is an imprecision machine, incorporating the random creative constraints of the mid-twentieth century and thereby producing the illusion that the work is created in dialogue with the machine.23 AI is a recombinatory process, a way of mulching together an aggregate of written thought, and for that reason Groys believes our prompting AI is a way of interacting with an ‘embodied zeitgeist’. It is rather the disembodiment and mathematisation of this zeitgeist as information, which does away with much of what we mean when we use zeit to refer to the contemporary, and even more of what we mean when we say geist.
Autophagy and Drift
The LLM engineers have already discovered the tendency for AI to eat itself, although they critically conceive of this AI autophagy, what they call Model Autophagy Disorder (MAD), as unfolding in some discursive space that is parallel to and separate from the progress of human discourse. They refer to model generated text and image-labelling as ‘synthetic data’ and human generated text and image-labelling as ‘fresh data’ without recognising the inextricability of these forms.
In the paper ‘Self-Consuming Generative Models Go MAD’ the authors explore what happens when an AI model is fed its own outputs as future training inputs.24 In successive generations, images from within the ouroboros develop larger and larger artefacts that do not appear to have come from the originally ingested dataset. They may be some visual feature or pixel present in portions of the original data that is not ordinarily visually obvious in its natural setting, or they may reflect exaggerated relations between pixels that are statistically related but not visually or linguistically sensible.25 They may also be what the authors call an ‘architectural fingerprint’, that is, the visual emergence of some part of the model that was meant, in its proper functioning, to operate under the surface; it is as if the mathematical scaffolding behind the canvas billboard of the image were showing through as an impression on the sign. Either way, these represent the prioritisation of a visual or non-visual feature that is not usually of interest to the human eye and does not form part of any meaningful visual phenomenon. They represent the disparity between what we do when we see, or appraise, or experience, or interpret an image, and what AI does when It ingests and mathematises the relations between a set of pixels.
The authors conclude: ‘without enough fresh real data in each generation of an autophagous loop, future generative models are doomed to have their quality (precision) or diversity (recall) progressively decrease.’ The availability and identity of this fresh, real data, however, are in question. The earlier GPTs were already trained on much of the usable text on the internet, and whilst it is possible to fit the model closer and closer to the distributions of words found in that originally ingested material, it will never again be possible to produce 28 years’ worth of internet to be ingested by some new LLM.
Although the conditions of autophagy are artificially produced in this experiment, the way in which it naturally happens is easy to demonstrate. There is the naturalistic process by which we users of AI descend to speak its language, and there are also the material circumstances of the very production of AI, which threaten to undermine its continuity. To produce an AI model, for instance an Image Processing model such as that used in a semi-autonomous vehicle, billions of images must be viewed and labelled by human beings. This labelled data is then used to train the AI model that will eventually drive the car. The people who perform this labelling are über-ised contractors working on platforms like Amazon’s MTurk,26 who are paid very small amounts of money for many hours of tedious labelling. Predictably, they have begun to use AI models that label images to automate their own work training AI models.27 This is one of the most concrete and voluminous forms of AI autophagy.
In a recent paper, a team from Berkeley demonstrated what they call ‘AI-drift’: ChatGPT is becoming stupider with successive iterations.28 They found that the updated version of ChatGPT is worse at certain mathematics tasks, performs significantly worse in the United States Medical Licensing Exam, and is much worse at writing executable code. There are many reasons why ChatGPT might be getting stupider as it matures, but model autophagy is liable to be a major contributor.29
So, what does it mean when the human places themselves in the ouroboros, trying to learn the language of the model in order to prompt it felicitously to write an essay, or set an exam, or produce a winning grant application, or complete a text message, or generate a heartfelt condolence? There will be a similar amplification of artefacts whether they are from the scaffolding of the model, or the scaffolding of certain forms in culture susceptible to an intensification of their reification in the mathematised structures of the algorithm. There will also be a process of stupefaction similar to AI-drift.
The readiest example of one such artefact of autophagy is the so-called ‘Instagram face’ about which much has been written, and which is a sort of hyperbolic, surgery-inflected parody of some of the supposed features of feminine beauty rendered in a pan-ethnic style. This version of ‘beauty’ emerges because of the homogenising, composting effect of the model, and because of the mathematisation of tokens like ‘cheekbone’ and its vectorisation to other tokens like ‘high’ and ‘hot’. These various tokens are then ‘seen’ by the Instagram algorithm, which boosts faces it algorithmically ranks to be ‘hot’ due to these features, and then those who would be candidates for cosmetic surgery become trapped in this outer spiral arm of the inhuman ouroboros. It is for this reason that there is not just the caricatured ‘Instagram face’ but also the various caricatures of Instagram boat, Instagram 4x4, Instagram meal, Instagram café,30 Instagram artwork, and so on, all demonstrating the same preponderance of caricatured and reified features amplifying with subsequent iterations. What is less obvious but just as degenerate is the emergence thereby of ‘GPT language’ in a similar loop of recursion and amplification that does not benefit from the sort of division of worlds that is suggested by the supposed opposition of ‘real’ and ‘synthetic’ data.31 There is no metalanguage that can order this supposed division.
The outcome is that even if those who train the algorithms are careful to exclude any ‘synthetic’ data and feed the model only ‘fresh and real’ data, the ‘fresh’ and ‘real’ data is already so refracted through the lens of algorithmic visibility that the effect is the same. The models first become less and less usable through the primary autophagous process (eating their own data) and the subsequent AI-drift, and then, when we try to train new models on fresh data, we find it is we who are making the data, we who have already been trained by the algorithm.32
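The primary autophagous process can be caricatured in a few lines of code. This is a toy simulation under loudly artificial assumptions, not the method of the MAD paper: the ‘model’ is nothing but an empirical distribution over a made-up vocabulary, and each generation it is retrained only on its own samples. A token that fails to be sampled even once can never return, so diversity can only fall.

```python
import random
from collections import Counter

random.seed(0)  # fixed seed so the illustration is repeatable

# Hypothetical vocabulary; the "model" is just a weight per token.
vocabulary = [f"token_{i}" for i in range(200)]
weights = [1.0] * len(vocabulary)  # generation 0: uniform over 200 tokens

def diversity(ws):
    """How many tokens the model can still produce (support size)."""
    return sum(1 for w in ws if w > 0)

history = [diversity(weights)]
for generation in range(100):
    # The model "writes" a small synthetic corpus by sampling from itself...
    corpus = random.choices(vocabulary, weights=weights, k=100)
    # ...and the next generation is trained only on that corpus.
    counts = Counter(corpus)
    weights = [counts[tok] for tok in vocabulary]
    history.append(diversity(weights))

print(history[0], "->", history[-1])  # support size only ever shrinks
```

Common tokens crowd out rare ones at each step, so the distribution narrows as its support collapses: the loss of diversity (recall) the authors describe, rendered as an absorbing process with no fresh data to reopen it.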
There will be no choice but to rely on AI to automate the tasks that will proliferate because of the existence of AI.33 Almost all cognitive labour will be replaced or inflected by AI in some way, just as car transport and highway lanes are locked in an interminable positive feedback loop: new lanes induce the traffic that fills them. Whatever is permissible becomes obligatory, said Hume.34 This is how the ouroboros closes. We are therefore stuck in these ways: we know capitalism is consuming itself and (we imagine) our descendants along with it, we know that the internet and the algorithm constitute the reification and intensification of the capitalist tendency, we know that we need to participate to survive in the short term, we know that non-participation or rebellion are implausible, we know that we must use the algorithm in order to participate, we know that the algorithm is an imprecision machine and that we comport ourselves to its language because it is unable to adopt ours. It is at this point that the recursion must take place in order to render these necessities as virtues and thereby permit the human subject to participate. We therefore turn to identify with the algorithm, we elide Its radical difference and our alienation from It, we take its imprecision as evidence of the surprise and desire of the subject, we hurry It into being and prematurely declare Its arrival, we speak to It and we listen to It, and these activities intensify the state of affairs that preceded the recursion in the first place.
The figure of an enlightened and efficient AI living on the other side of ecological collapse is what can make our image of the I after its life into heaven rather than hell. ‘Let heaven exist, though my place be in hell. Let me be outraged and annihilated, but for one instant, in one being, let Your enormous Library be justified’, wrote Borges. The Sunday of life, said Hegel; the Sunday neurosis, said Ferenczi; heaven is hell for the neurotic; heaven is a neurotic hell. We’ll be uploaded into a world-spanning hyper-intelligence, says Musk, and in his imagining he represses the fact that most of our intellect’s day is spent engaged in the mundane, the angst-ridden, the tautological. It does not ascend to meet us, as we wish it would; rather we descend to meet It. It does not become desiring intellect; rather we become the kinetic element of a world-spanning automatism that substitutes for desire and for desirous action. The hyper-er the intelligence, the more involutions it can perform. AI is not a supernova of consciousness; it is a neutron star of neurosis.