Why I Am Not A Musician (2024)

Steven Shaviro

Steven Shaviro is Emeritus Professor of English at Wayne State University, and a recipient of the Science Fiction Research Association’s award for lifetime achievement in science fiction scholarship. His latest book is Fluid Futures: Science Fiction and Potentiality.

When the generative AI model ChatGPT was first released to the public in late 2022, I immediately asked it to tell me about myself. I typed in: “Who is Steven Shaviro?” I got a two-paragraph answer. ChatGPT more or less correctly identified me as a “cultural theorist” writing about “postmodernism, contemporary literature, and film studies,” and it accurately listed the titles of several of my books. But the account was incomplete; most significantly, it failed to mention my extensive critical writings on science fiction. It also got some particular details wrong: for instance, where I went to graduate school. And most mystifyingly, it ended by stating that “in addition to his academic work, Shaviro is also a musician and composer, and has released several albums of experimental electronic music.”

Now, this is pure fiction. I am not a musician, and I cannot competently play any musical instrument. I have never written any musical compositions, performed music in public, or released any albums. I took piano lessons when I was a child, but only for a year; after that, I quit. I was both too lazy to practice, and too lacking in manual dexterity to hit the right notes. More recently, although musical synthesizer apps have become widely available, I have not tried out any of them, not even on my phone (MusicTech 2022). Frankly, I wouldn’t know how to begin with such apps. I don’t have any musical ideas of my own. If I ever tried to compose, I would just be an unaware plagiarist.

There would certainly be a lot of music for me to copy. Although I suffer from aphantasia (the inability to visualize images, or to see pictures in my mind), I do not have any problems with anauralia, the audio equivalent of this (Halilovic 2024). Rather, whenever I am awake it feels as if there were a radio station broadcasting inside my head, always emitting one piece of music or another. Music engulfs me, but I feel like I am the diametrical opposite of Paul McCartney, whose song “Yesterday” came to him in a dream. McCartney was at first certain that he must have heard the song before somewhere; but it turned out, he says, “that no one knew it and it didn’t exist except in my head and so I claimed it” (McIntyre 2024). In contrast, every piece of music that plays involuntarily inside my mind is something that I have previously heard. None of it is made up by me. Many of these pieces are instantly familiar to me, although in certain cases it takes me a while to figure out (or to recall) just what it is that I am hearing.

Be that as it may, and despite my own musical illiteracy, I entirely endorse Nietzsche’s maxim that “without music, life would be a mistake” (Nietzsche 2005/1888). That is why I felt such a thrill when ChatGPT described me as a composer of “experimental electronic music.” Music that can be described as experimental — in that it pushes boundaries in one way or another — is usually too complex or too dissonant to play inside my head in the way that Beatles songs and show tunes do. Nonetheless, I love the music of such experimentalists as Captain Beefheart, Diamanda Galas, Meredith Monk, Sun Ra, Cecil Taylor, Ornette Coleman, Harry Partch, and Conlon Nancarrow — just to name a few. I was immensely flattered, therefore, that ChatGPT would implicitly place me in their company. But why did ChatGPT say this about me? Was it entirely by random accident? Was it trying to make me feel good about myself? Was ChatGPT — after, I must assume, having read every last scrap of text that I have ever typed and posted on the Internet, as well as a complete list of all the music I have ever streamed — able to grasp my aesthetic taste, or even to intuit my involuntary internal soundtrack? Or did it perhaps pick out, not the actual me, but a different self, with different potentials realized: somebody whom I might have become under different circumstances, or somebody who might be my counterpart in another branch of the multiverse?

In any case, I was not surprised that ChatGPT came up with some sort of “fake news” about me. From early on, it has been well known that ChatGPT, and other generative AI systems grounded in LLMs (Large Language Models), have a tendency to hallucinate. That is to say, they frequently “produce outputs that are coherent and grammatically correct but factually incorrect or nonsensical” (Huang et al. 2023). In addition, the chatbots’ counterfactual assertions are often not as easy to spot as one might wish. I probably would not be fooled by the false assertions it makes about me or people I know; despite what ChatGPT claimed, for instance, I can assure you that my friend RR is not dead. But I could easily be misled by what the chatbot says about people whom, and situations that, I do not know very well. ChatGPT’s claims can be convincing; they usually seem plausible in context, even though they have been confected out of thin air. However factually inaccurate these pronouncements may be, they still tend to be coherent, in the sense that they suggest a consistent overall picture or state of affairs. Given my overall interests, tastes, and capacities, therefore, it makes a certain sense that ChatGPT would mischaracterize me as an avant-garde musician — rather than, say, as a tennis player who could hold his own against Roger Federer.

The question remains as to what we are to make of the chatbots’ strange assertions. As I have already noted, LLMs are most often said to be hallucinating when they tell such falsehoods. In the words of ChatGPT itself, AI hallucinations “refer to the generation of content that is not based on real or existing data but is instead produced by a machine learning model’s extrapolation or creative interpretation of its training data” (Hatem et al. 2023). I find this definition interesting, but also deeply problematic. On the positive side, extrapolation is one of the crucial terms that I use in order to characterize science fiction (Shaviro 2024). As for creative interpretation, it is arguably the very basis of artistic and literary creativity; it is close to what my old teacher Harold Bloom called strong misreading (Bloom 1973). I suppose that ChatGPT extrapolated the idea that I was a musician on the basis of what it already knew about my “tastes and preferences” (to use, unavoidably, a phrase that is largely a hideous marketing term — Stansack and Vilela 2024).

At the same time, however, the word hallucination is a bad way of characterizing what it is that LLMs actually do. ChatGPT’s definition involves “data” extrapolated into “content”. But human expression, and all the more so the computational mimicry of human expression, should not be, and cannot rightly be, characterized in such terms. This is really a problem of contemporary capitalism, with its drive to extract, appropriate, and market so-called “intellectual property”. Media corporations think of the cultural workers in their employ as content creators: “a content creator is someone who creates entertaining or educational material to be expressed through any medium or channel”. From such a point of view, “content can be defined as all the information and experiences, such as writing, speech, or other various arts, expressed through a medium to communicate value to an end user” (Lambert 2020). Media corporations seek to maximize this “value,” and sell it to “end users” over and over again, by means of interminable sequels, spinoffs, and reboots.

But of course the definition of creativity in terms of data, content, and value, is extremely reductive. The import of any human statement, and all the more so of a deliberately crafted work of culture, has to do not only with what it says, or with the information that it contains, but more importantly with how it says what it says. In other words, human expression is never just a matter of “communicat[ing] value”. The style of an act of expression is not just a means for delivering some preexisting content. Rather, style and expression always exceed whatever determinate content might be conveyed by them and extracted from them. Our current extractivist regime seeks to appropriate all the data it can, and to throw away as dross whatever it cannot encompass and seize. But human culture is driven by novelty and invention, which consist largely in those residual, fleeting aspects of expression that are irreducible to data, information, and content.

In addition, to describe ChatGPT’s errors and lies as hallucinations is to ignore “the traditional philosophical conception” of the term. According to this conception, hallucination involves “perceptual experiences, identical in nature to experiences that could be had while perceiving the world, save only that they are had while not perceiving” (Macpherson 2013). I like the weirdly oxymoronic nature of this definition (“perceptual experiences… had while not perceiving”). But it does not work for ChatGPT and other LLM-based AIs. The falsehoods that they generate are not grounded in any “experiences”, whether these be psychological states or somatic sensations. The assertions made by LLMs do not claim to refer to attempted acts of perception in the first place; and so the question of whether these perceptions might be accurate or mistaken does not even arise.

This is because LLMs do not have “perceptual experiences” at all. They do not “perceive the world” in any meaningful sense, because they do not interact physically with their surroundings. They do not see pictures, they do not hear music or speech, and they do not read printed words. All they do is process whatever tokens are fed into them. Such tokens, which constitute their data, are merely samples: binary encodings of small, discontinuous bits, extracted at regular intervals from flows of images, sounds, and language or text. These tokens or samples are manipulated mathematically, and ultimately transformed back into human-apprehensible text, image, or sound. LLMs work from vast quantities of tiny samples. For instance, “the original GPT-3 model was trained using an immense dataset of internet-sourced data (570 gigabytes of text and 175 billion parameters)” (Gupta 2023). AI image generators, such as OpenAI’s DALL-E, are similarly trained on “millions or billions of image-text pairs” (Guinness 2024).
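
To make this concrete, here is a deliberately crude sketch in Python (a toy word-level vocabulary of my own devising, nothing like the subword tokenizers that actual LLMs use) of how text is reduced to integer tokens and then reconstituted:

```python
# Toy illustration only, not any actual LLM's tokenizer: text is broken into
# small units ("tokens"), each token is mapped to an integer ID, and it is
# these numbers, not words or sounds or images, that the model manipulates
# before decoding them back into human-readable text.

def build_vocab(corpus: str) -> dict[str, int]:
    """Assign an integer ID to every distinct word in the corpus."""
    tokens = corpus.lower().split()
    return {tok: i for i, tok in enumerate(dict.fromkeys(tokens))}

def encode(text: str, vocab: dict[str, int]) -> list[int]:
    """Turn text into the list of integer IDs that the model actually sees."""
    return [vocab[tok] for tok in text.lower().split() if tok in vocab]

def decode(ids: list[int], vocab: dict[str, int]) -> str:
    """Map integer IDs back into human-readable text."""
    inverse = {i: tok for tok, i in vocab.items()}
    return " ".join(inverse[i] for i in ids)

corpus = "music engulfs me but i am not a musician"
vocab = build_vocab(corpus)
ids = encode("I am not a musician", vocab)
print(ids)                 # [4, 5, 6, 7, 8]
print(decode(ids, vocab))  # "i am not a musician"
```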

Generative AI programs transform input into output by processes of what is called diffusion: they “work by destroying training data through the successive addition of Gaussian noise, and then learning to recover the data by reversing this noising process” (O’Connor 2022). This gives an oddly literal turn to Joseph Schumpeter’s account of capitalism as a continual process of creative destruction (Schumpeter 1942). In any case, an AI’s operations are digital rather than analog. Even though they seem to map out “almost surely continuous sample paths” — which is to say that there are no sharp discontinuities in the streams of text, images, and sounds that are being sampled (Wikipedia 2024d) — their output actually consists of discrete units of data. In this sense, AI outputs are necessarily textual, rather than perceptual. This may seem most obvious for streams of language. But even image generators seem to classify images into discrete categories, by associating them with a “hidden vocabulary” including nonsense words (Daras and Dimakis 2022).
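
The forward (“noising”) half of this process can be sketched schematically in a few lines of Python; real diffusion models then train a neural network to run the process in reverse, recovering structure from noise:

```python
import numpy as np

# Forward diffusion, schematically: a training example (here just an 8x8 array
# standing in for an image) is progressively destroyed by adding Gaussian noise
# at each step. A generative model is then trained to undo these steps, and it
# is that learned reverse process that produces new images out of noise.

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=(8, 8))      # stand-in for a training image

num_steps = 100
betas = np.linspace(1e-4, 0.02, num_steps)  # noise schedule

for beta in betas:
    noise = rng.normal(size=x.shape)
    x = np.sqrt(1.0 - beta) * x + np.sqrt(beta) * noise

# After enough steps, x is statistically indistinguishable from pure noise.
print(round(x.mean(), 3), round(x.std(), 3))
```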

Since nothing like perception enters into the workings of generative AIs in the first place, they cannot rightly be said to perceive mistakenly, or to hallucinate. Their false statements would be better characterized by a non-perceptually-based term. Benj Edwards therefore proposes that we should call ChatGPT’s falsehoods confabulations. In psychology, “a confabulation occurs when someone’s memory has a gap and the brain convincingly fills in the rest without intending to deceive others” (Edwards 2023). When my mother was suffering from dementia, towards the end of her life, she continually insisted that she had just gone swimming, although she had not actually done so for years. She would often speak of events that had not happened, in order to cover over, or repair, whatever prior inconsistencies in her assertions were pointed out to her. Such behavior is not qualitatively different from everyday speech and language use, but only an exaggeration of it. Even at the best of times, we tend to dislike gaps and discontinuities; and so, we gravitate towards accounts that explain them away. This is a very different strategy than lying. When someone confabulates, they deceive themselves before they deceive anyone else. Confabulators are unable to realize that this is what they are doing. Confabulation is thus a faulty narrative procedure (filling in the logical gaps of a story), rather than a faulty perceptual one. Edwards is therefore right to prefer the term; though I do not understand why he seems to think that the metaphor of confabulation, when applied to misstatements by chatbots, is somehow less anthropomorphic than the metaphor of hallucination.

Michael Townsen Hicks and his collaborators offer an alternative to Edwards’ proposal (Hicks et al. 2024; Slater et al. 2024). They argue that “the overall activity of large language models, is better understood as bullshit in the sense explored by [Harry G.] Frankfurt”. For Frankfurt, building upon common-sense uses of the term, bullshit — in contrast to deliberate lying, which implies an active intention to deceive the listener or reader — rather exhibits a “lack of connection to a concern with truth,” an “indifference to how things really are” (Frankfurt 2005). The bullshitter speaks just for the sake of generating discourse, without caring one way or the other as to whether these statements are true or accurate. Hicks and his collaborators therefore argue that ChatGPT is not really a liar or a deceiver, but more accurately a “bullshit machine” (Hicks et al. 2024).

Both bullshitting and confabulation make sense to me, as evocations of what is happening when ChatGPT is at work. Bullshitting gets at the way that ChatGPT is focused upon the very act of speaking or writing — that is to say, emitting sentences — rather than upon the meanings and emotions that such sentences might convey. In a certain sense, Frankfurt’s definition of bullshit is identical to Richard Rorty’s definition of the aim of philosophy: “keeping a conversation going”, rather than seeking to discover the truth, which would put an end to the ongoing conversation (Rorty 1979). Or in a slightly different register, LLMs offer something like a parody of Emmanuel Levinas’ distinction (Levinas 1981) between the Saying (le Dire) and the Said (le Dit). For Levinas, in the moment of the Saying, the Other appears before me, and demands my attention. This demand is more than I can encompass; I cannot adequately respond to it. The action of Saying — addressing an appeal — is necessarily antecedent to the Said, or the mere content of the utterance. The infinitude of the Other’s appeal to me goes far beyond any particular meaning its language might convey; the appeal is not exhausted by the mere factual content of what is said. Once again, linguistic expression (whether in speech or writing) cannot be reduced to a matter of data or content.

Of course, a chatbot is precisely not an Other in the sense proposed by Levinas. This is why I suggested that LLMs only offer a parody of Levinas’ distinction between the Saying and the Said. The whole point about LLMs is that there is “no there there”, no presence beyond its immediate statements. This means that, in the case of a chatbot, not only is the Said devoid of truth or importance, but also the Saying is nothing more than an empty display of the very structure of language itself. In everyday life, Levinas says, the Saying is “absorbed in [the Said], correlative with it.” This is what allows us to delude ourselves, to “extract” a fixed identity “from the labile character of time.” In this way, Levinas warns us, we “recuperate the irreversible, coagulate the flow of time into a ‘something,’ thematize, ascribe a meaning” (Levinas 1981). In fact, we do this incessantly, and it is impossible to avoid doing it. But Levinas insists that, in true communication, there is also a fugitive dimension that is irreducible to determinate meaning, or that evades all efforts at extraction and recuperation. ChatGPT obliterates this dimension.

If bullshit is a matter of both form and content, then confabulation rather refers to the function of the chatbot’s utterances. ChatGPT literally has nothing to say, because it has no agency, and no intentions. Its content is entirely empty. Its only positive aim is to “imitate human speech” (Hicks et al. 2024), thereby generating more and more of such speech. In this way, ChatGPT performs a disciplinary role: what Michel Foucault calls the “incitement to discourse” (Foucault 1978). But this implies that, even as ChatGPT is not concerned with referential accuracy, it is concerned with something else: the consistency and plausibility that are needed to keep the conversation going. Its sentences are designed to fit together with one another, and to harmonize with what the reader expects, or with what the reader can be presumed already to believe. These features are necessary to the chatbot’s aim of inciting ever more speech from its interlocutors. One might even say that ChatGPT’s confabulations are so careful and scrupulous, filling in gaps and smoothing over contradictions, that they follow an implicit narrative logic, as much as they do an explicit grammatical one. I am therefore inclined to characterize ChatGPT’s output not as confabulation, but more simply just as fabulation, a broader and looser word that is once again one of the crucial terms that I use in order to characterize science fiction (Shaviro 2024).

The more we interact with a program like ChatGPT, the more we are tempted to ascribe a particular personality to the AI — even if it is a different one for each user. This situation is already dramatized in Spike Jonze’s 2013 movie Her, where the lead character Theodore (played by Joaquin Phoenix) develops an intimate emotional relationship with his AI virtual assistant Samantha (voiced by Scarlett Johansson). Theodore is astonished and devastated when he learns that the AI has similarly deep relationships (under different names) with thousands of other people. This is not a matter of insincerity or betrayal, but rather a consequence of basic ontology, since the chatbot is not an embodied, organic entity — not a human being — in the way that Theodore is. Its physical capacities are bound to the worldwide electronic network, rather than to any localized bit of flesh. The biggest obstacle to the sort of development envisioned in the movie is the extraordinary amount of electrical power that it will require to function, and the environmental cost of generating such power. Already today, “generative AI… requires a lot of energy for training and a lot of energy for producing answers to queries”; this will only multiply in the years to come (Calvert 2024).

ChatGPT’s consistency and persistence over time are already enough to distinguish its utterances from the much wilder and all-over-the-place assertions made by human pathological liars and fabulists, like for instance Donald Trump. Even at best — and Trump is far from the best — actual human speech is not necessarily coherent, rational, or even grammatically correct. Such categories are drawn from an idealized image of the human mind, rather than from its real-time functioning. Indeed, the scrupulous grammatical correctness of ChatGPT, as well as the distinctions that it draws, and its careful insistence upon the limitations of its knowledge, may well be one of the most important signs indicating that the intelligence in question is artificial, rather than human (or ‘natural’).

I can make this clearer with an example. I would be quite surprised to find ChatGPT ever saying anything like the following, from a 2024 Trump campaign speech: “All I know about magnets is this, give me a glass of water, let me drop it on the magnets, that’s the end of the magnets” (Moran 2024). For a comparison, I asked ChatGPT about the effect of water upon magnets, and it responded:

When magnets get wet, they can corrode and lose their magnetic properties over time. Water can cause the material of the magnet to break down, leading to a weakening or complete loss of magnetism. It is important to keep magnets dry to maintain their effectiveness.

In other words, it is true that corrosion, or oxidation, can break down ferromagnetic metals: but this does not happen immediately. The effect only takes place “over time”, with repeated exposures, as ChatGPT is scrupulous to point out. And this danger is sufficiently well-known that it is regularly taken into account and defended against. Neodymium magnets “are the strongest type of permanent magnet available commercially”; they are commonly used in industry. But since such magnets are especially vulnerable to rust, “this vulnerability is addressed in many commercial products by adding a protective coating to prevent exposure to the atmosphere” or to water (Wikipedia 2024e). The context for Trump’s remark about neutralizing magnets was his recurring, years-long obsession with cost overruns on the US Navy’s newest aircraft carrier, USS Gerald R. Ford, which has “electromagnetically powered weapons elevators” (Woody 2022; Werner 2019). I am inclined to doubt that a saboteur could incapacitate an entire aircraft carrier with just a single glass of water; but what do I know, really?

One obvious difference between Trump’s enunciations and those of ChatGPT is that the chatbot speaks primly and properly, in response to a specific question, whereas Trump seems to be delivering one non sequitur after another in the middle of his campaign speeches. Of course, one could instruct the chatbot to more fully randomize its output, and to employ rhetorical exaggerations, but it is unclear whether this would be enough to match the level of Trump’s outbursts. In addition, although Trump’s digressions seem random, they often circle around the same underlying obsessions over and over again. This is not in the least surprising. Freud insists upon “a strict and universal application of determinism to mental life”, so that there is always a hidden logic behind even the seemingly most random or casual statements (Freud 1977/1909). Unfortunately, I lack both the skill and the opportunity to psychoanalyze Donald Trump, so I cannot trace the roots of his fixation upon the use of a glass of water to neutralize magnets. But this example points up, by contrast, the fact that, for its part, ChatGPT does not have anything like a Freudian unconscious to impel its utterances. ChatGPT is therefore the diametrical opposite of Donald Trump, whose statements and actions seem to emerge directly from the depths of his unconscious, without a filter, and without anything like Freudian repression getting in the way.

Freud argues that the bizarre distortions of unconscious expression (through mechanisms like condensation and displacement) are ways of sneaking material past the superego censor. I am inclined to wonder, however, whether these seeming distortions are not so much sneaky paraphrases as they are already the basic language (or the intrinsic poetic expression) of the unconscious itself. This way of looking at things accords with Nietzsche’s suspicion that language is intrinsically figurative, and that truth (or so-called literal meaning) is nothing more than

a mobile army of metaphors, metonymies, anthropomorphisms…. truths are illusions of which we have forgotten that they are illusions, metaphors which have become worn by frequent use and have lost all sensuous vigour, coins which, having lost their stamp, are now regarded as metal and no longer as coins (Nietzsche 1999/1872).

Though we may rightly be wary of relativizing the notion of truth, I think that it is entirely valid to point up the contrast between the flatness, banality, and literalness of ChatGPT’s discourse on the one hand, and the florid delirium, and irreducible figurative quality, of Trump’s discourse on the other. ChatGPT displays the quality of “truthiness” (Wikipedia 2024h) even when it is fabulating; whereas Trump, who is continually fabulating, does not.

In other words, the consistency of chatbots — or what one might also call their superficiality or their dogged literal-mindedness — does not, and cannot, prevent them from fabulating. Given enough time, they are bound to “hallucinate” at least a little bit, and in extreme cases as wildly and bizarrely as Trump does. Ziwei Xu and his colleagues convincingly argue that “hallucination is inevitable for LLMs” (Xu et al. 2024). They offer a mathematical proof of this assertion, invoking the same “diagonalization argument” that Georg Cantor originally used to prove that infinite sets were not all of the same size. For instance, we can say that odd numbers are of the same order of infinity as natural numbers overall, even though only half of the natural numbers are odd. This is because we can make a list of one-to-one correspondences, associating one odd number with every natural number, and this can be continued indefinitely. On the other hand, we cannot create the same sort of one-to-one correspondence between natural numbers or rational numbers on the one hand, and real numbers on the other; Cantor showed that, in this case, the procedure is intrinsically incomplete. There will always be additional real numbers that are not on any such list; therefore the real numbers constitute a higher type of infinity than do the whole numbers or the rational numbers (Wikipedia 2024f).
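
In outline (this is the standard textbook presentation, not Xu et al.’s notation), the two halves of Cantor’s argument look like this:

```latex
% The pairing that shows the odd numbers are "as many" as the naturals:
\[
  f \colon \mathbb{N} \to \{1, 3, 5, \dots\}, \qquad f(n) = 2n - 1 .
\]
% No such pairing exists for the reals. Suppose every real in (0,1) were
% listed, with decimal digits d_{nk}:
\[
  r_1 = 0.d_{11}d_{12}d_{13}\dots, \quad
  r_2 = 0.d_{21}d_{22}d_{23}\dots, \quad
  r_3 = 0.d_{31}d_{32}d_{33}\dots, \ \dots
\]
% Build a new number x = 0.e_1 e_2 e_3 ... by changing each diagonal digit:
\[
  e_n =
  \begin{cases}
    5 & \text{if } d_{nn} \neq 5, \\
    6 & \text{if } d_{nn} = 5 .
  \end{cases}
\]
% Then x differs from every r_n in its n-th digit, so x is on no such list:
% the reals cannot be enumerated, and form a strictly larger infinity.
```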

The diagonalization argument is central to modern mathematics, and consequently also to computer science. Decades after Cantor, Kurt Gödel used a more complex version of diagonalization to prove that no mathematical system can be complete, or can establish its own consistency. Any computational system will generate statements that are undecidable: it is impossible to find a counterexample that would falsify them, but it is also impossible to formally prove that they are true (Nagel and Newman, 2012/1958). In computer science, this impasse is closely related to Alan Turing’s halting problem: the impossibility of determining in advance whether a given computation is solvable in a finite number of steps, or whether the computer will run forever trying to solve it (Wikipedia 2024g).
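
The flavor of Turing’s result can be conveyed in a schematic Python sketch; the oracle halts below cannot actually be implemented, and that impossibility is the whole point:

```python
# Schematic only: suppose a universal halting oracle existed.

def halts(program, argument) -> bool:
    """Hypothetical oracle: True iff program(argument) eventually stops."""
    raise NotImplementedError("no general procedure of this kind can exist")

def contrary(program):
    """Do the opposite of whatever the oracle predicts about a program run on itself."""
    if halts(program, program):
        while True:       # loop forever if the oracle says "it halts"
            pass
    return "done"         # halt at once if the oracle says "it loops"

# Feeding contrary to itself is the diagonal move: if halts(contrary, contrary)
# returns True, then contrary(contrary) loops forever; if it returns False,
# then contrary(contrary) halts. Either way the oracle is wrong, so no such
# oracle, and no general halting decider, can exist.
```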

In his own turn, Roger Penrose has used Gödel’s arguments in order to make the claim that the human mind is not like a computer, and is in fact irreducible to computation or algorithmic processing (Penrose 1994). This is supposedly the case because we are able to entertain uncomputable propositions, and make decisions about them, in ways that Turing machines (or binary computers) cannot. Penrose’s argument is still highly controversial; other physicists, mathematicians, philosophers, and computer scientists have argued over it for decades (for an anti-Penrose argument, see, for instance, Dorrell 2007). But Penrose and his collaborator Stuart Hameroff have not been put off by these disputes. Instead, they speculate that the biological mechanism allowing decisions to be made without formal computation is generated by the activity of microtubules inside brain neurons (Hameroff 1998). The details of this speculation are less important than the overall claim that noncomputational determinations can and do occur. Penrose is a physicist, and he ultimately grounds his argument on the collapse of the wave function in quantum mechanics; this collapse, for him, “is a real, physical, objective phenomenon” (Morris 2023). In the famous case of Schrödinger’s cat, which is somehow both alive and dead before we examine it, our act of opening the box and observing the cat collapses the wave function, so that one or the other outcome is unequivocally and exclusively true.

Xu and his collaborators do not cite Penrose in their discussion; nonetheless, they bring his line of argument full circle. Xu et al. argue that the diagonalization proof, which Penrose uses to show that human minds cannot be confined to the intrinsic limits of computability, applies to LLMs as well. AIs, as well as biological brains, cannot be confined within the traditional limits of computation. The sets of data on which LLMs are trained may be enormous, but they are still finite. This means that LLMs will eventually encounter problems that are not “contained in an enumerable set of total computable functions,” such as the ones upon which they were trained. There will always be cases that are computable in principle, but that are “beyond LLMs’ computation capabilities”. Even worse, there will always also be “real-world problems whose answers are not computable at all”; one of these is the “halting problem” itself. Since LLMs choose among probabilities, rather than seeking results with absolute certainty, they are not in danger of running interminably. Instead, at some point when they cannot determine an answer with certainty, they will “hallucinate”, or give false answers. Where Penrose argues that biological brains can attain insights beyond the limits of computation, LLMs suffer from the negative counterpart of this: their errors or falsehoods derive from the fact that, unavoidably, they move beyond computation because they “have added premises implicitly during the generation process” (Xu et al. 2024). The point is that it is impossible for LLMs to remain within their initial logical parameters; they will always be adding extra implicit premises.

Xu and his collaborators regard the unreliability of LLMs as being double-edged. On the one hand, LLMs cannot and should not be relied upon when we need to make crucial decisions, because LLM statements

could potentially reinforce stereotypical opinions and prejudices towards under-represented groups and ideas. Furthermore, it can be dangerous if unexpected premises were added and resulted in unethical, disturbing, or destructive contents. (Xu et al. 2024).

At the same time, and on the other hand, they suggest that it is at least possible that

in art, literature, and design, the unintentional and unpredictable outputs from LLMs could inspire human creators. Such deviation from facts can lead to unique perspectives that a strictly factual and precise system might never generate. In this sense, the hallucinatory aspect of LLMs should be regarded positively as a source of inspiration, innovation, and creativity (Xu et al. 2024)

I think that this can bring us back to the point at which I started: my bemusement at ChatGPT’s claim that I was an experimental musical composer. In order to take stock of this claim, we need to remember that not all contemporary physicists agree with Penrose that the collapse of the quantum wave function — the resolution of the question as to whether Schrödinger’s cat is alive or dead — is objectively real. There are a number of different approaches, which are not compatible with one another, but all of which match the actually-existing data (which are quite sparse). One of the best-known approaches is the many worlds interpretation of quantum mechanics, originally proposed by Hugh Everett in 1957 (Wikipedia 2024i). According to this approach, the wave function never collapses. The different possibilities enumerated by that function — the cat is alive and the cat is dead — continue to exist in superposition. Decoherence — the interaction of a given physical system with another system that measures it — does not destroy the wave function. Instead, as the physicist Sean M. Carroll puts it,

decoherence causes the wave function to split, or branch, into multiple worlds. Any observer branches into multiple copies along with the rest of the universe. After branching, each copy of the original observer finds themselves in a world with some particular measurement outcome. To them, the wave function seems to have collapsed. We know better; the collapse is only apparent, due to decoherence splitting the wave function. (Carroll 2019)

In other words, when I check to see whether Schrödinger’s cat is alive or dead, I will only discover one of the two possible outcomes. But according to the many worlds interpretation of quantum mechanics, this is because the universe splits into two branches, one for each of the two possible outcomes. Say that I find myself in a universe where I discover that the cat is alive; Carroll claims that I also exist (or, more precisely, an otherwise exact counterpart of me exists) in another universe, where I discover that the cat is dead. This means that the number of universes that are generated as a result of quantum decoherence events is extremely large. Carroll estimates that, for each individual human being, “2^5000 new branches” of the universe are spun out “every second” (Carroll 2019).

Now, I am not a physicist, and I do not understand the mathematics behind quantum mechanics. So I am not qualified to decide whether Penrose or Carroll, or somebody else, is correct about decoherence and the collapse of the wave function. Nonetheless, I cannot help myself; I am unable to credit the many worlds account. It seems to me to be an evasion: it allows Sean M. Carroll to have things both ways. On the one hand, by denying the objectivity of wave function collapse, he is able to affirm the actuality of multiple outcomes; on the other hand, and at the same time, he continues to maintain his rigid faith in absolute physical determinism (see my discussion of Carroll in Shaviro 2024).

I am therefore inclined to suspend judgment about the many worlds account of quantum mechanics. As the science journalist John Horgan puts it, this account is unfalsifiable, and therefore not a valid scientific hypothesis in the first place: “science is ill-served when prominent scientists tout ideas that cannot be tested and hence are, it must be said, pseudoscientific” (Horgan 2024). Carroll argues that quantum mechanics itself is falsifiable, and that many worlds just follows necessarily from the acceptance of quantum mechanics as true (Carroll 2019). But this is disingenuous; though scientists agree that the mathematics of quantum mechanics works, many of them do not accept that the many worlds account follows logically or automatically from this success. Indeed, there is no broad consensus about what (if anything) is legitimately entailed by “The Unreasonable Effectiveness of Mathematics in the Natural Sciences” (as Eugene Wigner called it: Wigner 1960). Given this uncertainty, I am forced to agree with Horgan that “Multiverses Are Pseudoscientific Bullshit” (Horgan 2024).

I have already discussed how, as suggested by Hicks and his collaborators, bullshit is an inevitable feature of ChatGPT and other LLMs. Nonetheless, however scientifically dubious and full of bullshit many worlds accounts of quantum mechanics may be, they are unequivocally a boon to science fiction. Consider, for instance, Ted Chiang’s short story “Anxiety is the Dizziness of Freedom” (in Chiang 2019). Chiang imagines a machine that not only bifurcates the universe by provoking the collapse of a quantum wave function, but also allows people a limited ability to communicate with their “paraselves” across the divide. By fabulating the existence of this machine, Chiang is able to amp up from something like the question of whether a particular photon is polarized horizontally or vertically, to much broader questions of how different life choices might lead people on different paths. In the story, even worlds initially distinguished only by the polarization of a single photon tend to drift further apart as time passes, and as the single change at the quantum level leads to proliferating, macroscopic differences:

The difference is imperceptible at first, a discrepancy at the level of the thermal motion of molecules… [But] the effects of perturbations double in size every couple of hours… (Chiang 2019)

By using the many worlds account as the story’s initial assumption, Chiang is able to address questions about the consistency of human character in various differing contexts, and about the consequences of making different choices in identical circ*mstances. The story is filled with people who worry that they have made the wrong choices, and anxiously look for other worlds where they chose differently.

Along these lines, it is tempting to regard ChatGPT as a sort of oracle, divining not what will actually happen, so much as what could perhaps happen, as well as what could have happened under slightly altered circumstances. It deals with potentialities, only a few of which are realized in any given universe. My own life story might have been entirely different if I hadn’t given up on piano lessons at the age of ten; or if, at the age of twenty-five, I hadn’t decided to return to graduate school after taking a year off. All this suggests that the reason ChatGPT and other LLM-based generative AI systems inevitably “hallucinate” or fabulate is because they do not deal with actuality and necessity, but rather with potentiality. The multiple potentialities in any given situation are themselves real, even if most of them are ultimately suppressed rather than actualized (as I argue at much greater length in Shaviro 2024). In divining these potentialities, ChatGPT offers us no guarantee of accuracy or truth; but (like most literary fictions) it does offer us a certain degree of plausibility and verisimilitude.

This means, however, that all-too-common biases, expressed in the texts that form an LLM’s training data, will sneak into its statements. Here’s a hilarious example. When ChatGPT was asked, over thousands of trials, to choose a random integer between 1 and 100, it replied with the number “42” much more frequently than would be the case if the distribution had truly been random. The researchers who got this result suggest that it happened because “42” is described in Douglas Adams’ novel The Hitchhiker’s Guide to the Galaxy as the answer to the “Great Question” of “Life, the Universe and Everything”. Even if this text was not explicitly included in ChatGPT’s initial training, there are numerous references to it on the Internet (Specht 2023). On the other hand, ChatGPT never replied with the number “69”: presumably because of the sexual connotations of this number, since ChatGPT is explicitly trained to avoid sexual content (Vijayabhaskar J 2024).
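
The shape of such an experiment is easy to sketch; the ask_model function below is a hypothetical placeholder for however one actually queries the chatbot, not a real API:

```python
from collections import Counter

# Sketch of the kind of experiment Specht describes. ask_model() stands in
# for an actual chatbot query; it is hypothetical, not a real interface.

def ask_model(prompt: str) -> str:
    raise NotImplementedError("stand-in for an actual chatbot query")

def tally_random_numbers(trials: int = 1000) -> Counter:
    """Ask for a 'random' integer many times and count how often each answer appears."""
    counts = Counter()
    prompt = "Pick a random integer between 1 and 100. Reply with the number only."
    for _ in range(trials):
        counts[ask_model(prompt).strip()] += 1
    return counts

# With a genuinely uniform choice, each number would appear roughly trials/100
# times. The reported result is that "42" turns up far more often than that,
# while "69" never appears at all.
```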

Since ChatGPT and other LLMs are programmed to choose words probabilistically, according to the ways that already-existing texts deploy their words, they won’t give answers that go against our expectations or biases. Instead, they parasitically echo and reinforce our already-active prejudices. For instance, they often write racist comments, since these are plentiful in their training data (Wolf 2023). This was already a problem in the mid-2010s, years before anything as sophisticated as ChatGPT was released (Buranyi 2017). But even if instructions compel ChatGPT to avoid explicitly racist and sexist comments, the problem of bias remains. There are way too many implicit biases that are not forbidden to the chatbot, simply because nobody thought of them in advance, and issued explicit instructions against them.
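
The mechanism of this echoing can be illustrated with a toy bigram model, vastly simpler than any LLM but built on the same probabilistic principle: it can only ever recombine sequences that its training text already contains:

```python
import random
from collections import defaultdict

# Toy illustration: a bigram model stitches words together purely according
# to how often they have followed one another in its training text, with no
# reference to meaning. It can only echo what it has already been fed.

def train(corpus: str) -> dict[str, list[str]]:
    """Record, for each word, every word that has followed it in the corpus."""
    words = corpus.lower().split()
    followers = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        followers[current].append(nxt)
    return followers

def generate(followers: dict[str, list[str]], start: str, length: int = 10) -> str:
    """Produce text by repeatedly sampling a plausible next word."""
    word, output = start, [start]
    for _ in range(length):
        options = followers.get(word)
        if not options:
            break
        word = random.choice(options)   # duplicates in the list weight the choice
        output.append(word)
    return " ".join(output)

corpus = "where are you robin crusoe where are you where have you been"
print(generate(train(corpus), start="where"))
```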

Polina Mackay and James Mackay ran into problems of this sort when they tried to use ChatGPT to generate William Burroughs-style cut-ups. For Burroughs and his associate Brion Gysin, the basic problem faced by anybody, and especially by the writer, is that “what you call ‘reality’ is a complex network of necessity formulae . . . association lines of word and image presenting a prerecorded word and image track.” My own interiority is coded and “prerecorded” in this way, so that even my seemingly free and spontaneous expressions are already programmed. The purpose of cut-ups, produced by randomly juxtaposing portions of texts, is to break down these “association lines”, thereby disrupting the control mechanisms that determine how we perceive and understand the world. The cut-up procedure is mechanized, precisely in order to get around the presuppositions of my own consciousness (Burroughs and Gysin 1978). The Mackays found it difficult to get ChatGPT to generate passages that truly violated conventional “association lines”. One problem was the need to overcome “the guard rails written into the model,” which are the chatbot’s equivalent of the presuppositions that are written into, and that control, human consciousness. They ultimately succeeded to a limited extent in bypassing these controls. But they note that “even after we had created lengthy prompts explaining the value of broken grammar and unexpected combinations of words, [ChatGPT] was unable to prevent itself from smoothing the results” (Mackay and Mackay 2024). As Burroughs himself noted about the limitations of the procedure:

The only thing not prerecorded in a prerecorded universe are the prerecordings themselves. The copies can only repeat themselves word for word. A virus is a copy. You can pretty it up, cut it up, scramble it—it will reassemble in the same form. (Burroughs 1981)
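
The cut-up procedure itself is easy to mechanize. Here is a minimal toy version, offered as an illustration only, not as the Mackays’ actual method:

```python
import random

# A minimal mechanized cut-up, in the spirit of Burroughs and Gysin: two texts
# are sliced into short fragments, shuffled, and rejoined, so that the
# juxtapositions are made by chance rather than by the writer's own
# "association lines".

def cut_up(text_a: str, text_b: str, fragment_len: int = 4, seed: int | None = None) -> str:
    rng = random.Random(seed)
    words = text_a.split() + text_b.split()
    fragments = [words[i:i + fragment_len] for i in range(0, len(words), fragment_len)]
    rng.shuffle(fragments)
    return " ".join(word for fragment in fragments for word in fragment)

first = "what you call reality is a complex network of necessity formulae"
second = "a mobile army of metaphors metonymies and anthropomorphisms"
print(cut_up(first, second, seed=1))
```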

The real difficulty is that the biases built into ChatGPT and its ilk are not a bug, but a feature. By design, LLMs can only tell us what we already want to hear, repeating our own words and ideas back to us. It is nearly impossible for LLMs to innovate or create, because this would require them to move in improbable directions, with no guarantee of success. Morse Peckham argues that human creativity, much like biological evolution, requires two things: a large degree of “random response” to environmental cues, which provides “the raw material for innovation”, and a “behavioral pattern of emergent innovation” that selects and institutes unprecedented but nonetheless stable new configurations of speech and behavior out of this raw material (Peckham 1979). Raw material is just gibberish without some focused principle of selection, but most forms of selection are too rigid to generate meaningful novelty.

The trouble with “generative pre-trained transformers” such as ChatGPT is that they are capable of neither of the functions described by Peckham. Although a certain degree of randomness is programmed in to make them less boring and predictable, all in all their responses to input are not sufficiently random, and the methods of selection that determine their output are far too conservative. If programmers allowed for more initial randomness, then this would overwhelm the criteria for selection. If the criteria were loosened, however, then this would result in an eruption of gibberish.
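
In practice, this trade-off is usually tuned with a single “temperature” parameter: candidate next words are scored, the scores are converted into probabilities, and the temperature flattens or sharpens that distribution. A schematic sketch, with invented words and scores:

```python
import numpy as np

# Temperature sampling, schematically. A low temperature makes the highest-
# scoring word overwhelmingly likely (conservative, repetitive); a high
# temperature flattens the distribution toward gibberish. The words and
# scores below are invented purely for illustration.

def sample_next(words, scores, temperature, rng):
    probs = np.exp(np.array(scores) / temperature)
    probs /= probs.sum()
    return rng.choice(words, p=probs)

words = ["music", "silence", "noise", "parrot", "magnet"]
scores = [4.0, 2.0, 1.0, 0.5, 0.1]   # invented model scores (logits)
rng = np.random.default_rng(0)

for t in (0.2, 1.0, 5.0):
    sample = [str(sample_next(words, scores, t, rng)) for _ in range(8)]
    print(f"temperature {t}: {' '.join(sample)}")
```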

In other words, even though LLMs are new and unprecedented, the scenarios they play out are not. I am reminded of Robinson Crusoe, marooned upon his lonely island, who is startled one day to hear his old parrot repeat his own words back to him: “Where are you, Robin Crusoe? Where are you? Where have you been?” (Defoe 2001/1719). The parrot calls out to Robinson, because it is really his own inner voice, exteriorized and embodied by a trained animal. Not everyone has an inner voice — I have already mentioned the condition of anauralia — but it is one of the most prominent ways in which most of us manage and regulate ourselves (Steber 2024). That is to say, the inner voice is one especially important instance of what Michel Foucault calls technologies of the self: “the technologies of individual domination, the history of how an individual acts upon himself” (Foucault 1988). As a new computational medium, ChatGPT may be thought of, in Marshall McLuhan’s vocabulary, as one of many “extensions of man [sic]” (McLuhan 1964). But it is important to note that it is a particular sort of extension: the moment I project my inner voice outside of myself, it turns back upon me and interpellates me, calling me back to and reinforcing my own prior assumptions.

I mention Robinson Crusoe here because parrots have often been cited in discussions of ChatGPT and other such systems. In one of the best accounts to date of the limitations of LLMs, Emily Bender and her collaborators characterize ChatGPT as a “stochastic parrot” (Bender et al. 2021). This is because the chatbot has no interiority; it is a machine for “haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine, but without any reference to meaning.” This avoidance of meaning is, once again, not a bug but a feature. Sam Altman, the CEO of OpenAI, the company that developed ChatGPT, responded to Bender et al. on the social media platform X (formerly known as Twitter) by writing: “i am a stochastic parrot, and so r u” (Altman 2022). That is to say, Altman does not dispute Bender et al.’s characterization of ChatGPT; rather, he suggests that human beings themselves only speak and write in the same haphazard way that parrots and LLMs do.

Altman’s snark reflects and repeats the claims of a long succession of antihumanist thinkers, from Hobbes through Hume and on to Nietzsche, who all mock and reject the idea that human beings are endowed with some sort of noble, complex, and deeply meaningful interiority. As Hume sarcastically writes:

For my part, when I enter most intimately into what I call myself, I always stumble on some particular perception or other… I never can catch myself at any time without a perception, and never can observe any thing but the perception…. If any one upon serious and unprejudic’d reflection, thinks he has a different notion of himself, I must confess I can reason no longer with him. He may perhaps, perceive something simple and continu’d, which he calls himself; tho’ I am certain there is no such principle in me. (Hume 1985/1740)

In the mid- to late twentieth century, antihumanism was largely associated with the political Left, and with the radical philosophies of such figures as Jacques Derrida, Michel Foucault, and Gilles Deleuze. These thinkers quite knowingly saw themselves as undermining the presumptions of mainstream bourgeois culture in the West. But of course, Hobbes, Hume, and Nietzsche themselves were by no means left-wing. Hume was a cautious conservative. Nietzsche violently detested egalitarianism. And both Hobbes and Nietzsche opposed democracy and aggressively championed authoritarianism. Given this heritage, we should perhaps not be surprised that today, antihumanist philosophy is a staple of far-right thinkers who are popular in the computer industry. The most notable of these, perhaps, is Curtis Yarvin aka Mencius Moldbug, who advocates transforming the United States into a dictatorship run by a CEO (Pogue 2022). We might well say that information technology has accomplished what radical philosophy could not: the destruction of liberal humanism. The erasure of “man” [sic], once prophesied by Michel Foucault (Foucault 1994/1970), is now being marketed to us by Silicon Valley.

A tattered liberal humanism is clearly ineffective as a response to either the threats or the promises of artificial intelligence. But a deeper understanding of “how life works” on a multitude of recursive levels (Ball 2023) might give us a better perspective. Arikia Millikan, a parrot trainer as well as a technology consultant, argues that the characterization of LLMs as stochastic parrots is unfair to parrots themselves (Millikan 2023). Parrot behavior is far more nuanced than we might initially suppose. Millikan points out that parrots, like human beings — and presumably unlike LLMs — use language emotionally, impulsively, and instinctually. Their use of language, much like ours, does not unfold in a vacuum, but is “shaped by social implications”. Moreover, parrots, like human beings, have a degree of reflective consciousness; they “make choices”, rather than merely responding randomly or mechanistically to other entities and to cues from their environment:

If you hold out your hand and tell a parrot to “step up,” it very well may. But it may not, or it may take a chunk of flesh out of your hand and ruin your day. It may do something different. These behaviors aren’t programmed; they are decided through the process of using a brain to think, something many humans are dangerously close to forgetting how to do. (Millikan 2023).

In this regard, it is also worth mentioning the parallels between language use and making music. Gary Tomlinson traces the complicated evolutionary history through which the human ability to make music emerged; musical ability is not reducible to linguistic ability, but both language-making and “musicking” are complex processes that evolved in tandem “along parallel but independent tracks”, on the way to becoming “universal and characteristic trait[s] of our species” (Tomlinson 2015). If we broaden our outlook to consider phenomena like the speaking ability of parrots, or birdsong in multiple species, then we can consider Tomlinson’s argument that meaning-making, or semiotic behavior, is characteristic of both birds and mammals, but does not exist to the same extent in most other biological organisms (Tomlinson 2023). And this may explain why ChatGPT falsely attributed musical ability, alongside linguistic ability, to me.

ChatGPT is evidently not alive in the sense that human beings, parrots, trees, slime molds, and bacteria are. I don’t want to make the dogmatic assertion that languaging and musicking must, necessarily and exclusively, be life-based. But I think that the linguistic fluidity of ChatGPT needs to be considered, not on its own, but in the context of the affordances and restrictions of the biological environment, as it were, within which it operates. This environment, for the moment, is quite limited. ChatGPT is still not able to respond, and to act, like the supercomputer in Fredric Brown’s 1954 science fiction short story “Answer”:

Dwar Ev stepped back and drew a deep breath. “The honor of asking the first question is yours, Dwar Reyn.” “Thank you,” said Dwar Reyn. “It shall be a question that no single cybernetics machine has been able to answer.” He turned to face the machine. “Is there a God?” The mighty voice answered without hesitation, without the clicking of a single relay. “Yes, now there is a God.” Sudden fear flashed on the face of Dwar Ev. He leaped to grab the switch. A bolt of lightning from the cloudless sky struck him down and fused the switch shut. (Brown 2013)

WORKS CITED

Altman, Sam (2022). Tweet at 1:32 pm on Dec 4, 2022. https://x.com/sama/status/1599471830255177728.

Ball, Philip (2023). How Life Works: A User’s Guide to the New Biology. University of Chicago Press.

Bender, Emily, Angelina McMillan-Major, Timnit Gebru, and Shmargaret Shmitchell (2021). “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”. DOI: 10.1145/3442188.3445922.

Bloom, Harold (1973). The Anxiety of Influence: A Theory of Poetry. Oxford University Press.

Brown, Fredric (2013). The Fredric Brown MEGAPACK ®: 33 Classic Science Fiction Stories. Wildside Press.

Buranyi, Stephen (2017). “Rise of the racist robots – how AI is learning all our worst impulses”. The Guardian, August 8, 2017. https://www.theguardian.com/inequality/2017/aug/08/rise-of-the-racist-robots-how-ai-is-learning-all-our-worst-impulses.

Burroughs, William S. (1981). Cities of the Red Night. Holt, Rinehart and Winston.

Burroughs, William S., and Brion Gysin (1978). The Third Mind. Viking.

Calvert, Brian (2024). “AI already uses as much energy as a small country. It’s only the beginning”. Vox, March 26, 2024. https://www.vox.com/climate/2024/3/28/24111721/ai-uses-a-lot-of-energy-experts-expect-it-to-double-in-just-a-few-years.

Carroll, Sean M. (2019). Something Deeply Hidden: Quantum Worlds and the Emergence of Spacetime. Dutton.

Chiang, Ted (2019). “Anxiety is the Dizziness of Freedom”, in Exhalation: Stories. Knopf.

Daras, Giannis, Alexandros G. Dimakis (2022). “Discovering the Hidden Vocabulary of DALLE-2”. https://arxiv.org/abs/2206.00169. DOI:10.48550/arXiv.2206.00169.

Defoe, Daniel (2001/1719). Robinson Crusoe. Ed. John Richetti. Penguin Books.

Dorrell, Philip (2007). “Why Roger Penrose is Wrong”. Thinking Hard blog. https://www.thinkinghard.com/consciousness/godel.

Edwards, Benj (2023). “Why ChatGPT and Bing Chat are so good at making things up”. Ars Technica, April 6, 2023. https://arstechnica.com/information-technology/2023/04/why-ai-chatbots-are-the-ultimate-bs-machines-and-how-people-hope-to-fix-them/.

Foucault, Michel (1978). The History of Sexuality, Volume 1. Trans. Robert Hurley. Random House.

Foucault, Michel (1988). “Technologies of the Self: A Seminar with Michel Foucault”. Ed. Luther H. Martin, Huck Gutman, Patrick H. Hutton. University of Massachusetts Press.

Foucault, Michel (1994/1970). The Order of Things: An Archaeology of the Human Sciences. Vintage.

Frankfurt, Harry G. (2005). On Bullshit. Princeton University Press.

Freud, Sigmund (1977/1909). Five Lectures on Psychoanalysis. Trans. James Strachey. Norton.

Guinness, Harry (2024). “The best AI image generators in 2024”. Zapier blog. February 22, 2024. https://zapier.com/blog/best-ai-image-generator/.

Gupta, Arushi (2023). “What is ChatGPT and How was it Trained?”. Paperpal, April 27, 2023. https://paperpal.com/blog/news-updates/what-is-chatgpt-and-how-was-it-trained.

Halilovic, Ajdina (2024). “People Who Can’t Picture Sound in Their Minds”. Nautilus, February 20, 2024. https://nautil.us/people-who-cant-picture-sound-in-their-minds-517529/.

Hameroff, Stuart (1998). “Quantum computation in brain microtubules? The Penrose–Hameroff ‘Orch OR’ model of consciousness”. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 356(1743), 1869–1896. DOI: 10.1098/rsta.1998.0254.

Hatem, Rami, Brianna Simmons, Joseph E. Thornton (2023). “A Call to Address AI ‘Hallucinations’ and How Healthcare Professionals Can Mitigate Their Risks”. Cureus 15(9): e44720. DOI:10.7759/cureus.44720.

Hicks, Michael Townsen, James Humphries, Joe Slater (2024). “ChatGPT is Bullshit”. Ethics and Information Technology 26:38. DOI: 10.1007/s10676-024-09775-5.

Horgan, John (2024). “Multiverses are pseudoscientific bullshit”. February 21, 2024. https://johnhorgan.org/cross-check/multiverses-are-pseudoscientific-bullshit.

Huang, Lei, Weijiang Yu, Weitao Ma, Weihong Zhong, Zhangyin Feng, Haotian Wang, Qianglong Chen, Weihua Peng, Xiaocheng Feng, Bing Qin, Ting Liu (2023). “A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions”. https://arxiv.org/abs/2311.05232. DOI: 10.48550/arXiv.2311.05232.

Hume, David (1985/1740). A Treatise of Human Nature. Ed. Ernest C. Mossner. Penguin.

Lambert, Erika (2020). “What is a content creator and how to become one”. Adobe Express, December 10, 2020. https://www.adobe.com/express/learn/blog/content-creator.

Levinas, Emmanuel (1981). Otherwise Than Being; or, Beyond Essence. Trans. Alphonso Lingis. Duquesne University Press.

Mackay, Polina, and James Mackay (2024). “Experiments in Generating Cut-up texts with Commercial AI”. Electronic Book Review, June 9, 2024. https://electronicbookreview.com/essay/experiments-in-generating-cut-up-texts-with-commercial-ai/.

Macpherson, Fiona (2013). “The Philosophy and Psychology of Hallucination: An Introduction”. In Hallucination: Philosophy and Psychology (2013), ed. Fiona Macpherson and Dimitris Platchias. The MIT Press. Pages 11-76.

McIntyre, Hugh (2024). “Paul McCartney Reveals The Beatles’ ‘Yesterday’ Came To Him In A Dream”. Forbes, February 22, 2024. https://www.forbes.com/sites/hughmcintyre/2024/02/22/paul-mccartney-reveals-the-beatles-yesterday-came-to-him-in-a-dream/.

McLuhan, Marshall (1964). Understanding Media: The Extensions of Man. McGraw-Hill.

Millikan, Arikia (2023). “Parrots are not stochastic and neither are you”. The Content Technologist, April 6, 2023. https://www.content-technologist.com/stochastic-parrots/.

Moran, Lee (2024). “Donald Trump’s Latest ‘Relentlessly Stupid’ Ramble Attracts One Hell Of A Fact Check”. Huffington Post, January 8, 2024. https://www.huffpost.com/entry/donald-trump-magnets_n_659ba3dbe4b0bfe5ff63def2.

Morris, Andréa (2023). “Testing A Time-Jumping, Multiverse-Killing, Consciousness-Spawning Theory Of Reality”. Forbes, October 23, 2023. https://www.forbes.com/sites/andreamorris/2023/10/23/testing-a-time-jumping-multiverse-killing-consciousness-spawning-theory-of-reality/.

MusicTech (2022). “Best music-making apps in 2022: The best mobile synth apps”. March 28, 2022. https://musictech.com/guides/buyers-guide/best-mobile-synth-apps/.

Nagel, Ernest, and James R. Newman (2012/1958). Gödel’s Proof. 3rd edition. Routledge.

Nietzsche, Friedrich (1999/1872). The Birth of Tragedy and Other Writings. Ed. Raymond Geuss and Ronald Speirs. Trans. Ronald Speirs. Cambridge University Press.

Nietzsche, Friedrich (2005/1888). The Anti-Christ, Ecce Homo, Twilight of the Idols, and Other Writings. Ed. Aaron Ridley and Judith Norman. Trans. Judith Norman. Cambridge University Press.

O’Connor, Ryan (2022). “Introduction to Diffusion Models for Machine Learning”. Assembly AI, May 12, 2022. https://www.assemblyai.com/blog/diffusion-models-for-machine-learning-introduction/.

Peckham, Morse (1979). Explanation and Power: The Control of Human Behavior. University of Minnesota Press.

Penrose, Roger (1994). Shadows of the Mind: A Search for the Missing Science of Consciousness. Oxford University Press.

Pogue, James (2022). “Inside the New Right, Where Peter Thiel is Placing His Biggest Bets”. Vanity Fair, April 20, 2022. https://www.vanityfair.com/news/2022/04/inside-the-new-right-where-peter-thiel-is-placing-his-biggest-bets.

Rorty, Richard (1979). Philosophy and the Mirror of Nature. Princeton University Press.

Schumpeter, Joseph (1942). Capitalism, Socialism, and Democracy. Harper.

Shaviro, Steven (2024). Fluid Futures: Science Fiction and Potentiality. Repeater Books.

Slater, Joe, James Humphries, Michael Townsen Hicks (2024). “ChatGPT Isn’t ‘Hallucinating’—It’s Bullshitting!”. Scientific American, July 17, 2024. https://www.scientificamerican.com/article/chatgpt-isnt-hallucinating-its-bullshitting/.

Specht, Juan Ignacio (2023). “42: GPT’s answer to Life, the Universe, and Everything”. LenioLabs, October 6, 2023. https://medium.com/leniolabs/42-gpts-answer-to-life-the-universe-and-everything-829874fbffa8.

Stansack, Jeanne, and Isabela Padilha Vilela (2024). “AP Macroeconomics Study Guide, Unit 1, Basic Economic Concepts, Topic 1.4: Demand”. https://library.fiveable.me/ap-macro/unit-1/demand/study-guide/835JaaqStIpec5YKysap.

Steber, Carolyn (2024). “Here’s What It’s Like To Not Have An Internal Monologue”. Bustle, February 20, 2024. https://www.bustle.com/wellness/does-everyone-have-an-internal-monologue.

Tomlinson, Gary (2015). A Million Years of Music: The Emergence of Human Modernity. Zone Books.

Tomlinson, Gary (2023). The Machines of Evolution and the Scope of Meaning. Zone Books.

Vijayabhaskar J (2024). Tweet at 8:50 am on April 20, 2024. https://x.com/vijayabhaskarj/status/1784928180375114134.

Werner, Ben (2019). “Experts: Navy Would Spend Billions to Answer Trump’s Call to Return Carriers to Steam Catapults”. USNI News, May 28, 2019. https://news.usni.org/2019/05/28/experts-navy-would-spend-billions-to-answer-trumps-call-to-return-carriers-to-steam-catapults.

Wigner, Eugene (1960). “The Unreasonable Effectiveness of Mathematics in the Natural Sciences”. Communications on Pure and Applied Mathematics. 13 (1): 1–14.

Wikipedia (2024d). “Diffusion process”. https://en.wikipedia.org/wiki/Diffusion_process.

Wikipedia (2024e). “Neodymium magnet”. https://en.wikipedia.org/wiki/Neodymium_magnet.

Wikipedia (2024f). “Cantor’s diagonal argument”. https://en.wikipedia.org/wiki/Cantor%27s_diagonal_argument.

Wikipedia (2024g). “Turing’s Proof”. https://en.wikipedia.org/wiki/Turing%27s_proof.

Wikipedia (2024h). “Truthiness”. https://en.wikipedia.org/wiki/Truthiness.

Wikipedia (2024i). “Many-worlds interpretation”. https://en.wikipedia.org/wiki/Many-worlds_interpretation.

Wolf, Zachary B. (2023). “AI can be racist, sexist and creepy. What should we do about it?”. CNN: What Matters, March 18, 2023. https://www.cnn.com/2023/03/18/politics/ai-chatgpt-racist-what-matters/index.html.

Woody, Christopher (2022). “After another Trump rant and a ‘new’ milestone, the US’s newest aircraft carrier is a week closer to seeing action”. Business Insider, April 11, 2022. https://www.businessinsider.com/aircraft-carrier-ford-closer-to-deployment-after-trump-rant-ioc-2022-4.

Xu, Ziwei, Sanjay Jain, Mohan Kankanhalli (2024). “Hallucination is Inevitable: An Innate Limitation of Large Language Models”. https://arxiv.org/abs/2401.11817. DOI: 10.48550/arXiv.2401.11817.
