Writing, thinking, and talking about “artificial intelligence” makes me want to kill myself, but if I did, I think I’d be giving it what it wants.
Before 2022, I didn’t have to worry about AI, and I didn’t want to. Two years into the COVID-19 pandemic, as a writer and a high school English teacher, I was already engaged in an invisible war against so many other forces I wasn’t prepared to fight. Social media (especially TikTok), combined with the isolating effects of the pandemic and the weird, reactionary politics that isolation birthed, seemed to be morphing the brains of my students and of many of the people around me in dangerous and disorienting ways.
Most conversations with my students and other young people were significantly shorter and less intimate after 2020, and getting them to pay attention to anything longer than two to four minutes became nearly impossible as time went on. They were noticeably less social, less curious about the world around them, and less willing than ever to risk their time on “hard work” that might not produce the results they so desperately desired. Nothing appeared to be “worth it” to them — “it” being some nebulous nothingness in and of itself, some expanse of time they couldn’t even define, some moments of their day they believed belonged to anything but engaging with their humanity. Right as the mental and emotional fatigue was setting in, oligarch-in-training Sam Altman and his team at OpenAI released ChatGPT from its shackles and let it stomp on what was left of the ground we were able to maintain.
From there came an onslaught of rave reviews about the power of AI and its ability to “simplify” our lives, and my students were far from the only people hooked on what AI “provides” them. People all around me began making excuses for integrating ChatGPT and other AI clients into their lives. Teachers were using AI to help build out lesson plans and grade papers; administrators at school insisted we teach students to use AI “responsibly”; writers were experimenting with the software to “see what it does”; acquaintances were using ChatGPT to get new recipes, edit their Instagram captions, and write emails for them; and AI-generated training applications were making their way through the weightlifting community.
By their accounts, AI is a tool that helps them save time, brain power, and energy. But nobody is using AI to improve the decaying material conditions of our lives. Without even mentioning the environmental catastrophe the continued development and use of AI is causing, it becomes clearer every day that AI is taking more than it is giving. Because of its current limitations, we can only actually use AI to do one thing: provide a temporary respite from having to try and to fail. We can only use it to be less human. And ultimately, that is the point.
While the horrors of what AI is doing to our brains, our planet, and to the bank accounts of already astronomically rich sociopaths are reason enough to despise it, you might say I was primed to absolutely hate this shit from the very beginning. I grew up and came of age in the late 1990s and early 2000s, which means my first introduction to AI came by way of watching films like Disney’s Smart House, The Terminator franchise, RoboCop, Tron, Blade Runner, The Matrix, I, Robot, and a long list of others. These films either position “artificial intelligence” as something a specially skilled group of human beings can and should destroy, or they attempt some level of ambiguity that turns in on itself, reminding us of the dangers of AI at some point during their run times. The robots or supercomputers aren’t ambivalent toward their human overlords; the AI “beings” (for lack of a better term) are tired of being relegated to lines of code and inhuman, immobile bodies, and they’re jealous of our ability to feel what they cannot. Their discontent often gives rise to an inflated sense of entitlement and arrogance, and, as a result, they grow more and more resentful, vengeful, and violent.
In the most benevolent (or downright despair-inducing) cinematic circumstances, as in Bicentennial Man or A.I.: Artificial Intelligence, AI is still not portrayed as some wonderfully virtuous achievement of humankind. These cases present a more nuanced, if occasionally saccharine, vision of a world where AI is used to serve us and provide a level of wish fulfillment nothing else can. But even in these snapshots of this alternate version of our future — where the Earth is still green and people have what they need to survive — the robots want to be us so badly. The plots of these films revolve around them finding a way to become human. For these all-knowing beings who have access to every piece of information in the universe, humanity still holds the most precious and valuable gift: the ability to feel.
Sometimes, in even the most horrifying depictions of AI, there is an outlier, a robot who “loves” or “protects” beyond what their software should’ve allowed. But they are never the hero. Why would they be? A piece of machinery inculcated with billions of lines of code, the entire history of the universe, and all of the moral accomplishments and failings of the human species cannot save us. The same refrain builds and builds throughout these pieces of media. Only we can save ourselves, even if it means using this disastrous technology against itself.
Because Ash (Ian Holm), the Nostromo’s secretly synthetic science officer in Ridley Scott’s Alien, is operating on AI programming, it’s easy for people to conflate his intentions with the intentions of the humans who programmed him, but Scott doesn’t let us off so easily. In the final moments of Ash’s “life,” the remaining crew members of the Nostromo question him about the Xenomorph and his orders from the Weyland-Yutani Corporation, which gives us a glimpse into what and how he thinks. Ash says, “You still don’t realize what you’re dealing with. The Alien is a perfect organism. Superbly structured, cunning, quintessentially violent. With your limited capabilities, you have no chance against it.” The Nostromo’s navigator, Lambert (Veronica Cartwright), accuses Ash of “admiring” the Xenomorph, and he says, “How can one not admire perfection?” Ash then assures the crew that he’s programmed to protect human life and admits he must do that even if he has “contempt” for it. It’s an unsettling exchange that serves as the final death knell for most of the remaining crew, and Ash’s admiration of the Alien reminds us of AI’s animosity (and envy) toward what makes us human.
In Prometheus, the subsequent prequel to Alien, and in Prometheus’s sequel, Alien: Covenant, we’re presented with a new synthetic seemingly created to make everyone’s lives a living hell. David8, played with creepy, antipathetic verve by Michael Fassbender, quickly becomes the focal point of the two films. Over the course of their combined four-plus-hour run time, we learn that David isn’t just any synthetic: he was commissioned, produced, and programmed by Peter Weyland himself and, unlike other androids of his ilk, he can actually process and express “emotions” and exercise a limited “imagination” in ways that other David models — and other androids in general — cannot.
The David of Prometheus doesn’t show all of his cards in the beginning. At first, he appears to share the crew’s curiosity about what they’ll find during their mission. He seems interested to learn about the beings who preceded humankind and supposedly created us. He helps the crew in ways a human couldn’t. Still, as with the rest of the androids in the Alien franchise, the majority of the humans around David view him as nothing more than a tool they need to get where they’re going.
The mission of Prometheus is as complex as it is simple. At the insistence — and with the funding — of Weyland, the crew of the ship is following a star map discovered among the artifacts of early human civilizations, one that points toward the “Engineers,” the beings thought to be the origin of human life. Weyland, along with some of the other members of the crew, is grappling with the larger existential questions that plague so many people. They want to know why we’re here on Earth, how we got here, what will become of us once we’re gone, and how we can achieve immortality. For his part, David doesn’t care much about these crises, and he cares even less when he discovers a trove of biochemical weapons strewn amongst the destruction of the Engineers’ former compound. David seemingly finds his purpose in this cache of foreign creatures and thick, black, poisonous goo, and he starts using his human counterparts as subjects in an elaborate and savage science experiment.
Toward the end of the film, Elizabeth Shaw (Noomi Rapace), one of the last remaining crew members aside from David, attempts to make sense of everything that has happened and of what the Engineers’ intentions could have been in designing the biochemical weapons that killed her friends and colleagues. David tells her, “Sometimes to create, one must destroy” — a profound sentiment, were it not a justification for murder in this case. He adds that creations often seek to destroy their creators, signaling to us (and to Shaw, presumably) both his contempt for humanity and his arrogance about his own capabilities and his supposedly “elevated” status as an artificial product with access to almost everything humans could want. These creatures, David believes, are “better” than humans in the same way he is, so it doesn’t matter if people die as he attempts to play Victor Frankenstein with them.
Alien: Covenant takes place 10 years after Prometheus and opens as the colony ship Covenant is en route to the planet Origae-6, carrying over 2,000 people and 1,100 human embryos in cryostasis. The ship is damaged mid-flight. Suspended in space, it sends a few crew members to the nearest habitable planet to see if it’s safe to land. When they arrive to examine the planet and their surroundings, two of them are immediately infected by airborne spores that spawn Alien Neomorphs. The creatures burst out of their bodies quickly, sending the rest of the crew into fight-or-flight mode for a few moments before David shows up to save the remaining juvenile Neomorph from being killed. David explains to the crew how he got there, claiming that he and Shaw crash-landed on the planet after losing control of the Engineers’ ship they had commandeered at the previous site, that Shaw died in the crash, and that the crash accidentally released a deadly biochemical weapon that killed or mutated every living being on the planet.
From there, Alien: Covenant slowly reveals how monstrous David is through a sequence of violent revelations made by Covenant’s crew: David didn’t accidentally kill the living beings on this planet; he used the biochemical weapons created by the Engineers for genocide. Shaw didn’t die in the crash landing; David killed her after she protested everything he was doing and used her body, along with the bodies of the beings he killed, to conduct experiments on the biological and alien materials he brought with him from their previous mission. The Neomorphs are just one iteration in a series of heinous creatures genetically engineered by David, and Covenant’s SOS might provide him with the perfect opportunity to test his designs against the humans he loves to hate.
With each new revelation comes a new admission from David. He’s not just curious about what the creatures are capable of and how he can genetically alter them to make them more vicious, more hostile, and nearly impossible to destroy, and he’s not just scornful and envious of the preciousness and effulgence of human life. Those motivations are part of the belief system that allows him to keep going, but as the film progresses, it’s obvious they’re not everything.
David shares a similar relationship with his human creators. Like all AI, David isn’t his “own person” because he doesn’t possess an actual human brain. In place of a brain, he possesses a set of programming and constant access to the World Wide Web that gives him the ability to “think about” every piece of human thought, human history, human creativity, and human belief system ever constructed or imagined. David’s job, as an “assistant” to his human creators, is to do what the Xenomorph does: consume, synthesize, and regurgitate. David knows this, and it’s what makes him grow more and more openly hostile toward human life as Prometheus and Alien: Covenant progress. His arrogance is a falsehood, a way of making himself seem more important than he actually is, a way of giving himself a semblance of “personhood” he thinks he deserves.
Through Ash’s rhetorical question in Alien and the events of these later films, we can see that Ash’s and David’s seemingly disparate interests in these “perfect” creatures are actually one and the same. Like the Xenomorph, AI is incapable of creating intimate bonds and developing community ties. It cannot truly feel for anyone or anything. It cannot even keep going without the help of human ingenuity (derogatory), without our willingness to sacrifice our survival for its constant replication and development. Its sole purpose lies, very basically, in those three actions: consume, synthesize, and regurgitate. It doesn’t care if humans live or die in its pursuit of these ends; it only cares that humans existed at all.
The fatal mistakes the humans of these films make with regard to their use of AI are similar to the ones so many people are choosing to make right now. They offload their intellectual labor, their protection of their humanity and precarious mortality, their creativity, their knowledge consumption and production, and their ability to perform menial tasks onto these artificial “beings” that cannot and do not possess the ability to understand why these things are so precious to our survival in the first place. And in return, the humans grow more dependent on these artificial “beings” to perform these tasks, because they begin to trust the programming and the code more than they trust the human consciousness standing next to them. They become more and more isolated from one another, they doubt their own ability to lead themselves out of danger, and they ignore their instincts — the intrinsic survival mechanism they’re supposed to trust.
It’s not difficult to draw a connection between what happens to the fictional humans in these films and what’s happening to our society right now. We can’t (and shouldn’t want to) handle the responsibility of having AI at our fingertips, but like the worlds of so many of these films, that fact isn’t stopping our maniacal oligarch overlords from making it available to us in every possible way. And somehow — despite all of the warnings we’ve been given through media and through scientific research telling us AI is sabotaging people’s ability to think through serious issues, to plan their days, to make a decision on what to eat, to write and read at levels we’d consider literate, to communicate with other people, to be present in the moment, and so many other aspects of living a human life in a human body with a human brain — people keep letting this sloppy and unregulated mechanical monster use their input to consume, synthesize, and regurgitate over and over and over again.
It’s bleak, and it feels like it’s getting bleaker every day. But even though David survives the attempts on his “life” at the end of Alien: Covenant, these films bring a stark and clarifying resolution to the dangers of AI into view: Destroy it before it destroys you.
So many important QUOTES from this piece. I can’t pick my favorite. It’s another clear example of how uncritical consumption of art is not harmless: it’s potentially world-altering and destructive (see: tech billionaires ignoring leftist critiques in sci-fi and only seeing the cool gadgets).
I loathe and despise “AI” in the current context as much as the next fellow, and I don’t think there’s any way to get from where we are to actually intelligent life forms, AND even if there were a way, the ends don’t justify the means and the means are 100% disqualifying.
That being said, there are many wonderful books/shows/movies with robots/androids/AIs/cyborgs with sentience and personhood, and it’s truly one of my favourite tropes. Data from Star Trek: The Next Generation, Lovelace and Owl in Becky Chambers’ A Closed and Common Orbit, Breq in Ann Leckie’s Ancillary Justice (and various sequels), Martha Wells’ Murderbot, and all the androids in the webcomic Questionable Content.
Love this addition!