When Fiction Can't Keep Up ✍️
As someone studying machine learning while also doing theater, I find myself thinking a lot about how Hollywood portrays artificial intelligence. A New Yorker article by Inkoo Kang got me reflecting on something I've noticed: TV shows often default to negative portrayals of AI, and while I haven't watched most of these specific shows myself, the pattern raises interesting questions about how we think about technology and creativity.
Hollywood's Negative Lens 😒
The article discusses several shows where AI appears as a threat or problem. "The Morning Show" features buggy translation tools and privacy-invading chatbots. "Black Mirror," which used to be known for thought-provoking tech commentary, apparently now treats AI more as a plot device than as something to seriously examine. Even shows trying to be sympathetic, like "Murderbot," focus on making AI relatable rather than exploring deeper implications.
This tendency toward negative portrayals isn't good or bad in itself, but it's worth noting. On one hand, there are real concerns about AI that deserve exploration. On the other, when nearly every depiction skews negative, we might be missing part of the picture.
The Ethical Questions in Theater and AI 🫨
What interests me more than whether AI is portrayed positively or negatively is the ethical complexity that seems to be missing from these shows. As both an ML student and someone involved in theater, I see these questions playing out in real time.
In theater, we're dealing with fundamental issues about human presence and authenticity. Theater has always been about live performance, about the unrepeatable moment between actor and audience. When AI enters this space, it raises questions that go beyond "is this good or bad?"
For instance, the article mentions that the daughters of Robin Williams and Martin Luther King Jr. have had to ask people to stop creating deepfakes of their fathers. This touches on consent, ownership of one's image, and what happens to someone's likeness after they die. These aren't simple good-versus-evil questions. They're genuinely complex ethical territory.
From the ML side, I'm learning how these systems actually work, which makes the ethical questions more concrete. When you understand that large language models are trained on massive datasets, often without clear consent from creators, you start asking different questions. It's not "is AI bad?" but rather "who benefits from this technology, and who bears the costs?"
What's Missing From the Conversation 🧐
The article points out that AI is already affecting workers in creative fields like animation, costume design, and special effects. During the Hollywood strikes, AI was a major issue because workers could see how it might be used to replace them.
This is where the ethics get complicated in ways that simple negative portrayals don't capture. AI tools can genuinely help artists work more efficiently. They can enable new forms of creativity. But they also threaten livelihoods and raise questions about what we value in art.
In theater specifically, there's a question about what makes performance meaningful. If an AI could perfectly replicate a human performance, would it matter? Most people in theater would say yes, it matters deeply, but articulating why is harder than it seems. It has something to do with presence, vulnerability, the knowledge that what you're witnessing is happening in real time with real risk.
The Corporate Dimension 😠
One show the article highlights as getting closer to current anxieties is "Alien: Earth," which apparently depicts a future where corporations have replaced democratic governance. The article notes that the show's portrayal of "internecine battles between callous, self-involved plutocrats" doesn't feel far from our current situation.
This corporate angle is crucial to the ethics of AI in creative fields. The technology itself isn't inherently good or bad, but the way it's being deployed often serves corporate interests over worker interests. The article mentions that some AI firms predict eliminating half of all entry-level jobs by 2030, while top researchers command nine-figure salaries.
For someone learning ML while also working in a creative field, this creates real tension. The technology is intellectually fascinating. The potential applications are genuinely interesting. But the actual implementation often seems designed to concentrate power and wealth rather than to help working artists.
Beyond Good and Bad 😇👿
This is why I think the tendency toward negative portrayals in Hollywood is worth examining, even if it's not inherently wrong. The real ethical questions aren't "is AI good or bad?" They're more like:
How do we ensure that artists maintain ownership over their work and likeness?
What happens to consent when AI can generate convincing imitations of real people?
How do we balance technological efficiency with the value of human labor?
What makes live performance meaningful in an age of perfect simulation?
Who gets to decide how this technology is used, and who benefits from those decisions?
These questions don't have clear answers, and they probably shouldn't be portrayed as having clear answers. The article suggests that shows like "Black Mirror" have lost their edge partly because they've moved away from exploring these ambiguities.
Reality Outpacing Fiction 🫥
What struck me most about the article is the observation that reality has become stranger than most TV writers dare to imagine. The article mentions chatbots affecting people's mental health and AI-generated content making it harder to trust what we see and hear. These aren't future scenarios. They're happening now.
As someone studying how these systems work, I know that current AI is "the worst the technology will ever be." It will keep improving. And as someone in theater, I know that once audiences lose the ability to trust their own perception, something fundamental changes about how we experience art and truth. As a student surrounded by people who both love and hate AI, I also know this debate is everywhere.
This doesn't mean AI is bad. It means we need better frameworks for thinking about it than simple good-versus-evil narratives. We need to grapple with the messy reality that the same technology can enable new forms of creativity while threatening existing creative workers, and that it can democratize access to certain tools while concentrating power in the hands of a few large companies.
What We Actually Need 🤗
The article argues that Hollywood needs to "confront and compete with" the reality of AI if it wants to help make sense of what's coming. I think that's right, but I'd add that we need portrayals that embrace ethical complexity rather than defaulting to either techno-utopianism or dystopia.
We need stories that show people genuinely wrestling with these questions. Not villains trying to replace all humans with robots, and not heroes stopping AI from taking over the world. Just people trying to figure out how to use powerful tools responsibly, how to protect their livelihoods and dignity, how to maintain what's meaningful about human creativity in a changing landscape.
As someone positioned between the technical and artistic sides of this conversation, I see how much nuance gets lost when we reduce AI to a threat or a savior. The ethical questions are harder and more interesting than that.