TENET ENDING EXPLAINED
Chapter Two, Part Two
It is far more likely that the next generation with their perfected knowledge will find the work of their predecessors bad, and tear down what has been built so as to begin anew.
'The City Coat of Arms'
Franz Kafka

How soon after we began creating art did we begin to sense its inadequacy? It hardly matters what purpose you assign to its creation. Not only is the art we can make not as good at its job as the art we can imagine; even the art we can imagine probably isn’t good enough.
The story of “the city and the tower,” from the Book of Genesis – which evolved into the story of the Tower of Babel – is probably 4,000 years old. In it, to “make a name for ourselves,” all humanity unites behind the creation of a single work of art. God, fearing a creative rival, makes us all speak different languages, so that further collaborations at this scale will be impossible. So, it’s not even really the story of a tower. It’s the origin story for linguistic differences – a story which necessitated the imagination of a creative project worthy of God’s intervention. A very tall building is the best idea the storyteller could come up with. (In the Bible story, the tower wasn’t even intended to literally reach heaven.)
We often turn to size when we try to imagine “art but moreso.” Vast unions of people, oceans of time, planetary resources. But put to what purpose? To create a project somewhat like the ones we already have, but wayyy bigger. You could call this a failure of imagination.
So imagine an artwork that was “moreso,” but not expanded across space or time. Instead, imagine a film of normal running length, with a reasonable budget, but which somehow had a greater density than any movie that had come before it.
The possibility of a mortal writer-director creating a 120-minute Tower of Babel might seem outlandish from our modern perspective. But you might receive a more credulous response if you could travel back through time.
All the way back… to the year 2000.
For the decade of the 2000s saw the rise of storytelling that valued plot mechanics of threshold-crossing complexity. This complexity relied on greater computational power being brought to bear on the screenplay. But this extra power did not require extra resources. It was, in a sense, “free.” Simply the result of new, more efficient screenwriting techniques. A filmmaking movement emerged that believed that this newly available complexity granted access to ideas that had been unattainable via previous methods. But the distinguishing features of this movement were so exhausting to describe that I don’t believe it was ever properly recognized.
Over the decade of the 2010s, esteem for this movement fell dramatically. Not coincidentally, the 2010s coined a term of affectionate derision that is perfect to finally give this movement a name.
IV. The Dawn of “Galaxy-Brained” Cinema
The second century of cinema began with a new millennium on the horizon.
The “founding myth” of the first century of cinema is the 1895 screening of the Lumière brothers’ Train Arriving at La Ciotat Station. As the story goes, the primitive audience of that film panicked, believing that the train barreling toward the camera was real, and could escape the screen and crash into the theater. This panic likely never actually occurred. But the story the myth tells is of a loss of innocence. A thrill that could only be experienced once, before our expectations caught up with the latest technological advancements.
If the art form’s second century wants its own founding myth, I would suggest the twist ending of M. Night Shyamalan’s The Sixth Sense, in 1999, with its analogous unrepeatable thrill.
There had been mind-blowing twist endings before The Sixth Sense. Lists of the greatest twist endings of all time would probably establish 1995’s The Usual Suspects as its most recent precursor. That film centered on the pursuit of the mysterious criminal mastermind Keyser Söze. The twist ending revealed that the seemingly harmless narrator of the film’s events was in fact Söze himself, and that the film’s entire plot, everything that we had just watched, might never have actually happened.
This twist ending had an air of finality. The culprit of the whodunnit was the narrator himself. Astonishing! But, where could you go from there? The last remaining original resolution to a mystery had been used up. The magician will now disappear into his own hat.
But The Sixth Sense presented a dazzling renewal. It not only contained a bravura twist ending, but it suggested that there were in fact limitless twist endings left to be uncovered.
S-P-O-I-L-E-R, the movie presents Bruce Willis as Dr. Malcolm Crowe, a child psychologist. In the prologue, Malcolm Crowe is attacked and shot in his home by a former patient who is cursed with the ability to see ghosts, and who blames Crowe for misdiagnosing him as having a psychiatric disorder. The ensuing trauma derails Crowe’s marriage. But he gets a chance at redemption with a new patient, Cole, a boy who also claims to be able to see ghosts. After he helps the boy find peace, we’re hit with the twist. Crowe, the protagonist of this ghost-story, realizes that he is a ghost himself.
This ending was not entirely original. Ambrose Bierce’s “An Occurrence at Owl Creek Bridge” ended with the revelation that the entire preceding story had been a fantasy in the last moments before the protagonist’s death. However, that twist, much like The Usual Suspects’, undermines the reality of the preceding story. They’re both variants of the “it was all a dream” twist ending.
The Sixth Sense is unique because its twist doesn’t degrade the storytelling that led up to it. It was all a dream, and yet it was all real. Up through the final twist, the film tells two stories. One is a drama about a child psychologist redeeming a traumatic failure. The other is about a ghost achieving closure. It simply tells these two stories using identical scenes. A remarkable trick!
Before The Sixth Sense, it wasn’t that obvious that two movies could coexist on the same strip of celluloid in this way. To pull that off requires extra work during the screenwriting stage. Very specific things need to happen. Dialog will need to be spoken using very specific words. How many such contrivances must be engineered? How many miracles? You might guess that the number approaches infinity.
However, the number actually turns out to be quite manageable. Here’s how M. Night Shyamalan did it.
Throughout The Sixth Sense, there are several scenes where the ghost Malcolm Crowe is seen beside other characters who would be able to perceive him, if he were alive. They never speak to him, and Cole never speaks to him in their presence. This absence is well-disguised in a variety of ways. For example, since Crowe is a psychologist, it seems natural for him to observe a conversation between Cole and his mother, without comment.
But the centerpiece of these scenes is more audacious. Crowe dines at a fancy restaurant with his wife, Anna. On our first watch, we still believe Crowe is alive, as does Crowe himself. He enters the scene apologizing, having forgotten that tonight is their anniversary. It appears that she is angrily refusing to speak to him. After his rambling apology, Anna wishes him a hollow “happy anniversary,” which we take to be an unforgiving admonishment.
But when we revisit this scene with the knowledge that Crowe is a ghost, it becomes clear that she’s not ignoring him. She actually can’t see him. She’s booked a table for one to memorialize her husband’s death, and when she speaks, it is only to wallow in her own grief.
That’s about all there is to it. As always, once the mechanics of the trick are laid bare, it seems less magical. And because the magic happens at the level of story, it’s easy to miss that what allows The Sixth Sense to work is essentially a technical innovation, not all that different from the one that allowed the Lumière brothers to shock audiences. The public simply didn’t yet know that mankind had achieved that level of practical capability.
There followed a rush to explore and exploit this new storytelling mechanic – the “series of artful contrivances” – and a new movement was born. As it developed, a few other habits emerged among the movement’s primary practitioners that are also essential to understanding its character.
V. The Rise of Galaxy-Brained Cinema
Shyamalan, Christopher Nolan, and Rian Johnson were three of the most acclaimed new writer-directors of the 2000s. Their work across the decade defines this movement; it’s all one needs to get a sense of the defining characteristics and fixations of what I’ll call “Galaxy-Brained filmmaking.”
Structure. Many films from this body of work center on extremely novel ideas that require an extra layer of contrivances to wrestle into a story. Many of these concepts can be defined roughly the way The Sixth Sense’s can, as “two films occupying the same sequence of scenes.” There are the two genres – film noir and high school drama – of Brick. The simultaneous forwards- and backwards-unspooling mysteries of Memento. The cons of The Brothers Bloom that are experienced as psychological breakthroughs by their marks.
Character. These films’ approach to character development is necessarily constrained. When you must devise several scenes in your film that successfully serve two purposes – for example, when you have to imagine a dinner between two characters that makes sense whether or not one of the characters can be perceived by the other – then the behavior in these scenes can’t be primarily motivated by character. Character will usually have to be reverse-engineered from the precise function the scene must achieve.
The engineering can’t result in completely inexplicable behavior. Clearing that bar is what makes the contrivance “artful.” But often these films are forced to settle for the explicable, forgoing authenticity.
This is not to say that these directors were incapable of portraying authentic behavior. And, while their plots may require a series of artful contrivances to function, there will still be many scenes in between these contrivances that are not load-bearing, and so are free to be as human as you can make them. The Sixth Sense contains many moving scenes between Cole and Crowe, and especially between Cole and his mom.
But it’s that dinner between Crowe and his wife that is tasked with sustaining the plot twist. And so their relationship must be crafted around this scene.
Shyamalan takes a shockingly direct route. In the opening scene of the film, we meet Crowe and his wife before he is killed. This scene includes exposition establishing that (1) Anna is upset that Crowe neglects her for his work, and (2) when Anna is upset with Crowe she “does the quiet thing” — gives him the silent treatment. To clarify: this is a movie about a child psychologist, and Shyamalan wants to hide from the audience that he is a ghost. So Shyamalan constructs a character who explicitly reacts to her husband… doing his job as a child psychologist… by treating him like a ghost. The needs of the plot have colonized her mind in the most literal way imaginable.
The comedian Nate Bargatze tells a relevant joke about this aspect of The Sixth Sense. He reminisces about 1999 audiences getting caught off guard by the twist ending, “the biggest surprise we’d ever seen in our lives.” The surprise was possible because the viewer thought they were watching “a movie about marriage and how hard marriage is.” They related to the movie on those terms because they immediately related to the concept of women giving a man the silent treatment as punishment for an inscrutable offense. (“Even if you get shot, it’s your fault,” ha ha.)
Though Bargatze presents audiences as coming by this knowledge of female behavior through experience, the joke within the joke is that the “silent treatment” is more familiar as an artificial construct encountered in comedy. Galaxy-brained filmmaking is especially ruthless in defining a “character” as nothing more than “a system of behavioral rules that an audience member can make sense of.” Perhaps the audience is able to make sense of these rules because they are genuinely recognizable as those guiding human behavior. But it’s acceptable if the rules are only familiar from pervasive cliches. Later films test the limits even further, presenting characters whose behavior isn’t familiar at all, but setting aside significant space to explain the rules thoroughly enough that they can be grasped by an audience.
Subject matter. No topics seemed to interest these directors other than the archetypal genres of the past. Their return to these genres isn’t traditionally “revisionist.” They don’t revisit with the dawning moral or political awareness of their time. Nor do they add anything personal to their work. The auteurs of the 90s often seemed to refer to earlier movies, in part as if to say, “I am defined by my love for these films.” The Galaxy-brained filmmakers of the 2000s refer to earlier films, but not only is there no “love,” there is no “I.”
All they bring with them to these old familiar tales of private eyes, time travelers, con men, and flying saucers are updated plot mechanics. The new techniques in all their dazzling complexity. The result is bracingly anti-nostalgic. All of these genres must be revisited, not out of reverence, but because the ape cinema of the 20th century was not sophisticated enough to properly exploit them.
Theme. You might expect that a cinema that’s fixated on impersonally updating old genres with new, more powerful hardware wouldn’t have any consistent thematic concerns. Yet one did emerge.
Again, Shyamalan provided the turning point. Unbreakable, his 2000 followup to The Sixth Sense, tells the origin story of a superhero. The hook is that Unbreakable would be “the first truly grounded comic book movie,” which is to say, it isn’t set in the heightened reality of comic books, but in a reality more like our own. The movie eschews many of the expected pleasures of the traditional superhero film. David Dunn, who discovers he has superhuman invulnerability after he’s the sole survivor of a train crash, doesn’t fight supervillains, and doesn’t adopt a superhero name. Unbreakable’s sole contrivance comes in the last seconds, when Dunn’s accomplice, a comic book obsessive who coaxes Dunn into interpreting his abilities as those of a superhero, reveals that he caused the train crash himself, and that he views himself as a real-life supervillain, one that might appear as the antagonist in a comic book story.
What is a story? One might answer that it’s a narrative centered around the transformation of a person. Yet, real life also has people; people who transform. Why then does real life feel like it’s emphatically not a story? This question became a fixation. But this movement did not take for granted that the separation between audience and narrative was a fixed aspect of reality. Rather, it saw that barrier as one that could be overcome, with Galaxy-brained advancements in storytelling technology seeming to bring the solution within tantalizing reach.
This obsession manifested across the movement in several ways. A frustration with the artifice of the traditional story is a recurring element in these narratives, which more than once revolved around artists driven to completely transcend the limits of their art form. The 19th century magicians of Nolan’s The Prestige, for example, find typical Victorian stage magic to be hollow, and seek to develop truly unsolvable tricks. To stay ahead of their audience, they turn to advancements in technology, and make unnatural personal sacrifices.
Similarly, the con artists of The Brothers Bloom, who consider themselves to be storytellers, have also transcended their art form. Their cons do not end with the mark realizing, too late, that they’d been tricked. Rather, their cons are totalizing experiences in which the mark, despite having parted with a sizable amount of money, nonetheless returns happily to their lives, believing all of the events that have happened to them were real – that they have lived through a story.
Even more striking: the strange plot device central to The Brothers Bloom – a large team of operatives constructs an alternate reality, with the aim of gently conning a billionaire out of some of their fortune, all by convincing them that they’ve experienced the highs and lows and catharsis of a bonafide adventure – shows up at least two more times: earlier, in David Fincher’s The Game, and later, in Inception.
This repetition adds some color to the Galaxy-brained fixation on the membrane between storybook reality and our own. Plainly, there’s a supposition that living through a story would be desirable. A certain type of story, at least – one in which you are the main character, and in which reality is shaped around both your strengths and your weaknesses, shaped ultimately so that your strengths triumph and your weaknesses are vanquished. However, this story is usually shaped around a fictional character by an author, and the two are separated by an impenetrable veil.
If we were to imagine how to overcome this barrier, we might first imagine that it would take a lot of money. And, going back to Westworld, the 1973 film that Christopher Nolan’s brother would go on to adapt, we see a familiar trope: people paying money to live the life of a fictional character, in this case with a supporting cast of robots. The problem is that fictional characters don’t know that they’re in a story. That aspect of it turns out to be crucial, which means the theme park of Westworld is at best a partial solution. In the “sucker billionaire” story model, the billionaire pays for the theme park without realizing it. It’s a proof of concept, demonstrating that it is theoretically possible for a human being to become a protagonist, living alongside their own authors.
Still, there are only so many ways of telling the story of a sucker billionaire. Shyamalan’s innovation – taking a genre but making it “grounded” – proved the most repeatable of the techniques the movement developed to make a “real-life story” feel possible. And what he started with Unbreakable, Nolan was able to top.
VI. Peak Galaxy-Brained Cinema
In 2003, Warner Brothers hired Christopher Nolan to direct a movie about Batman. At the time, the re-exploitation of a fictional character that had appeared in a film in recent memory still felt outré. But the 80s and 90s had seen four Batman films, each perceived as more desperate and gauche than the last. And so while Batman Begins, which “re-booted” the franchise, was the movement’s greatest slap in the face to 20th century cinema, it was a widely welcomed rebuke.
It’s hard to overstate the acclaim that was showered upon Nolan’s Batman Begins and its sequels. Even the marketing couldn’t say anything about them that a member of the public would be unlikely to second. When the eventual third film was completed, Warner Brothers released a triumphant feature-length DVD extra titled “Behind The Scenes of The Dark Knight Trilogy” that, despite being pure hype, accurately captures the adulation that greeted the films.
“In the beginning of the 21st century,” begins the introductory talking head, “audiences had grown too sophisticated” for unserious takes on superheroes. Batman Begins finally delivered for these post-millennial moviegoers. Instead of qualifying their enjoyment by saying it was merely “a great superhero movie,” they could now, for the first time ever, say that a great superhero movie was also “a great film.”
Nolan himself appears in the documentary to clarify his creative intent, focusing on “a gap” in Batman’s origin story that hadn’t been “addressed” in either the comics or in earlier Batman films. He goes on to articulate what is clearly the founding vision for his take on the superhero: create a “cinematic reality” that “gives the world of the story and the characters the same validity as they would if your source material were not a comic book.”
Batman Begins would top Unbreakable, the original “grounded comic book movie,” in two ways. Shyamalan’s film had conceded that a superhero’s origin would have to be supernatural. Batman Begins demonstrated that an origin story could emerge even under the rules of logic that govern our reality. Unbreakable felt it honest to acknowledge that if a superhero were magically manifested into our reality, their life would be banal – they’d have to busy themselves with low-level criminals who were not worthy adversaries. Batman Begins aimed to prove that a comic book story could be grounded without sacrificing any of the bells and whistles that comic book fans feel entitled to.
How does Nolan engineer a comic book story that is simultaneously real? Through the only storytelling tool that can accomplish this task. A series of artful contrivances.
To begin, he pre-programs the demands of his plot into his main character’s psyche. In this case, he gives Bruce Wayne a formative childhood experience that leaves him afraid of bats. Thus, after numerous people he meets teach him that he will need to become not just a man but “an idea” or “a symbol” if he wants the criminals of Gotham City to fear him, he can simply act on that programming, and adopt the bat as his symbol.
Our main character is also a billionaire, which always makes things easier. Obviously, Bruce Wayne is canonically a billionaire. But the 2005 Bruce is different from his predecessors, in ways that resemble the “billionaire marks” of The Brothers Bloom and Inception. Traditionally, Bruce Wayne decides to become Batman after his parents are murdered, and spends his vast fortune crafting the paraphernalia that a Batman will need. Cause and effect are reversed in Batman Begins – the money acts first. As Nolan has said, he did not originally believe that the Batmobile could be worked into a “grounded” superhero story. But the Wayne fortune allows contrivances that shorten the psychological distances Bruce must travel. As Bruce is toying with the decision to become a vigilante crime-fighter, he discovers that among his holdings is a weapons manufacturing company that has already developed prototypes for equipment that resembles the Batmobile and Batarangs we’re familiar with. There is even a shapeshifting substance that can take any form. Everything effortlessly takes the shape that Bruce’s psychology imprints on it. All he has to do is have his gear painted black.
The coda of the movie is a true diamond, the most perfect expression of the movement’s fixation with bringing fiction into reality. Batman Begins is a relatively restrained affair for Batman, in which he faces off against mobsters and terrorists with only a marginal flair for the dramatic. But during the epilogue, Commissioner Gordon tells Bruce that his creation of a Batman persona has prompted “escalation”; has “changed things.” Criminals are now adopting their own personas. A bank robber has begun referring to himself as The Joker.
Obviously, Nolan would like to expand the franchise to include more of the rogues gallery familiar to Batman fans. Yet he must also maintain his franchise’s unique sense of groundedness. The assertion that Batman’s villains will manifest as the inevitable result of Bruce Wayne’s rational-in-context decision to “become a symbol,” is, toward this purpose, a blank check.
The implication, though perhaps not consciously worked through, is earth-shaking. Given infinite time, our reality will eventually create the circumstances necessary to motivate a rational person to become a comic book superhero. From that point, the contagion will spread, and our reality will be fundamentally altered. Comic book reality, storybook reality, is thus inevitable. The transformation might not come in our lifetime, but we can be soothed that our reality is just the precursor to theirs.
The Dark Knight, the sequel film, pays this promise off by portraying the Joker as, essentially, a fictional character who has entered the real world. While the Joker is often presented as a guy who fell into a vat of acid that turned his skin and hair into clown colors, Nolan’s Joker ridicules the concept of origin stories, and does not have a literal origin of his own. He’s a person who has been infected by an idea. (Other characters in the story are infected with the same idea by the Joker’s contagious charisma). Or perhaps he’s an idea that took human form, a living embodiment of the conclusion to a logical proof.
The classic, Aristotelian purpose of storytelling is to excavate universal truths by telling purely iconic stories that capture the essence of reality in a way that life, in its specificity, never can. Contrivance-based storytelling moves in the opposite direction. A series of disguised coincidences are employed to make unlikely events seem plausible. Why do this, unless you had a storytelling idea so profound that merely demonstrating that it could happen would change the way humans perceived reality? If you could demonstrate that our reality would eventually produce the Joker, in the flesh, could you convince millions of moviegoers to fear L’arrivée d’un clown, even after they’d left the theater?
“Mass hysteria” may be too strong a term, but fear of the Joker took hold to a not-insignificant degree. It relied on a series of tragedies – the death of Heath Ledger, and a mass shooting at a movie theater playing the third Batman film, whose perpetrator was falsely reimagined as wearing Joker makeup. But that alone does not explain why, in 2019, when the Joker was given his own contrived origin film, the film was treated by critics as if it were actually dangerous. Screenings were attended by police officers, and cinemas changed their policies to ban cosplayers from wearing Joker makeup at screenings.
The barrier between reality and fiction may still be intact, but Nolan’s fastball left a pretty big dent. Yet the techniques he devised would soon be co-opted, in a way that spelled the end of his movement.
VII. The Doom of Galaxy-Brained Cinema
In “The City Coat of Arms,” Franz Kafka’s parable about the Tower of Babel, the tower is an irresistible concept that dominates the imagination of generations, while remaining an abstraction. “The idea, once seized in its magnitude, can never vanish again,” he wrote. But as his parable continues, it becomes clear that the idea in question is not the Tower itself, but the “necessary unity” needed to bring the Tower into existence.
In 2008, Marvel Studios released Iron Man. That film ends like Batman Begins does, with a teaser promising more. In this case, more was much, much more. A cinematic universe! The idea, once seized in its magnitude, could never vanish. The Tower had cast its spell on a new generation.
The Marvel Cinematic Universe was launched with three interconnected feature films, each centering a character from Marvel Comics that had not yet been exploited – Iron Man, Captain America, and Thor. These were secondary characters, and their specific eccentricities had not been digested into the culture with the thoroughness of Batman, Superman, or Spider-Man. It might have seemed risky to ask audiences to accept a reality in which these outlandish and unfamiliar characters made sense. But Batman Begins had demonstrated that you could have comic book characters without comic book reality. You could slowly coax them into a reality more like our own, through a series of small contrivances.
Early films in the MCU repeated many of the techniques Nolan had employed. Origin stories had their gaps plugged. Captain America had once been a symbol of patriotism created in a lab to fight Hitler. Now he was created as a mascot to sell war bonds, who, through a series of coincidences, ended up in direct confrontation with the Nazis. Nolan had reimagined the Batman villain Ra’s al Ghul as a (Keyser Söze-esque) fictional persona devised to add a bit of exotic oomph to an otherwise quotidian criminal organization. The MCU repeated this maneuver with Iron Man’s nemesis The Mandarin. Nolan conceived of each film in his Batman trilogy as a comic book movie hybridized with a unique second genre. Marvel made this a systematic practice, widely promoting the idea that films featuring Captain America doubled as political thrillers, while films featuring Thor doubled as high fantasy.
Nolan’s Batman trilogy hit a point where comic book reality became “locked in.” It no longer needed to explain the existence of comic book heroes and villains in the real world, because the presence of one justified the presence of all. The MCU hit the same turning point. After The Avengers teamed up the protagonists of its first three films in a climactic “Battle of New York,” subsequent films dealt with an ordinary world coming to terms with their new comic book reality, often resulting in regular people being inspired to imitate the first generation of superheroes and villains. With the transformation effected, the MCU then needed a new reason to exist. And by that point, it had already developed one – the pursuit of size for its own sake.
The Avengers concluded “Phase One” of the Marvel Cinematic Universe. Suddenly, a new unit of storytelling had been invented! And a new type of storyteller: the overseer of a universe, a position of such altitude it has not yet even been given a name. Disney purchased Marvel Studios, and the seemingly impregnable licensing agreements that had long sequestered Marvel’s most iconic characters in the outputs of competing studios began melting away. It was hard not to be awed at this groundbreaking ambition and power.
There was opposition. Marvel movies were soon perceived as crowding other movies out. You could decry this. Martin Scorsese became a prominent critic, worrying that Marvel had become the “primary choice” in movie theaters, displacing films that were “the unifying vision of an individual artist.” But at that point, Marvel movies were so central to the culture that any opposition would come to define you. In this way, you became part of the structure of the Tower despite yourself, a grinning gargoyle on the windowsill.
This kind of project requires the collaboration of millions of people. Not just to buy tickets, but to keep the project always at the center of the public conversation. MCU enthusiasts would defend the ubiquity of superheroes by comparing them to the ancient pantheons of gods. When Scorsese defended the value of individual artists, the reply was as inevitable as it was dispiriting. What was so special about the individual? MCU films may follow a formula, but weren’t Scorsese’s equally homogenous? These weren’t arguments on behalf of the MCU films’ quality, but justifications for their dominance. The MCU achieved its size because of a broadly shared understanding that dominance had become the point.
The perfect indicator of where this was heading was the suggestion, shyly dropped here and there, that Marvel tell stories about the ordinary citizens of the MCU. What is it like to be a regular person in a superhero world? You can sense the underlying wish for the MCU to become a real universe, fully-populated, with a story for each one of its trillions of residents. We arrived at yet another manifestation of that egoist desire to “live in a story.” While Nolan suggested that, given infinite time, a storybook reality will eventually supplant our own, the MCU stoked dreams of the inverse: a universe that takes up infinite space, guaranteeing that you can find your own doppelgänger inside it.
Over the long run, the foundation did not hold. In Kafka’s telling, as each generation realizes that the Tower will not be completed in its lifetime, it becomes distracted by a more immediate task: improving the living quarters of the Tower’s workforce. This fixation overtakes the primary work. Different nations vie to have the best accommodations for their workers. The MCU, similarly, took on the obligation of celebrating all of the world’s cultures, and the task of apportioning space in its universe to the satisfaction of all. (After seventeen consecutive films about white protagonists, this probably should have been considered a penance. But because the MCU was becoming the world, racial and gender breakthroughs within its retrograde reality were celebrated as if they were milestones for humanity at large.)
Then the overarching story the MCU had been telling for a decade came to a satisfying conclusion in 2019, and, in quick succession, a global catastrophe shut down movie theaters for the better part of two years. But the MCU kept expanding, adding a streaming service to carry in-universe TV shows that were quickly understood to be “homework.” The films, which had once received overwhelming praise, declined in quality, and began to get bad reviews. Audiences turned on the MCU.
The spell broke. But in the public discourse, the pendulum swung so vehemently in the other direction that Galaxy-brained filmmaking, the precursor movement to the MCU, could not benefit. Plot was out. Sensualism was in. Color! Motion! The new term of art was “sheen” – a way of poetically reducing a movie’s identity to the way its light felt on your eyes. Nostalgia was making a comeback. As a way of rejecting the digital unreality of the MCU, critics rediscovered the old-school action movie, becoming unusually invested in championing an Academy Award for stunt performers. We were getting out of our heads, and back in touch with our bodies.
And we were starved for sex!
The critic RS Benedict’s excellent “Everyone Is Beautiful and No One Is Horny,” from 2021, is the defining essay of this moment of cinematic hedonism. In the piece, she bleakly surveys a degraded film landscape, dominated by superheroes with immaculate bodies who never seem to fuck. (Film critics say “fuck” in this new era. That’s how much they hate superheroes.) The piece is ostensibly an attack on the Marvel Cinematic Universe, but it also takes a shot at Christopher Nolan, whose “inexplicably sexless oeuvre” is used as a stand-in for everything that’s gone wrong with 21st century cinema. In fact, he’s the only director referred to by name.
VIII. Galaxy-Brained Cinema Ending Explained
It makes sense to categorize the work of Christopher Nolan with the output of the Marvel Cinematic Universe. Both bodies of work are miracles of engineering, because that is what they aspire to be. Representing recognizable human behavior is not a priority of either.
However, there’s a crucial distinction to be drawn between the work of the Galaxy-Brained filmmakers and the work of Disney’s Marvel Studios. Christopher Nolan, M. Night Shyamalan, and Rian Johnson are individual people. Their grandest projects are bounded by that limitation. They may have to work harder than most on their screenplays. A recent profile describes Rian Johnson’s harrowing experience writing his latest film, Wake Up, Dead Man. He “couldn’t see his way out of the maze.” That quote irresistibly calls to mind another, from Nolan’s Inception. Elliot Page’s Ariadne is given a simple pen-and-paper test to see if she’s worthy of joining an elite team of literal dream-weavers. “You have two minutes to design a maze that takes [at least] one minute to solve.” That’s all the individual author has to give. Minutes. The best can perhaps improve the ratio of time spent in creation versus consumption, but the ratio will always be unfavorable.
Projects like the Marvel Cinematic Universe make a similar offer: art that is somehow “more.” But they demand a different scale of resources. The spell of one particular Tower has broken. But the idea of handing over responsibility for the creation of most of the planet’s art to an artificial intelligence that consumes most of the planet’s electricity, entrancing to many, is gathering force: yet another manifestation of the same desires. One often-heard defense of computer-generated art is that most movies made by individuals suck, anyway.
When Nolan’s Tenet was released in 2020, it was, inevitably, greeted by a host of articles titled “Tenet Ending Explained.” Explaining the plot of every movie was a job that most entertainment publications had taken on. However, none of these “Ending Explaineds” fully unraveled the film’s plot. There are a few reasons for this. Most obviously, because the film found a way to out-do even its Galaxy-Brained predecessors in terms of complexity, and would take an unreasonable amount of time to fully diagram. (If we accept Nolan’s 2:1 maze construction-to-solution ratio, and his account of the length of time he spent developing the script, this implies that the truly complete Tenet Ending Explained would be the work of years.)1
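(To spell out that back-of-the-envelope arithmetic – a sketch only, treating the script’s total development time as an unknown T, since accounts of it vary – the Inception test fixes the ratio of construction to solution:

\[
\frac{t_{\mathrm{construction}}}{t_{\mathrm{solution}}} = \frac{2\ \mathrm{minutes}}{1\ \mathrm{minute}} \;\Longrightarrow\; t_{\mathrm{solution}} = \frac{T}{2}
\]

Any T measured in years implies a solution – an Ending Explained – also measured in years.)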
Secondly, in many of those original attempts, you saw that their authors were a bit embarrassed. There was a sense that Nolan’s methods had been discredited, and that works of pure engineering were not worthy of the obsessive attention it would require to disentangle them. “The idea of doing homework to explain a movie might not sound exciting,” was a common disclaimer. This “homework” could only be justified because it allowed you to appreciate the film’s other qualities. “Once you know what’s going on, there’s a lot to like in Tenet.”
And thirdly, the unstated purpose of the Ending Explained is to solve all the riddles of a maze. Once it’s discovered that the maze itself has no solution, because it is faultily constructed, the work of the Ending Explained stops. But Tenet is not only exceedingly ambitious, it is an engineering failure. Nolan tries to hide the cracks, and even goes so far as to play into the current anti-homework bias in order to discourage anyone from searching for them: a character speaks the lines “Don’t try to understand it. Feel it.”
It’s easier to defend Tenet by suggesting that, at the heart of all its clattering mechanics, there is a beating heart. Some human feeling. But that’s to shy away from its true essence. The most valuable thing about Tenet is its plot. That’s also what makes it the work of an individual. Tenet employs techniques that Nolan mastered, and which today are wildly out of style. It is the culminating work of the movement he defined. It calls back to the beginning of his career, and then back further, to the dawn of cinema, attempting to answer the fundamental question raised by the invention of the motion picture camera: “given that a film reel may, through the same mechanism, be played backwards or forwards, mustn’t it also be demonstrably true that time could flow in both directions?” It fails on its own terms, often dramatically. It is cold, and often tacky, tedious, and evasive. But to celebrate the work of the individual (if we are to continue with that practice), individual failures must also be valued; must at least be understood in all their specifics. The mechanical failure of Tenet is gloriously profound! There is nothing left to do but to explain it.
A modest proposal! Most films, upon delivery to their distributors, are contractually required to be accompanied by a document called the Combined Continuity and Spotting List, or CCSL. The “spotting list” catalogs the precise timing of all of the movie’s dialogue, music, and sound effects. It is essential for the placement of subtitles. The “continuity list” does the same for a film’s visual information. Producing the continuity list is a far more involved task. And yet, compared to the spotting list, the continuity list serves little purpose. (Often, the producer of a film can talk the distributors into letting them off the hook with just a spotting list.) The practice of requiring a full CCSL is often justified by referring to it as a “legal document.” It is said that the CCSL, as an exhaustive description of a film’s sensory information, is legally equivalent to the film, and thus validates its existence. Using this same folkloric legal argument, I propose that the producers of a film should also be contractually responsible for producing its Ending Explained. CCSLEE. Taking into account the time and money required, it makes more sense to assign responsibility for a truly exhaustive Ending Explained to the production company than to dozens of competing film websites. With this understanding, and taking into account that at the highest product tier, a CCSL costs about $40 for every minute of the film, this Substack’s forthcoming Tenet Ending Explained should be valued at roughly $6,000.
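(For the record, the arithmetic behind that valuation – assuming Tenet’s roughly 150-minute running time – is simply:

\[
150\ \mathrm{minutes} \times \$40/\mathrm{minute} = \$6{,}000
\]

The figure is exactly as rigorous as the folkloric legal argument it rests on.)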