
Don’t Tell Me What Happens. I’m Recording It.

Chuck Klosterman asks: What is the future of TV?

By Chuck Klosterman

Chuck Klosterman’s ninth book, But What If We’re Wrong?, considers the possibility that contemporary people might be incorrect about some of our most deeply held, fundamentally unquestioned opinions and beliefs. Subtitled Thinking About the Present As If It Were the Past, the book explores how (and why) societies in 100 or 300 or 1,000 years might hold radically altered memories of the literature, entertainment, science, and politics of the early 21st century, contradicting the way those concepts are considered in the present. The following excerpt visualizes how television will be remembered in a distant future when TV no longer exists.

Television is an art form where the relationship to technology supersedes everything else about it. It’s one realm of media where the medium is the message, without qualification. TV is not like other forms of consumer entertainment: It’s slipperier and more dynamic, even when it’s dumb. We know people will always read, so we can project the future history of reading by considering the evolution of books. (Reading is a static experience.) We know music will always exist, so we can project a future history of rock ’n’ roll by placing it in context with other genres of music. The internal, physiological sensation of hearing a song today is roughly the same as it was in 1901. (The ingestion of sound is a static experience.) The machinery of cinema persistently progresses, but how we watch movies in public — and the communal role cinema occupies, particularly in regard to dating — has remained weirdly unchanged since the fifties. (Sitting in a dark theater with strangers is a static experience.) But this is not the case with television.

Both collectively and individually, the experience of watching TV in 2016 already feels totally disconnected from the experience of watching TV in 1996. I doubt the current structure of television will exist in two hundred fifty years, or even in twenty-five. People will still want cheap escapism, and something will certainly satisfy that desire (in the same way television does now). But whatever that something is won’t be anything like the television of today. It might be immersive and virtual (like a Star Trekian holodeck) or it might be mobile and open-sourced (like a universal YouTube, lodged inside our retinas). But it absolutely won’t be small groups of people, sitting together in the living room, staring at a two-dimensional thirty-one-inch rectangle for thirty consecutive minutes, consuming linear content packaged by a cable company.

Something will replace television, in the same way television replaced radio: through the process of addition. TV took the audio of radio and added visual images. The next tier of innovation will affix a third component, and that new component will make the previous iteration obsolete. I have no idea what that third element will be. But whatever it is will result in a chronological “freezing” of TV culture. Television will be remembered as a stand-alone medium that isn’t part of any larger continuum [footnote 1] — the most dominant force of the latter twentieth century, but a force tethered to the period of its primacy. And this will make retroactive interpretations of its artistic value particularly complicated.

Here’s what I mean: When something fits into a lucid, logical continuum, it’s generally remembered for how it (a) reinterprets the entity that influenced its creation, and (b) provides influence for whatever comes next. Take something like skiffle music — a musical genre defined by what it added to early-twentieth-century jazz (rhythmic primitivism) and by those individuals later inspired by it (rock artists of the British Invasion, most notably the Beatles). We think about skiffle outside of itself, as one piece of a multidimensional puzzle. That won’t happen with television. It seems more probable that the entrenched memory of television will be like those massive stone statues on Easter Island: monoliths of creative disconnection. Its cultural imprint might be akin to the Apollo space program, a zeitgeist-driving superstructure that (suddenly) mattered more than everything around it, until it (suddenly) didn’t matter at all. There won’t be any debate over the importance of TV, because that has already been assured (if anything, historians might exaggerate its significance). What’s hazier are the particulars. Which specific TV programs will still matter centuries after the medium itself has been replaced? What TV content will resonate with future generations, even after the technological source of that content has become nonexistent?

These are queries that require a thought experiment.

2.

Let’s pretend archaeologists made a bizarre discovery: The ancient Egyptians had television. Now, don’t concern yourself with how this would have worked.[footnote 2] Just pretend it (somehow) happened, and that the Egyptian relationship to television was remarkably similar to our own. Moreover, this insane archaeological discovery is also insanely complete — we suddenly have access to all the TV shows the Egyptians watched between the years 3500 and 3300 BC. Every frame of this library would be (on some level) interesting. However, some frames would be way more interesting than others. From a sociological vantage point, the most compelling footage would be the national news, closely followed by the local news, closely followed by the commercials. But the least compelling material would be whatever the Egyptians classified as their version of “prestige” television.

The ancient Egyptian Breaking Bad, the ancient Egyptian House of Cards, the ancient Egyptian rendering of The Americans (which I suppose would be called The Egyptians and involve promiscuous spies from Qatna) — these would be of marginal significance. Why? Because the aesthetic strengths that make sophisticated TV programs superior to their peers do not translate over time. Looking backward, no one would care how good the acting was or how nuanced the plots were. Nobody would really care about the music or the lighting or the mood. These are artful, subjective qualities that matter in the present. What we’d actually want from ancient Egyptian television is a way to look directly into the past, in the same manner we look at Egyptian hieroglyphics without fixating on the color palette or the precision of scale. We’d want to see what their world looked like and how people lived. We would want to understand the experience of subsisting in a certain place during a certain time, from a source that wasn’t consciously trying to illustrate those specific traits (since conscious attempts at normalcy inevitably come with bias). What we’d want, ultimately, is “ancillary verisimilitude.” We’d want a TV show that provided the most realistic portrait of the society that created it, without the self-aware baggage embedded in any overt attempt at doing so. In this hypothetical scenario, the most accurate depiction of ancient Egypt would come from a fictional product that achieved this goal accidentally, without even trying. Because that’s the way it always is, with everything. True naturalism can only be a product of the unconscious.

So apply this philosophy to ourselves, and to our own version of televised culture: If we consider all possible criteria, what were the most accidentally realistic TV shows of all time? Which American TV programs — if watched by a curious person in a distant future — would latently represent how day-to-day American society actually was?

(Penguin Random House)

This is the kind of question even people who think about television for a living don’t think about very often. When I asked The Revolution Was Televised author Alan Sepinwall, he noted the “kitchen-sink realism” of sitcoms from the seventies (the grimy aesthetics of Taxi and the stagnation of Barney Miller, a cop show where the cops never left the office). New Yorker TV critic Emily Nussbaum suggested a handful of shows where the dialogue captured emotional inarticulation without the crutch of clichés (most notably the mid-nineties teen drama My So-Called Life). Still, it’s hard to view any of the programs cited by either as vehicles for understanding reality. This is not their fault, though: We’re not supposed to think about TV in this way. Television critics who obsess over the authenticity of picayune narrative details are like poetry professors consumed with penmanship. To attack True Detective or Lost or Twin Peaks as “unrealistic” is a willful misinterpretation of the intent. We don’t need television to accurately depict literal life, because life can literally be found by stepping outside. Television’s only real-time responsibility is to entertain. But that changes as years start to elapse. We don’t reinvestigate low culture with the expectation that it will entertain us a second time — the hope is that it will be instructive and revelatory, which sometimes works against the intentions of the creator. Take, for example, a series like Mad Men: Here was a show set in the New York advertising world of the 1960s, with a dogged emphasis on precise cultural references and era-specific details. The unspoken goal of Mad Men was to depict how the sixties “really” were. And to the present-day Mad Men viewer, that’s precisely how the show came across. The goal was achieved. But Mad Men defines the difference between ancillary verisimilitude and premeditated reconstruction. Mad Men cannot show us what life was like in the sixties. Mad Men can only show how life in the sixties came to be interpreted in the twenty-first century. Sociologically, Mad Men says more about the mind-set of 2007 than it does about the mind-set of 1967, in the same way Gunsmoke says more about the world of 1970 than the world of 1870. Compared to The Andy Griffith Show or Gilligan’s Island, a mediated construct like Mad Men looks infinitely more authentic — but it can’t be philosophically authentic, no matter how hard it tries. Its well-considered portrait of the sixties can’t be more real than the accidental sixties rooted in any 1964 episode of My Three Sons. Because those 1964 accidents are what 1964 actually was.

3.

My point is not that we’re communally misguided about which TV series are good, or that prestige programming should be ignored because the people who make it are too aware of what they’re doing. As a consumer, I’d argue the opposite. But right now, I’m focused on a different type of appreciation. I’m trying to think about TV as a dead medium — not as living art, but as art history (a process further convoluted by the ingrained reflex to never think about TV as “art,” even when it clearly is). This brand of analysis drives a certain type of person bonkers, because it ignores the conception of taste. Within this discussion, the quality of a program doesn’t matter; the assumption is that the future person considering these artifacts won’t be remotely concerned with entertainment value. My interest is utility. It’s a formalist assessment, focusing on all the things a (normal) person is not supposed to (normally) be cognizant of while watching any given TV show. Particularly . . .

  1. The way the characters talk.
  2. The machinations of the world the characters inhabit.
  3. The manner in which the show is filmed and presented.
  4. The degree to which “realness” is central to the show’s ethos.

That first quality is the most palpable and the least quantifiable. If anyone on a TV show employed the stilted, posh, mid-Atlantic accent of stage actors, it would instantly seem preposterous; outside a few notable exceptions, the goal of televised conversation is fashionable naturalism. But vocal delivery is only a fraction of this equation. There’s also the issue of word choice: It took decades for screenwriters to realize that no adults have ever walked into a tavern and said, “I’ll have a beer,” without noting what specific brand of beer they wanted [footnote 3] (an interaction between Kyle MacLachlan and Laura Dern in the 1986 theatrical film Blue Velvet is the first time I recall seeing the overt recognition of this). What’s even harder to compute is the relationship between a period’s depiction of conversation and the way people of that period were talking in real life. Did the average American father in 1957 truly talk to his kids the way Ward Cleaver talked to Wally and the Beaver? It doesn’t seem possible — but it was, in all likelihood, the way 1957 suburban fathers imagined they were speaking.

The way characters talk is connected to the second quality, but subtly. I classify “the machinations of the world” as the unspoken, internal rules that govern how characters exist. When these rules are illogical, the fictional world seems false; when the rules are rational, even a sci-fi fantasy realm can seem plausible. Throughout the 1970s, the most common narrative trope on a sitcom like Three’s Company or Laverne & Shirley was “the misunderstanding” — a character infers incorrect information about a different character, and that confusion drives the plot. What always felt unreal about those scenarios was the way no one ever addressed these misunderstandings aloud, even when that was the obvious solution. The flawed machinations of the seventies sitcom universe required all misunderstandings to last exactly twenty-two minutes. But when a show’s internal rules are good, the viewer is convinced that they’re seeing something close to life. When the rom-com series Catastrophe debuted on Amazon, a close friend tried to explain why the program seemed unusually true to him. “This is the first show I can ever remember,” he said, “where the characters laugh at each other’s jokes in a non-obnoxious way.” This seemingly simple idea was, in fact, pretty novel — prior to Catastrophe, individuals on sitcoms constantly made hilarious remarks that no one seemed to notice were hilarious. For decades, this was an unspoken, internal rule: No one laughs at anything. So seeing characters laugh naturally at things that were plainly funny was a new level of realness.

‘How I Met Your Mother’ (Getty Images)

The way a TV show is photographed and staged (this is point number three) involves industrial attributes that take advantage of viewers’ preexisting familiarity with the medium: When a fictional drama is filmed like a news documentary, audiences unconsciously absorb the action as extra-authentic (a scene shot from a single mobile perspective, like most of Friday Night Lights, always feels closer to reality than scenes captured with three stationary cameras, like most of How I Met Your Mother). It’s a technical choice that aligns with the fourth criterion, the extent to which the public recognition of authenticity informs the show’s success (a realization that didn’t happen in earnest until the 1980s, with shows like Hill Street Blues). Now, it’s possible that — in two hundred fifty years — those last two points may be less meaningful to whoever is excavating these artifacts. Viewers with no relationship to TV won’t be fooled by the perspective of the camera, and people living in a different time period won’t intuitively sense the relationship between the world they’re seeing and the world that was. But these points will still matter a little, because all four qualities are interrelated. They amplify each other. And whatever television program exemplifies these four qualities most successfully will ultimately have the most usefulness to whatever future people end up watching it. For these (yet-to-be-conceived) cultural historians, TV will be a portal into the past. It will be a way to psychically contact the late twentieth century with an intimacy and depth that can only come from visual fiction, without any need for imagination or speculation. It won’t be a personal, interpretive experience, like reading a book; it will be like the book is alive. Nothing will need to be mentally conjured. The semi-ancient world will just be there, moving and speaking in front of them, unchanged by the sands of time.

All of which leads to one central question: What TV show will this be?

Removed from context, it’s a question that can also be asked like this: What is the realest fake thing we’ve ever made on purpose?

I’m (slightly, but not really) embarrassed to admit that this is an inquiry I’ve been thinking about for my entire life, years before I ever had a financial incentive to do so. It is inexplicably hardwired into my brain. For as long as I can remember, whenever I watch any scripted TV show, part of my consciousness interrogates its relationship to reality. “Could this happen? Does this look the way it would actually look? Does this work the way it would actually work?” It does not matter if the details are factually impossible — if I’m watching Game of Thrones, I can readily accept that dragons exist. Yet I still wonder if the dragons on my TV are behaving in the way I believe real dragons would behave in reality. I still question the veracity of those dragons, and I instinctively analyze the real-world plausibility of a scenario that’s patently impossible. This is just the way I am, and I never had to try.

So I am ready for this question.

(And I’d better be, since I appear to be the only person asking it.)

The first candidate to consider — and the easiest candidate to discount — is reality television. As a genre, the social and generational importance of these shows is vastly underrated; they are postmodern picture windows. But they’re pretty worthless at demonstrating the one quality they all purport to deliver. Even if we take The Hills and Storage Wars and Keeping Up with the Kardashians at face value — that is to say, even if we’re willing to accept (or pretend) that these are normal people, behaving naturally in unnatural circumstances — the visual presentation makes no attempt at masking the falseness of the staging or the contrived banality of the conflicts. Nothing on TV looks faker than failed attempts at realism. A show like The Bachelor is instantly recognized (by pretty much everyone, including its intended audience) as a prefab version of how such events might theoretically play out in a distant actuality. No television show has ever had a more paradoxical title than MTV’s The Real World, which proved to be the paradoxical foundation of its success.

Programming that nakedly operates as a subcultural roman à clef actually gets a little closer. The early twenty-first century spawned a glut of these series: Empire (a fictionalized portrait of the “urban” music industry) and Entourage (a fictionalized portrait of the celebrity industry) were the most successful attempts, but others include Nashville (centered on the country music scene), Ballers (the post-NFL brain economy), UnREAL (the reality of reality TV), and Silicon Valley (a satire of the Bay Area tech bubble). None of these programs claim to depict actual events, but all compel viewers to connect characters with the real people who inspired them. The star of Empire is some inexact synthesis of Jay Z, Suge Knight, and Berry Gordy. The protagonist in Entourage was supposed to be a version of Entourage producer Mark Wahlberg, had Wahlberg experienced Leonardo DiCaprio’s career. There’s a venture capitalist on Silicon Valley based (at least partially) on a melding of billionaire Mark Cuban and online entrepreneur Sean Parker. Part of the pleasure these programs provide is an opportunity to make these Xerox associations — and once the connections calcify in viewers’ heads, they can effortlessly inject living public figures into fake story lines [footnote 4]. That intellectual transfer makes this programming far more watchable than the writing justifies. But this essential process, somewhat ironically, erodes the level of realism. It exaggerates every narrative detail and forces the characters to unload bushels of awkward exposition, simply because casual viewers won’t make those subtextual connections without heavy-handed guidance. Beyond a few key exceptions, simulacrum shows are soap operas, marketed as fantasies, geared toward mass audiences who don’t want to think very hard about what they’re watching. Characters need to invent ways to say, “This is who I’m supposed to be,” without saying so directly. Nothing in a simulacrum is accidental, so you end up with the opposite of naturalism: It’s bogus inside baseball, designed for outsiders who didn’t know anything to begin with. You can’t be real by trying to be real.

“Aha,” you might say to yourself after reading the previous sentence. “If you can’t be real by trying to be real, the inverse must be the answer. The path to TV realness must involve trying to be fake on purpose.” Well, not quite — although it does get closer. Television shows that make no attempt at tracing reality hold up better over time: the best episodes of The Twilight Zone, early Fox experiments like Herman’s Head and Get a Life, the stridently meta It’s Garry Shandling’s Show, and anything featuring Muppets. If a piece of art openly defines itself as 90 percent fake, whatever remains is legitimized (and it’s that final 10 percent that matters most). But a self-aware vehicle like Community or Mr. Show still collides with the reality-killing property of self-serious programs like Homeland or St. Elsewhere — premeditated consciousness. The former takes advantage of people’s knowledge that TV is not real; the latter does whatever it can to make people forget that this unreality is something they recognize. In both cases, the effort exposes the hand. For this to work, the people creating the TV program can’t be thinking about how real (or how unreal) the product seems. They need to be concerned with other issues, so that the realness is just the residue. And this kind of unintentional residue used to build up all the time, before TV decided to get good.

What I’m talking about, in essence, is a disrespected thirty-five-year window of time. The first Golden Age of Television started in the late 1940s and lasted until the demise of Playhouse 90 in 1960; this was a period when the newness of TV allowed for unprecedented innovations in populist entertainment. The second Golden Age of Television started in the late 1990s (with The Sopranos and Freaks and Geeks and the mass metabolizing of Seinfeld) and is just now starting to fade; this is a period when television was taken as seriously as film and literature. But as a reality hunter with a reality hunger, my thinking occupies the dark years in between. Throughout the 1970s and ’80s, watching TV was just what people did when there was nothing else to do. The idea of “appointment television” would have been considered absurd — if you missed a show, you missed it. It was not something to worry about. The family television was simply an appliance — a cathode box with the mentality of a mammary gland, actively converting couch owners into potatoes. To genuinely care about TV certified someone as a dullard, even to the dullards in the band Black Flag. This perception turned television into a pure commodity. The people writing and producing the shows were still smart and creative, but they were far less concerned with aesthetics or mechanics. There was no expectation that audiences would believe what they were seeing, so they just tried to entertain people (and to occasionally “confront them” with social issues). From a linguistic standpoint, this allowed for a colossal leap in realism. Particularly with the work of Norman Lear, the creator of long-running, heavily syndicated shows like All in the Family, The Jeffersons, Good Times, and One Day at a Time, it became possible for characters on television to use language that vaguely resembled that of actual humanoids. The only problem was that these productions still had the visual falseness of thirty-minute theatrical plays. The sets were constant reminders that this was not life. Archie and Edith Bunker’s living room furniture already resembled the museum installation it would eventually become. George Jefferson and Ann Romano [footnote 5] seemed more like symbols than citizens. It was not until the late 1980s that the residue really stuck, and most of it stuck to one specific vehicle: Roseanne. It wasn’t perfect, it wasn’t reasonable, and — sometimes — it wasn’t even clever. But Roseanne was the most accidentally realistic TV show there ever was.

‘Roseanne’ (Getty Images)

The premise of Roseanne was not complex. Over time, it adopted an unrepentant ideology about gender and oppression. But that was not how it started. It was, in many ways, an inverted mirror of The Cosby Show: If The Cosby Show was an attempt to show that black families weren’t necessarily poor and underprivileged, Roseanne was an attempt to show how white families weren’t necessarily rich and functional. The show was built around (and subsequently named after) Roseanne Barr, a domineering comedic force from Colorado who did not give a fuck about any vision that wasn’t her own. John Goodman was cast as her husband. By the standards of TV, both of these people were wildly overweight. Yet what made Roseanne atypical was how rarely those weight issues were discussed. Roseanne was the first American TV show comfortable with the statistical reality that most Americans are fat. And it placed these fat people in a messy house, with most of the key interpersonal conversations happening in the kitchen or the garage or the laundry room. These fat people had three nongorgeous kids, and the kids complained constantly, and two of them were weird and one never smiled. Everything about Roseanne looked right. The house looked chaotic and unfinished — it looked like it had been decorated by people who were trying to trick themselves into believing they didn’t have a shitty house.

Roseanne ran for nine seasons, and the dialogue changed considerably over that span. The (popular) early years were structurally similar to other sitcoms; the (unpopular) final season was the equivalent of a twenty-four-episode dream sequence that canceled out almost everything that had come before. But there was realness residue from start to finish. Episodes would conclude with jarring, unresolved arguments. Barr was an untrained actress working with veteran performers, so scenes sometimes felt half rehearsed (not improvised, but uncontained by the normal rules of TV). There appeared to be no parameters on what could qualify as a normal conversation: An episode from the eighth season includes a sequence where Barr sits in the passenger seat of a car, reading Bikini Kill lyrics aloud. If these details strike you as immaterial, I understand — when described on paper, examples of ancillary verisimilitude usually sound like minor mistakes or illogical choices. And sometimes, that’s what they are — essential flaws that link a false reality to the real one.

So what does this mean? Am I arguing that future generations will watch Roseanne and recognize its genius? Am I arguing that they should watch it, for reasons our current generation can’t fully appreciate? Am I arguing that future generations might watch it, and (almost coincidentally) have a better understanding of our contemporary reality, even if they don’t realize it?

I don’t know.

I really don’t. It’s possible this debate doesn’t even belong in this book, or that it should be its own book. It’s a phenomenon with no willful intent and no discernible result. I’m not satisfied with what my conclusion says about the nature of realism. But I know this matters. I know there is something critical here we’re underestimating, and it has to do with television’s ability to make the present tense exist forever, in a way no other medium ever has. It’s not disposable, even if we want it to be. And someday, future potatoes will prove this.

Footnote 1: There’s a temptation to argue that television is part of a continuum, and that it represents the second step in a technological ladder that starts with radio and will continue through whatever mode eventually usurps network TV. There is, certainly, a mechanical lineage (the Paley Center for Media was originally known as the Museum of Television and Radio). But anecdotally, this will never happen. We will not connect the content of television with the content of whatever replaces it. The two experiences will be aesthetically incomparable, in the same way that TV and radio are incomparable. Over time, society simply stopped connecting the content of the radio era with the content of the TV era, even though many performers worked on both platforms and the original three networks started as radio outlets; from a consumer perspective, they just felt different, even when trafficking in the same milieu. For example, sitcoms were invented for radio. There were situation comedies on radio long before even the richest Americans owned TVs, and that includes a few sitcoms that were conceived on radio and jumped to the tube. But the experience of watching a sitcom was totally alien to the experience of hearing a sitcom. It altered things so much that the second definition became the universal definition. By 1980, using the word “sitcom” to describe anything that wasn’t a TV show required explanation. Its origin in radio is irrelevant, and we would never compare Cheers or M*A*S*H to something like Fibber McGee and Molly. They have a mechanical relationship, but not a practical one. They seem entwined only to the specific generation of people who happened to live through the transition. [Return to main text.]

Footnote 2: This, somewhat obviously, requires the mental evasion of certain critical details — the ancient Egyptians didn’t have electricity, they didn’t invent the camera, and it would still be at least 5,200 years before the birth of Shonda Rhimes. But don’t worry about the technical issues. Just assume the TVs ran on solar power and involved the condensation of river water and were sanctioned by Ra. [Return to main text.]

Footnote 3: I should note again that there’s also a popular line of thinking that argues against this type of realism. Some screenwriters feel that directly using an explicit example of any non-essential object dates the material and amplifies the significance of something that doesn’t really matter to the story; in other words, having a character ask for a specific brand name like “Heineken” (instead of the generic “beer”) forces the audience to notice the beverage a little too much, which might prompt them to read something into that transaction that detracts from the story. It imposes a meaning onto Heineken as a brand. But remember: If we’re looking backward from a distant future, we don’t care about the story, anyway. We want the scene to be dated. [Return to main text.]

Footnote 4: When a record producer on Nashville (“Liam McGuinnis”) was introduced into the story line, he appeared to be directly modeled after musician (and current Nashville resident) Jack White. I now see “Jack White” in every scene involving this character, which is unintentionally hilarious, especially since he constantly does things Jack White would never do, such as have sex with Connie Britton (a.k.a. “Rayna Jaymes,” who is 60 percent Reba McEntire, 25 percent Sara Evans, and 15 percent Faith Hill). [Return to main text.]

Footnote 5: As I note these characters, I find myself wondering how confusing it must be for readers born in (say) 1995 to contextualize the meaning of TV personalities from TV programs they’ve never even heard of. But something I’ve learned from lecturing at colleges is that young people read nonfiction books very differently from the way I once did; they instantaneously Google any cultural reference they don’t immediately comprehend. Learning about the life of Ann Romano is no different from learning about the life of Abe Lincoln. Due to Wikipedia, they’re both historical figures. [Return to main text.]

Excerpted from But What If We’re Wrong? by Chuck Klosterman. Copyright (c) 2016 by Chuck Klosterman. Reprinted by permission of Blue Rider Press.