Rethinking the Culture Wars

By Rochelle Gurstein

  The long-raging Culture Wars have now made their way into our primary and secondary schools. This is the news Meghan Cox Gurdon, the children’s book critic for the Wall Street Journal, brings to us in sounding the alarm about the #DisruptTexts movement: “We now have a situation where English teachers in American schools oppose the teaching of Shakespeare lest students be hurt by the violence, misogyny, and racism in his plays.” On the one side are “activist teachers, critical-theory ideologues, and hashtag iconoclasts” who seek to dismantle “the white canon”—“the structure of privilege” that perpetuates “white supremacy and colonization”—in favor of a “more inclusive, representative, and equitable language arts curriculum that our students deserve” in which they “see themselves (with ‘the self’ defined according to narrow identity categories) mirrored”; on the other are people like Gurdon who tirelessly defend Shakespeare and “all classic works” as “humanistic expressions that transcend cultural boundaries,” as “humane, experimental, challenging stories that have earned their place in the literary pantheon because they have been so well-loved and influential.” That anyone who reads magazines like Salmagundi could have written a close facsimile of this description in advance speaks to how predictable this dispute, which had its start in the 1960s and more or less took its current brittle shape by the early ’90s, has become. It is high time, then, that we reconsider some of its foundational concepts.
  The first thing to notice is that the question raised by advocates of #DisruptTexts—why should teachers attempt to “connect” “nonwhite children” with colonial writers like Shakespeare who are harmful to their sense of self-esteem?—is a politically charged reworking of a more fundamental problem that neither party to today’s dispute seems aware of. One of its earliest statements appears in an article in the Nation in 1908, “The Teaching of Literature at College,” by a young Renaissance English professor at Amherst College, John Erskine. In the opening sentence, he announced, “We are familiar with the charge that young people no longer read good books. The average young person today seems to have no habitual recourse to that selected and stored-up wisdom of the race, which until our day has been a chief means and effect of culture.” This was the situation Erskine was facing in the college classroom: on the one hand, there was the persistence of long-admired books that had, since the time of Matthew Arnold, been regarded as the wellspring of culture; and on the other, these very same books had become “dead” to “life-loving youth.” Erskine was in quest of the middle term: what would make it possible for young readers to “see the life stored up” in them?
  He identified two “obstacles” and, as any teacher knows, they are still with us today. The first was the ceaseless flux of time, which always threatens to make the subject of a book “remote or strange.” The second spoke to a deficiency belonging to “young people” in particular: “they are imaginatively immature” and thus “unable to grasp the recorded experience” of the author. Erskine was convinced that his colleagues’ historical, philological, and biographical scholarship only put additional barriers between the pupil and the book. What was needed instead were teachers whose “enthusiasm in the classroom for the old favorites is unashamed.” The teacher’s “business” was “to supply,” whenever necessary, “an historical or imaginative approach to the book.” To which he added a declaration that would become a sacred article of faith of Great Books programs: “Beyond that it should appeal for itself.” The following year Erskine began teaching at Columbia College and he soon put this faith into action with a two-year “General Honors” seminar for selected juniors and seniors. (This seminar was systematized and expanded by Mortimer J. Adler and Robert Maynard Hutchins at the University of Chicago into the Great Books program and became a central component of the curriculum in many American colleges through the 1950s.) Erskine wanted “the boys to read great books, the best sellers of ancient times, as spontaneously and humanly as they would read current best sellers, and having read the books, I wanted them to form their opinions at once in a free-for-all discussion.”
  At the foundation of Erskine’s faith that great books teach themselves was the humanist axiom that human nature is everywhere and always the same. In a later essay, “On Reading Great Books” (1928), Erskine advanced a pared-down, biological conception of this axiom, which he spoke of as the elemental “facts of life” that belong to us all by virtue of being human: “birth, death, hunger, love, hate, the two sexes for ever facing their own attractions and antipathies.” To which he immediately added, “It is a fair guess that men will recognize these elements of experience for a long while.” This, in its popular form, is the rationale underlying the timeless, universal classic. Books that speak “spontaneously and humanly” to readers do so because they address these common “elements of experience”: “This is why we rank Homer and Virgil and Dante, Shakespeare, Chaucer, Cervantes and Molière so high—because they still say so much, even to people of an altogether foreign culture, a different past, an opposed philosophy.”
  Today it is little known that “classic” used to mean exemplar, a work that embodied the excellences of its practice and was taken by practitioners as a model for imitation. The classic was the single standard of taste, and so long as authors and readers felt themselves at one with the aspirations, ideals, and methods of the practice, the classic had authority and legitimacy. The authors named by Erskine attained that status. But, by the twentieth century, fewer and fewer writers and readers had intimate knowledge of or any living connection to the particular literary practice and tradition which each of these writers had long exemplified. That is what Erskine acknowledged when he spoke of modern-day readers as “people of an altogether foreign culture, a different past, an opposed philosophy.” To overcome—or better yet, transcend—such enormous discrepancies between the author and the reader, latter-day humanists like Erskine came up with the ingenious idea that great books treat the perennial themes and problems of human nature. The universality of human nature is what would keep “the classics of the Western world” alive once the basis of their former appreciation had receded from common memory. It is important for those of us (like myself) who believe that classics endure because they illuminate the deepest reaches of experience—or, in addition, because of their artistic intensity—to recognize how recent this idea is; the classic as timeless and universal does not exist before the turn of the twentieth century. And we also need to recognize that it grew out of an intellectual void. The timeless, universal classic, at least in retrospect, is best understood as a saving action.
  From the moment Erskine proposed his seminar and described his pedagogy, his colleagues at Columbia objected that such a “superficial” approach could only produce a “smattering of knowledge” in students. And over the years, Erskine’s approach was criticized for being partial and ultimately distorting, for it constructed a tradition—“the Western tradition”—out of a single aspect of a great writer’s oeuvre—how he or she treated an elemental problem of human nature—and then made all great writers who allegedly dealt with that problem contemporaneous, no matter their original concerns or the larger intellectual, aesthetic, moral, political, or theological context in which they were originally at home. By the 1950s, scholars in English and comparative literature departments were increasingly in conflict over the assumptions underlying their disciplines. There was a quarrel between New Critics, with their creed of aesthetic autonomy, and those who favored a more historical approach, as well as disagreements with colleagues who had adopted the decidedly un-humanistic methods of the natural and social sciences.
  Still, R. S. Crane, in an extensive review of “the idea of the humanities” (the title of his 1953 essay), had no trouble identifying what humanistic study had been: “the ineradicable desire of men in all ages to understand and profit from what men have done when they have shown themselves to be most peculiarly human.” But he also felt compelled to raise a question that had never before occurred to teachers of the humanities and thus had never been satisfactorily answered: “How does the study of the humanities humanize?” That a humanist of Crane’s stature could formulate this question—and a number of his contemporaries were raising similar questions—does not quite indicate the same level of crisis as theologians discussing whether God is dead, but it was a sign of the increasingly disordered condition of the humanities and, with it, of the idea of the timeless, universal classic.
  These doubts and rival approaches within the academy were being played out against the background of a capitalist, technological, scientific, mass society that had long been (and continues to be) dismissive of or actively hostile to the study of the humanities. And then in the 1960s, the Great Books program came under siege from another and unexpected quarter: disaffected students, who demanded a revamped curriculum that would offer classes “relevant” to them and who found enough support among faculty members that, by the end of the ’60s, the first programs in Women’s and Black studies were established. In 1971, Lionel Trilling, as he looked back at the rise to prominence of the “Great Books” program initiated by Erskine, announced, “It is now, I need scarcely remark, in eclipse. Even in Columbia College it is in the process of being attenuated and I believe that it will soon be wholly rejected.” Trilling, who had taken the General Honors course as a student, spoke of it as a “salutary and decisive experience.” Reading the “classics of the Western world” showed him and his fellow classmates “how they might escape from the limitations of their middle-class or lower-middle-class upbringings by putting before them great models of thought, feeling, and imagination, and great issues which suggested the close interrelation of the private and personal life with the public life, with life in society.”
  Trilling’s prediction that Great Books courses would be “wholly rejected” turned out to be premature, even as controversies about them have continued unabated. One of the early flashpoints in our current Culture Wars was ignited by Allan Bloom’s polemical defense of the Great Books, The Closing of the American Mind: How Higher Education Has Failed Democracy and Impoverished the Souls of Today’s Students (1987), which became a surprising bestseller. Back then, Bloom’s primary target was the moral relativism plaguing elite institutions. He blamed movements within humanistic disciplines (e.g., deconstructionism in literature, logical positivism in philosophy) and the larger cultural forces (mass-produced culture, in particular rock music) that had shaped the students he was teaching at the University of Chicago. That same year, another controversy erupted at Stanford University over its “Western Culture” program when radical students and a new generation of faculty members criticized its design as too “male-dominated” and “Eurocentric” and demanded a more “diverse,” “multicultural,” “global” set of texts. Like Bloom’s book, it surprisingly entered the public spotlight, as the New York Times and other popular media devoted extended coverage to it. Readers of Salmagundi do not need to be reminded of the many battles of this sort that have convulsed institutions of higher learning, art museums, and the culture at large since then. But it is worth noticing that defenders of the Great Books no longer speak with the authority and assurance of Allan Bloom; neither do they take as much pleasure, at least not in public, in ridiculing their opponents, as Harold Bloom did in his Western Canon (1994), when he identified the enemy as the “School of Resentment”—not only advocates of deconstruction but also of multiculturalism, Marxism, feminism, neo-conservatism, Afrocentrism, and the New Historicism. Today Bloom would no doubt add critical race theory to his list.
  What is now clear is that for the last sixty years not only “imaginatively immature” students (as Erskine described them) but an increasing number of college professors have felt a gap—ever-growing and apparently irremediable—between the classic works of literature they are supposed to teach (and read) and their own personal experiences and the social and political preoccupations of their time. To them, the justifications that champions of the classics have offered since the beginning of the last century sound arbitrary, hollow, or, more perniciously, like alibis for maintaining existing power relations. The “activist teachers, critical-theory ideologues, and hashtag iconoclasts” who oppose teaching Shakespeare do not believe that human nature, boiled down to its most elemental, essentialist components, is universal. They take for granted that it is “socially constructed” and that humanist justifications of the canon rooted in it exclude—and oppress—them and their students.
  In consequence, they are insensible to what Gurdon describes as the “near-miracle of invention, wit, and pathos” of Shakespeare’s plays and deaf to the music of his poetry. All that is tragic and moving, comic and delightful, mysterious, beautiful, humane in Shakespeare (this list could go on) is spoiled by the sexism, racism, homophobia, and cultural imperialism of what in the ’80s was called the “elitist” ideology of “dead white European males” and what today goes under the more all-encompassing rubric of “the white canon.” They do not care when they are told that they are blinded by their prejudices; they are not moved when they are berated for unfairly holding authors of the past to standards alien to them; they are not insulted when they are criticized as temporally provincial; they are not concerned when they are warned that to banish authors who were loved for so long is to deprive ourselves and our culture of the dimension of depth. They know they have progress on their side and the moral urgency of their cause.
  This of course is not the first time in history that authors and readers—and artists and art lovers—have repudiated the status quo, though it is hard to find a pitched battle over the canon before the nineteenth century. I have recently completed a book that traces the history of the idea of the timeless, universal classic in the realm of art, and I want to turn briefly to an earlier, defining episode. That is when modern art, as exemplified by Cezanne, Van Gogh, Gauguin, Picasso, and Matisse, first appeared in the English-speaking world in a now-famous exhibition, “Manet and the Post-Impressionists,” that Roger Fry organized at the Grafton Galleries in London in 1910. Fry reported that cultivated viewers, accustomed to admiring “the skill with which the artist produced illusion” in a picture, were scandalized—insulted, outraged—by pictures that left behind the “descriptive imitation of natural forms,” which had been the aim of art since the Renaissance. Fry was adamant that this was the wrong lens through which to view modern artists: “They do not seek to imitate form, but to create form; not to imitate life, but to find an equivalent for life.” As Fry tried to capture what this “equivalent for life” looks like, we learn that “form,” for modern artists, does not refer to representations of anything outside the picture—nature, objects, persons, or events in real life—but rather to the internally consistent, self-contained system of the created reality of the picture: “By that I mean that they wish to make images which by the clearness of their logical structure, and by their closely knit unity of texture, shall appeal to our disinterested and contemplative imagination with something of the same vividness as the things of actual life appeal to our practical activities.”
  There is a great deal more to be said about this exhibition, but for my present purposes, I want to note the reaction of Robert Ross, the art critic for the Morning Post, who denounced it as “a widespread plot to destroy the whole fabric of European painting.” Ross, a friend to Oscar Wilde and Fry, was anything but a stuffy Victorian, yet when it came to modern art, he had no idea he was on the wrong side of history—and he was not alone among cultivated viewers. The “revolutionary” art of Cezanne and the many soon-to-be famous artists who followed in his path did nothing less, as Fry put it, than redefine “the very purpose and aim as well as the methods of pictorial and plastic art.” In Fry’s pathbreaking volume of essays, Vision and Design (1920), we find this “revolution” extended to the very idea of what constituted art. Fry believed the principles artists discovered in their search to express “the sensibilities of the modern outlook”—“the principles of structural design and harmony”—belonged to all works of art, no matter whether the object was made “seven hundred years ago in China or in New York yesterday.” With this new awareness of the primacy of “pure form,” painters and objects that had previously been found deficient by the old “standard of skill in representation” could for the first time be properly appreciated. Through painstaking analysis of “the spatial relations of plastic volumes” of particular works, Fry taught his readers to appreciate not only early Christian fresco-painters who came before the wrong turn of the “fervid pursuit of naturalistic representation” of the Renaissance—Cimabue, Duccio, Giotto, Bellini, Durer, et al.—but also works that had no relation to the Renaissance or to the classical sculpture that had been its foundation—South African and Australian “Bushmen” drawings, “Paleolithic art,” African, Mayan, and Aztec sculpture, Chinese ceramics and bronzes, Japanese ink drawings and screen paintings.
  Fry’s “purely aesthetic criteria” thus dramatically reshaped the canon. And this new strictly formal approach also made it possible for him to resuscitate, for a brief period, a few Renaissance masterpieces, like Raphael’s long-revered Transfiguration, which had been spoiled for modern viewers by its religious subject and the “rhetorical insincerity” of its classical style. Here we see how aesthetic autonomy played the same role in saving long-beloved but threadbare masterpieces in art that the constancy of human nature played in saving fading great books. With this exclusive focus on the disinterested contemplation of form came a distillation and intensification of aesthetic feeling never before experienced by earlier art lovers. But, as was the case with the move to the constancy of human nature as the justification of Great Books, formal analysis came into existence because modern viewers no longer embraced or even knew the original artistic aims or, even more pointedly, the original religious, ritual, or quotidian uses of objects from so many different places, times, and traditions. It, too, grew out of an intellectual void.
  Paradigm shifts of this magnitude are consequential. Artists stopped making grand figurative marble sculpture and grand figurative history paintings, long held to be the highest genres in the classical hierarchy of genres. And few people today can conjure an image of the Venus de’ Medici, the Apollo Belvedere, or the Laocoön, the ancient sculptures rediscovered during the great building projects of the Renaissance that became the exemplars of ideal beauty for four centuries, nor do they have a vivid impression of a fresco by Raphael, who, during those same four centuries, was revered above Michelangelo and Leonardo—today’s Renaissance favorites. Practices of art and treasured works—along with the many drawings, paintings, sculptures, poems, literary appreciations, and quarrels that they inspired—disappear, even if new ones take their place. The question raised by our seemingly interminable quarrel over the canon is whether we are living through a similar decisive moment. By the 1980s, intellectual and aesthetic shifts were so pronounced that a number of theorists could declare that we had entered a new era, “postmodernism.” Its champions confidently characterized it as anti-humanist and anti-aesthetic—that is, they repudiated the justifications for the classic that modern scholars and art lovers had believed were timeless and universal. Whether this shift will be as consequential as the one that preceded it remains to be seen.