On Authority: Why Paying Attention to How We Pay Attention Matters Most

The world did not sit idly by while I studied for my final doctoral exam this summer. While I read nineteenth-century science textbooks, philosophical treatises, works of natural theology, university lectures, experiment write-ups, and a range of fictional accounts involving the natural world, violence swelled about me—about us all—in its many awful forms.

Outside mainstream news, I knew Syria, Myanmar, and the Central African Republic were still sites of strife and potential genocide. Meanwhile other brutalities, like news of the kidnapped young women in Nigeria (who for the most part remain prisoners today, along with newer victims of Boko Haram), had begun to lose their urgency in the Western press—still horrific, but nowhere near as immediate as word of the tortured and murdered teens initially said to have instigated fresh hostilities along the Gaza Strip. Against the photo essays that emerged from Israel’s ensuing airstrikes—the pain and the loss and the fear writ large on so many faces—events elsewhere took an inevitably muted role in Western coverage.

Even macabre word of a new organization sweeping across Iraq, openly murdering religious minorities or otherwise cutting off access to resources and regional escape, did not gain as much media traction for most of the summer, despite these events prompting forms of American aid-based intervention. Rather, it would take the beheadings of American and British citizens for ISIS to take centre stage in Western media—a position it currently occupies alongside a sweeping NFL scandal in which celebrity sports stand indicted for their role in a broader culture of domestic abuse.

Granted, the US had other issues in the interim between Israel and Iraq, with widely resonant protests emerging from Ferguson, Missouri, after police officer Darren Wilson shot and killed Michael Brown on August 9. (For some sense of the event’s global reach, consider that Palestinians were sending online tips to American protestors in the wake of aggressive anti-demonstration tactics.) And while the racialized outcry underlying this situation should have come as no surprise to most, the “worst Ebola outbreak in history”—which reached Global Health Emergency status just days before Ferguson, and has continued to spread since—revealed a systemic battleground all its own: A vaccine exists. Canada has “donated” 800–1,000 doses. The WHO will “decide” how these are dispersed.

(And speaking of Canada, lest I be labelled one of those northerners who calls attention to US crises without looking inward: This summer’s news also brought into focus our own systemic issues with indigenous communities—peoples at higher risk of lifelong poverty, incarceration, all manner of abuse and disease, going missing, and turning up dead.)

The above are by no means overtly malevolent facts—the WHO, for instance, is attempting “ring vaccination” while efforts to accelerate drug production proceed—but the systems of power these terms invoke do exemplify the very status quo of global disparity (in overall affluence, levels of health education, and resource mobility) that foments such virulent outbreaks in the first place. There is violence, in other words, even in systems that seem orderly and objective, and we cannot ever discount the role that language plays in reinforcing a deadly world.

With this in mind, I was struck this summer by both how much and how little attention the rhetoric of authority claims received amid this coverage. In the “much” column we have, for instance, the work of Gilad Lotan, who crunched an immense amount of social media data to identify the information silos in which followers of the Israeli-Gazan conflict tended to position themselves—each “side” receiving and reinforcing different news items through like-minded media outlets. We also have reflections like that of John Macpherson, who explored the professional tension between objectivity and emotion in photojournalism, and just recently, a poll of Ferguson’s citizens, which indicates an extreme racial divide among individuals tasked with interpreting the events of August 9.

But underlying these pieces is also a paucity of self-reflection: Lotan’s data sets would not be as impressive if the vast bulk of readers were not so starkly divided in their consumption of news media. Nor would the recent Ferguson poll pack quite the wallop without so many participants deciding definitively, incompatibly, and above all else culturally what happened between Darren Wilson and Michael Brown.

I should not be surprised, granted: The study of “how we know what we know” is a difficult one even when we raise up children with the vocabulary to consider such questions (whatever that vocabulary might be: a point to be revisited below)—and an almost Herculean task once we, as adults, have settled into a way of thinking that seems to serve us well.

Indeed, though I spent a great deal of time thinking abstractly about knowledge this summer, I also often found myself confronted by articles so provocative, I had to share them immediately, and to rail vehemently about some key point therein. Each time I indulged, though, one of two things happened: I either did research soon after that undermined the legitimacy of the original piece, or found myself too deeply affected on an emotional level to engage with earnest responses in any other register.

Knee-jerk reactions are, of course, common, and understandably so. In a practical, everyday sense, we almost by necessity draw conclusions that gesture towards deductive reasoning, while actually better resembling glorified gut instinct: The sun will come up because it always comes up. You just missed the bus because you always just miss the bus. That danged table leg caught your toe because it always catches your toe. And so the fuzzy thinking builds up.

Above all else, we acclimate to what is familiar. We grow comfortable in whatever strange, personal assumptions have long gone uncontested. Our propensity for confirmation bias then fills in the gaps: We know that an article retraction never fully dislodges belief in the original article. We know that narratives aligning with pre-existing beliefs will continue to be referenced even when shown to be untrue. We know that following an event in progress, acquiring facts as they unfold, invariably leaves us with a great deal of inaccurate information that proves difficult to shake.

So it is with human beings, who have jobs to go to, children to raise, relatives and friends to attend to, and the self to care for after hours. How might we even begin to overcome such shortcomings when the cadence of our lives often seems antithetical to deep reflection?

The dangerous answer has often been the unquestioned affectation of an orderly and objective system of thought. This differs from an actually objective system of thought—an unattainable ideal—in that, even if we can provide a full and logical progression from propositions A to B to C, the validity of these propositions will still inevitably be tied to their cultural context, from which more of that messy, glorified gut instinct emerges.

As a doctoral student, I study science writing in the nineteenth century, so I have seen this flawed affectation play out throughout history. In particular, I can demonstrate how the rhetorical strategies of inference and analogy allow respected authors to leap from specific sets of empirical knowledge to broader, un-evidenced claims that just happen to match up with the authors’ pre-existing views on, say, “savage” cultures or female intellect. This is by no means a mark against empirical evidence; human beings are simply very good at making unjustified connections between data sets, especially when such connections can reaffirm what one already believes.

Similarly, in the Ferguson shooting and Gazan conflict this summer, “facts” quickly flew into the realm of inference, with terms like “witness” and “casualty” and “discrepancy” taking on markedly different characters depending on their source. Just as Michael Brown underwent three autopsies, so too has all manner of “hard” data in both cases been sifted through to exaggerated effect, and always with that human inclination towards finding a story that fits by any means, however loosely deductive.

In short, the danger of affected objectivity is that it cannot exist apart from the irrationality of those human beings applying it. Nonetheless, the “fair and balanced” approach to a given situation is often positioned as the only “reasonable” path to change or justice—a claim wholly disregarding that, for millions of human beings, the groundswell of personal experience, community anecdote, and emotional outpouring is the only “truth” that ever matters in the wake of violent world events.

When dealing with the rhetoric of authority claims, and their role in how we respond to violent events, our crisis thus cannot get much simpler than this: We need to recognize ourselves as emotional beings, with emotional prejudices derived from everyday experience as much as personal trauma, when engaging with narratives of violence—narratives, that is, which by their very nature tend to be emotionally charged.

This is not about ceding the role of logic in response to serious and dramatic world and local events. Nor is this about forsaking all available evidence and refusing ever to make safe assumptions about a given situation or issue. This is simply about recognizing ourselves as predisposed filters of wide swaths of competing information, and making an effort not to act as though we have a monopoly on truth in situations involving other human beings.

This may seem straightforward, but our patterns of news consumption this summer, as well as the activist strategies that emerged in response to a variety of issues, suggest we have a long way to go. While protests tend to arise from an urgent and legitimate place of outrage, an effective response to systemic abuses must not be based solely on popular outcry, or else it risks establishing a “rule of mob” in place of “rule of law”. Conversely, though, the rhetoric of “rule of law” and affectations of objectivity go hand-in-hand: If the former is not sufficient to address a given crisis, we have to take even greater care with the sort of authority claims we accept, lest they obscure any truly drastic changes needed to better our world.

Back in Business

It’s been a long slog of a summer, and my return from vacation left me with a surprise that will be no surprise to anyone in academia: The slog never ends!

Nevertheless, I’ve been holding off on much in the way of commentary while trying to focus on my doctoral studies, and my writing has suffered for it. A writer needs to breathe in many directions, and this myopic attention to academic detail has caused more anxiety than necessary. So! It’s time to get back to posting here, to writing and submitting fiction, and to the fray of real-world debate in general. I look forward to it all.

One quick note for the moment: I celebrated a wee publication in my relative absence from this blog. “The Last Lawsuit” was published in Bastion Magazine, a newer online venue of rising acclaim. This publication of mine is the last coming down the pipe at the moment, due to my incredibly poor submissions schedule these last few months, but it was a pleasure to work with the editor-in-chief, R. Leigh Hennig (and the art and other content were pretty nice, too!). Now it’s time to knuckle down, write, and submit anew.

I hope everyone reading this has had far better luck maintaining “balance” between all their interests this last while. Best wishes to you all!

At Immortality’s End: The Bone Clocks and the Trouble with Grand Narratives


The Bone Clocks
David Mitchell
Random House

Since completing my doctoral candidacy exams last week, I’ve been reading ravenously, for pleasure. First I romped through some classic Philip K. Dick, then delighted for complicated reasons in Mary Doria Russell’s The Sparrow, and began and finished David Mitchell’s The Bone Clocks on a very delayed set of flights to San Francisco, where I’m enjoying a week’s reprieve before returning to my studies and writing in earnest.

I debated reviewing Mitchell’s latest novel, a work operating in the wake of Cloud Atlas’s immense success, and thus aspiring to the same sort of grand, unified theory of human history with fantastical elements. There were parts of the work that especially weighed against my decision to review it, not least being the tediously rehashed convictions that men and women are essentially from other planets where love, sex, marriage, and the like are concerned. It may be true for some people, but I know plenty of human beings for whom that kind of trope is beyond worn out, and I wondered how much time I wanted to give to a novel that presents such views as universal truths even when it has characters gender-flipping through the centuries.

(I should probably add, as a caveat, that some of this banality certainly belongs to the characters–one of whom, a smug 21-year-old with access to wealth and an elite education, offers up euphemisms for his sexual conquests that I don’t doubt his character would think extremely witty, but which might easily earn Mitchell a Bad Sex in Fiction Award next year. There’s a difference between individual characters espousing dated views, though, and the author providing nothing but such views across character perspectives, across time, and throwing in some straw-people to further reinforce gendered caricatures.)

Ultimately, though, I wanted to reflect on one of the themes Mitchell is clearly wrestling with, because his work is, if nothing else, “ambitious”–a term usually given to a sprawling, dreaming text when the reviewer wants to avoid answering whether or not all that ambition pays off. You might have heard of this book as the one that starts off “normal” and then descends into vampirism; this, I would say, is a wilfully obtuse way of reading an attempt to construct a different kind of supernatural villain that preys on human beings. (Then again, when Dracula was first released, some reviews referred to the monster as a “werewolf” because the vampire archetype hadn’t stabilized yet, so perhaps it’s inevitable that some people will flatten the new and strange to fit existing moulds.)

This does touch, however, on a more important question I think is being asked of Mitchell’s work, which starts in 1984 and ends in 2043: Did this book need to shift into fantasy and “cli-fi” (speculative writing predicated on climate change anxieties) in order to wrestle with its central theme?

Immortality, as much of the book grapples with the subject, is a mundane crisis in most of its incarnations: We start out with 15-year-old Holly Sykes, waking in the radiant, urgent glow of a love she is convinced will last forever. This book spans her lifetime, but never again with as much verve and intimacy as the opening section, in which this conviction is swiftly quashed and consequences follow. The next section introduces Hugo Lamb, a callous young man who has a fleeting encounter with Holly before choosing a different path towards immortality. The section after that involves a war journalist whose desire to reclaim lost histories is a toss-up between trying to leave his mark in the world and the compulsive behaviour of an adrenalin junkie. Then we enter the most self-conscious, mid-life-crisis-y section of the book: a set of chapters written about one Crispin Hershey, an author struggling with everything from vicious to lacklustre responses to the work following his last, wildly successful novel.

After this, though, a plot that’s been mounting on the sidelines finally comes to the fore. To this end “Marinus”, one of the involuntary immortals who slip from lifetime to lifetime without doing any harm to their hosts, narrates a series of escalating stand-offs against the Anchorites, a secret order of human beings who prolong their lives and gain access to immense “psychosoteric” powers by sacrificing innocent people with active chakras. So you can see why this additional immortality crisis, entirely predicated on a fantastical set of events, might be a bit jarring for some: The anxieties and approaches to immortality addressed in prior sections lie at the heart of most literary fiction: How to live a good life. How to be a good person. How to make a difference. For people seeking answers to (or at least new reflections on) such questions, Mitchell’s choice to play out this theme in an otherworldly arena is unlikely to satisfy.

Ultimately, though, I sense that Mitchell knows this, because when we return to Holly Sykes, at a time when civilization as we know it is breaking down due to climate change, her part in preceding events has passed from immediate relevance to something akin to a dream. An adventure has happened, certainly–one among many–and the course of her life has exposed her to a wide range of engagements with attempts to live well. (And if you get more than a slight whiff of The Road in ensuing proceedings, you’re certainly not alone.) But to what end?

I’m struck, then, by The Bone Clocks on a few levels, because like Cloud Atlas it builds from everyday lives and reaches for a grander narrative of human existence–and like Cloud Atlas, I think it falls short because its grander narrative is inevitably reliant on the fantastic for continuity and closure.

Self-commiserating diversions aside, though, the book offers sections of very smart prose, and Mitchell goes to great lengths to build a global narrative (this mostly-British work also spanning locales like feudal Russia, Aboriginal Australia, and post-9/11 Iraq through flashbacks, secondary narratives, and exposition dumps)–so I don’t consider Mitchell’s book incompetent by any measure. Indeed, I enjoyed large swaths of it, but I was still left with two feelings at the end: One, that the ending does not satisfy; and two, that the ending cannot satisfy, because the push to immortalize ourselves–through children, through work, through love, through fame, through heroics–is always precarious, and perhaps just as always futile.

If Mitchell’s The Bone Clocks comprises a full range of human endeavours towards immortality, then, how can it be anything but a chronicle of failures, both personal and species-wide? The question I’m left with is thus not whether Mitchell’s “ambitious” book succeeded, whatever that might mean; I simply wonder what challenges such a thematic dead-end offers up to future writers. Do any of us know just how to break the mould?

Revolution Express: One Big Train with a Whole Mess of Semi-Allegorical Parts


Snowpiercer
Joon-Ho Bong
MoHo Films / Opus Pictures

One of the most amusing reviews I’ve read of Joon-Ho Bong’s first English-language film forwards the complaint that, for a movie set within a relentlessly speeding train, there sure are a lot of still camera shots in Snowpiercer. Since this train is the entire universe for the last survivors of humanity, after a failed attempt to halt global warming turned the world into a popsicle 17 years ago, such a grievance is equivalent to complaining about any film set on Earth that doesn’t constantly remind us that we’re hurtling at 108,000 km/h around the sun, and 792,000 km/h around the centre of the galaxy.

Though there are few poor reviews of Snowpiercer, those that do exist likewise call attention, in their fascinating nitpicking, to the tenuous tightrope Bong walks between realism and allegory, especially within the Western film tradition. Abrupt tonal and mythological shifts in this narrative–which would have been completely at home in South Korean, Chinese, or Japanese cinemas–here serve as a reminder of how literalist North American scifi/action films tend to be (Elysium, for instance, and The Purge, and pretty much any apocalyptic/post-apocalyptic film in recent years, with perhaps the exception of The Book of Eli).

Snowpiercer is in fact two stories: the story of one lower-class revolt on a stringently class-based train, and a metacommentary on the closed-system nature of stories about revolution in general. I’m not speaking figuratively, either: Bong makes that second story damned clear when he repeatedly emphasizes the existence of a sketch artist among the lower-class “tail section” denizens, whose work raises to hero and martyr status those involved in resistance efforts. In short, the story of the revolt is being created alongside the revolt itself–a fact a later character will also, more blatantly, observe when All Is Revealed.

In consequence, if you simply focus on all the details in this film they will surely drive you batty: how little sense various population numbers make in their contexts; why an opening title card lies to you; why anyone with backstory X should be repulsed by the truth of the tail-section’s food source; how this speedy train takes a whole year to make one trip over its entire route; and why a machine part that performs a fairly straightforward, automatic motion could not be replaced when so many other, luxury items seem to appear from nowhere.

There are two ways to respond to these inconsistencies. The first is the route taken by many a negative reviewer–to hold these and the initial presentation of stock archetypes against the director, as signs that this film is too ridiculous and/or two-dimensional to merit serious consideration. The second is to remember that the entire premise of this film is completely bonkers–a massive train filled with the last of humanity, which makes a circuit of the whole world at breakneck speed during apocalyptic fallout–and then to assume that the director is aware of this absurdity, and to start paying attention to other weird elements therein. These include, but are not limited to: the mystic sensibilities of Yona (Ah-sung Ko); the hyperbolic absurdity in certain cars (e.g. the New Year’s scene, and the classroom car); a synchronicity between the mood in a given first-class car and the concurrent mindset of our presumed hero, Curtis (Chris Evans); the fight scenes that immediately become legend; the seeming impossibility of killing off certain bad guys; and the staging of various human beings in thematic tableau (right down to colour contrasts) for the final stand-off.

Read together, these two stories make clear that Snowpiercer is meant as an allegory–and not just for class struggle in our world but also for how class struggle can itself be a tool of oppression. As an indictment of audience expectations, Bong’s latest also has something to say about how the conventional yearning for new leadership is in many ways no revolution at all. He does this, too, while having characters resist both Western and Eastern conventions: While Yona’s father (Kang-ho Song) tries to preserve her in the role of passive participant (following a men-act/women-dream motif that emerges in quite a bit of South Korean cinema), Yona breaks from this role in the third act, while Octavia Spencer, as Tanya, gets to be part of the revolt all throughout (instead of wringing her hands while strapping menfolk try to recover her child). There’s definitely something to be admired, too, about not treating a wildly improbable scifi/action premise as anything more than mythopoetic backdrop.

However, much as I disagree with the nitpicking of negative reviews for this film, and much as I see myself watching this film a second time down the line, I did find the quest template so mundane at times that the film never quite stabilizes on this higher, allegorical plateau. Yes, Snowpiercer shatters all its stock hero archetypes in the end, but for most of the film Curtis is still the young, reluctant leader of a revolution against Wilford (Ed Harris), the man who built and runs the train, while John Hurt plays a fairly standard Wise Old Mentor in Gilliam, and Jamie Bell, your standard young recruit to the cause (Edgar).

Moreover, one first-class enemy, Mason (Tilda Swinton), does not acquire full coherence as a character until a gesture during a provocative early speech is repeated near the film’s end; and even then, if you accept what’s implied about her character at that juncture, it means accepting yet another inconsistency–this one related to Curtis’s supposed uniqueness among all on board the train. (But if we don’t accept it, we’re left with just another two-dimensional villain, so it’s a tough call.) When we do get to the end, too, the relentless exposition dumps (another standard feature of quest narratives) are themselves played straight–which means that after almost two hours spent undermining the typical quest narrative’s structure, we’re left with a pretty boilerplate reversion to form for the close.

Maybe this is just the nature of the beast, though; maybe there really is no escaping the world–its most familiar narrative traditions–that train. What would it mean for us to step utterly outside all three? And what alternatives might be waiting for us if we did? For all that it might wobble on its tracks, Snowpiercer moves with distinct thematic purpose–even if its final destination seems an eternity from sight.

Excerpts from the Day’s Readings: Auguste Comte and Slippery Mental Frameworks

I’ve been sitting on a more reflective post, but the long slog of daily readings persists, along with a few other inanities of doctoral student life, so that essay will have to wait awhile. For now I’m just under a month from my last (I hope) major doctoral exam, and today’s readings are two books by Victorian thinkers (John Stuart Mill and G. H. Lewes) on the early philosophy of continental writer Auguste Comte, best known as the father of positivism, but also as a critical figure in the rise of humanism.

This last is an especially intriguing point for modern reflection. I’m not a fan of atheist “churches” or related assemblies of the sort forwarded by some non-religious people today, but I’m even less a fan of pretending that these ideas are anything new. Put simply, after Comte struggled with mental health concerns both within and outside institutions, divorced in 1842 (a rare and serious act in the 19th century), and lost a close platonic friend in 1846, his views and applications of positivism changed drastically in later life… and he created his own, secular church.

Indeed, from the ideal of this dead friend, Clotilde de Vaux, as a moral paragon in her feminine virtue, came the “Religion of Humanity”–a ritualistic faith patterned after the Catholic Church, with Comte as the “high priest” (read: pope), and women and the working class primarily targeted for conversion therein. As Richard G. Olson notes in Science and Scientism in Nineteenth-Century Europe, this whole movement also prompted quite a few philosophical reversals for Comte. For instance: “Whereas in the Positive Philosophy Comte had complained that sociology had heretofore been crippled by the failure to subordinate imagination to reason, in the Positive Polity he touted the importance of imagination and claimed that ‘Positivism is eminently calculated to call the imaginative faculties into exercise.'”

This is the sort of stark mental shift one must expect as plausible in all human beings–even (or perhaps especially) those who significantly contribute to a number of fields. And sure enough, Comte made a significant impact on both the philosophy of science and the development of the social sciences. To this end, he outlined three stages of societal development towards the apprehension of truths: the “theological”, which has three phases of interpreting active, conscious wills at work in the world; the “metaphysical”, which involves abstract, idealized concepts that nature moves toward without the need for conscious wills; and finally the “positive”, in which one recognizes the critical role of observation and positive verification of hypotheses in the physical sciences before the benefits of empiricism can be turned to the study of human society. From this framework emerges a coherent rationale for valuing experimental findings when seeking to describe the world: in short, a 19th-century advancement of the scientific method.

It bears noting, too, that Comte’s negotiation of both the physical sciences and the social sciences held serious philosophical weight at the time. This was an era when philosophical doctrines like utilitarianism, which advocates as morally correct whatever action maximizes the overall good, were crudely applied to sociopolitical theory on the back of ill-gotten and ill-used “facts” about the world. As I mentioned in my post about John Kidd, for instance, there were certainly men of prominence with skewed notions of what the overall “good” looked like: Leaning on natural theology, Kidd especially argued that Britain’s social welfare system took away the opportunity for the poor to practise (Christian) humility, and the rich to practise (Christian) charity, with members of all classes resenting each other in consequence.

Nor did such manipulations of worldly “knowledge” escape public notice: Dickens railed against a caricature of utilitarianism in Hard Times (1854), arguing that actions taken from a place of pure reason could produce nothing but social and individual misery. While his caricature lacked philosophical finesse, it was not far off the mark from how major leaders of industry and government officials were actively distorting such ideas to their own economic advantage. Though originally from the continent, Comte’s work–first translated into English by Harriet Martineau in 1853, but widely known in England prior–thus offered a more coherent and widely-accessible method of making inquiries into the state and needs of the world. As Martineau writes in her preface to The Positive Philosophy of Auguste Comte (1853):

“My strongest inducement to this enterprise [of translation] was my deep conviction of our need of this book in my own country, in a form which renders it accessible to the largest number of intelligent readers. We are living in a remarkable time, when the conflict of opinions renders a firm foundation of knowledge indispensable, not only to our intellectual, moral, and social progress, but to our holding such ground as we have gained from former ages. While our science is split up into arbitrary divisions; while abstract and concrete science are confounded together, and even mixed up with their application to the arts, and with natural history; and while the researches of the scientific world are presented as mere accretions to a heterogeneous mass of facts, there can be no hope of a scientific progress which shall satisfy and benefit those large classes of students whose business it is, not to explore, but to receive. The growth of a scientific taste among the working classes of this country is one of the most striking of the signs of the times. I believe no one can inquire into the mode of life of young men of the middle and operative classes without being struck with the desire that is shown, and the sacrifices that are made, to obtain the means of scientific study. That such a disposition should be baffled … by the desultory character of scientific exposition in England, while such a work as Comte’s was in existence, was not to be borne, if a year or two of humble toil could help, more or less, to supply the need.”

In short: Martineau’s translation of Comte’s work offered a philosophical foundation for empirical inquiry that would allow a wider range of persons to evaluate any “facts” put before them about how the world should be, and why, on the basis of how the natural world currently is and the natural laws it summarily follows.

In his later evaluation of Comte’s work, Mill takes particular care to negotiate the metaphoric landscapes that don’t translate well (a word in French, for instance, having a different cultural history than even its closest approximation in English), but he also notes that Comte’s work addresses how huge paradigm shifts change an entire culture’s consciousness–and how readers in any climate would do well to take similar care not to repeat their predecessors’ ideological errors. In relation to Comte’s second stage, for instance, Mill writes:

In repudiating metaphysics, M. Comte did not interdict himself from analyzing or criticising any of the abstract conceptions of the mind. … What he condemned was the habit of conceiving these mental abstractions as real entities, which could exert power, produce phaenomena, and the enunciation of which could be regarded as a theory or explanation of facts. Men of the present day with difficulty believe that so absurd a notion was ever really entertained, so repugnant is it to the mental habits formed by long and assiduous cultivation of the positive sciences. But those sciences, however widely cultivated, have never formed the basis of intellectual education in any society. It is with philosophy as with religion: men marvel at the absurdity of other people’s tenets, while exactly parallel absurdities remain in their own, and the same man is unaffectedly astonished that words can be mistaken for things, who is treating other words as if they were things every time he opens his mouth to discuss. No one, unless entirely ignorant of the history of thought, will deny that the mistaking of abstractions for realities pervaded speculation all through antiquity and the middle ages. The mistake was generalized and systematized in the famous Ideas of Plato. The Aristotelians carried it on. Essences, quiddities, virtues residing in things, were accepted as a bona fide explanation of phaenomena. Not only abstract qualities, but the concrete names of genera and species, were mistaken for objective existences. … To modern philosophers these fictions are merely the abstract names of the classes of phaenomena which correspond to them; and it is one of the puzzles of philosophy, how mankind, after inventing a set of mere names to keep together certain combinations of ideas or images, could have so far forgotten their own act as to invest these creations of their will with objective reality, and mistake the name of a phaenomenon for its efficient cause.

Mill goes on to point out that this is precisely the point of Comte’s three stages–this metaphysical fixation on abstracts-as-absolutes being an intermediate phase in humanity’s approach to understanding the world, somewhere between using words to invoke notions of a divine will and “the gradual disembodiment of [such] a Fetish”, whereafter words are simply used to identify phenomena with consistent and natural causes that can be understood through empirical inquiry.

In all the works I’m reading today–referencing Martineau’s 1853 translation and Olson’s modern literary criticism while delving into Mill’s and Lewes’ 19th-century British revisions of Comte’s doctrine of positivism–the overwhelming theme is thus one of mental frameworks in flux. On an individual level, we see this both in Comte’s personal life and in the argumentative double standards that persist in all eras. Likewise, on a societal level, massive paradigm shifts mark the whole of our written record, while the impact of a given philosophy even within a specific period is by no means culturally stable.

To my mind, a doctoral programme in the humanities tasks its students with living in a similar state of flux: capable of holding a wide range of competing histories and ideas in unrelenting tension. The trick is that, both throughout and at the end of this process, I also need to be able to synthesize these tensions concretely for a wide range of audiences. I haven’t yet mastered this last part, but… I’m getting there, I hope. One bloody book at a time!

Cheers and best wishes to you all.

Conversation Enders: The Problem with Hero-Worship

Working part-time at a local bookstore is a great reprieve from the isolation of my studies. Just as I get to know many customers’ personal lives, so too have many of them learned that I’m a doctoral student working towards her (hopefully) last major proficiency exam. When they ask me what I’m reading that day, I therefore have an opportunity to frame my studies as something useful for a general audience–and sometimes this effort goes well, but at other times the real learning experience is my own.

Two weeks ago, the book of the day was Charles Darwin’s The Descent of Man (1871), a work I’d only read excerpts from in the past. When a customer asked about its relevance, I explained that this was the work in which Darwin–ever tentative about rocking the boat with his research–made explicit that human beings were subject to his theory of evolution by natural selection, too. This book caused tremendous controversy for precisely that reason, though Darwin had gone to great lengths to defer his comments on human evolutionary behaviours until after extensive (and I mean extensive) review of the physiognomy, general behaviour, and mating pressures among various species of molluscs, fish, insects, birds, quadrupeds, and non-human primates.

Darwin received considerable criticism and ridicule for The Descent of Man (1871), which solidified the ideological “threat” first intimated in On the Origin of Species (1859), by openly integrating human development into the theory of evolution by natural selection.

But The Descent of Man has cultural significance in another capacity, too, so my synopsis for the customer included that this was also the text in which Darwin, every bit a person of his time, corrals his extensive field research on other species to make sweeping comments about the mental inferiority of women, to say nothing of the general inferiority of non-white persons. For instance:

“The chief distinction in the intellectual powers of the two sexes is shewn by man’s attaining to a higher eminence, in whatever he takes up, than can woman—whether requiring deep thought, reason, or imagination, or merely the use of the senses and hands. If two lists were made of the most eminent men and women in poetry, painting, sculpture, music (inclusive both of composition and performance), history, science, and philosophy, with half-a-dozen names under each subject, the two lists would not bear comparison. We may also infer, from the law of the deviation from averages, so well illustrated by Mr. Galton, in his work on ‘Hereditary Genius,’ that if men are capable of a decided pre-eminence over women in many subjects, the average of mental power in man must be above that of woman.”

“It seems at first sight a monstrous supposition that the jet-blackness of the negro should have been gained through sexual selection; but this view is supported by various analogies, and we know that negroes admire their own colour. With mammals, when the sexes differ in colour, the male is often black or much darker than the female; and it depends merely on the form of inheritance whether this or any other tint is transmitted to both sexes or to one alone. The resemblance to a negro in miniature of Pithecia satanas with his jet black skin, white rolling eyeballs, and hair parted on the top of the head, is almost ludicrous.”

I wouldn’t call it “enjoyable” to read such assertions–to encounter work after work (especially ones written from a position of authority, be it scientific, religious, or political) making such petty, ignorant comments at the expense of other human beings–but as a student of literary history, I find neither of these to be shocking or exceptional prejudices. They hurt, granted, but they hurt in largest part because they attest to much broader histories of exclusion and oppression. I do tend to forget, however, that many others have a different relationship with persons of note: a relationship that tends to cushion the individual from their context whenever we like something that individual did. And indeed, the customer who’d first asked about my reading was deeply troubled by my summary. “Darwin said that?” he said. “Darwin believed that?”

I tried to emphasize that Darwin’s comments did not erase his many positive contributions, but the damage was done. To try to offset these uglier aspects of Darwin’s biography, I then blundered further, by pointing out that even prominent early-20th-century suffragists, women who made great strides towards gender equality under the law, still advocated (as a great many did at the time) for eugenics policies–but this only saddened the customer further.

Now, by no means do I consider this customer’s reaction unique, but it was affecting, and I am more familiar with the other side of this flawed argument: people, that is, who will dismiss any significant contribution by a prominent individual because of some perceived failing elsewhere in their biography.

Last year, for instance, while studying for my first major exam, I made the mistake of marvelling at an historical echo: comparing, that is, John Stuart Mill’s succinct moral argument against Christianity (as found in his 1873 Autobiography, describing his childhood move from religion) with the equally succinct moral argument against Christianity used by Christopher Hitchens in more recent debate. Both regarded the notion of vicarious redemption through Christ as morally bankrupt, so the only real difference was that Hitchens could add, through a conservative estimate of the age of our species provided by modern anthropology, the absurdity of believing that a loving god watched “with folded arms” for some 95,000 years before acting to redeem the species, and even then only through barbaric sacrificial rites.

My fundamental point entailed how little had changed in these arguments–how vicarious redemption was an affront to young Mill in the early 19th century just as it was to seasoned Hitchens in the early 21st century–but my colleague interjected by shifting the conversation. This person was incredulous that I would invoke Hitchens at all, with his foreign policy views being what they were–and didn’t I know what kind of uncomfortably antiquated views he once shared about working women and motherhood?

My customer’s implicit tethering of historical significance to modern moral character, as well as my colleague’s dismissal of an argument on the basis of the speaker’s other beliefs, both rely on a fallacious connection between a person’s assertions in a given field, and that person’s actions in another. This isn’t to say that there is never transference between spheres (for instance, a researcher does not lose their knack for researching just by changing the topic of their research) but the existence of such transference still needs to be demonstrated unto itself. (So to carry forward the analogy, if a researcher who’s demonstrated excellence in one field comes out with a book involving another field, but that work lacks proper citation for all major claims therein, we would be safe in assuming that an adequate transfer of pre-existing research skills to new topics had not been demonstrated.)

These troubles of course resonate with that well-known philosophical fallacy, argumentum ad hominem (argument [in reference] to the man [doing the arguing]). But to invoke this fallacy on its own is, I think, to overlook the bigger picture: the powerfully human frustration many of us share with the acts of hero-worship we as individuals and as communities reinforce every day.

One of my favourite examples of this tension lies with Paracelsus, the 16th-century physician who railed against the practice of accepting the truth of a given medical claim based on the prestige of its original author. Instead, he argued that the human body had its own store of healing power, that diseases could be identified by predictable sets of symptoms, and that personal experimentation was thus to be preferred to taking the word of someone, say, in fancy dress, boasting cures made of exotic ingredients, who had simply studied the words of ancient healers in selective institutions of learning.

But as Paracelsus became popular for his resistance to classist medical practices (since the mystification and centralizing of medical “knowledge” only really served the interests of gentleman practitioners), his own ego, in conjunction with an eagerness among many others to defer to perceived authority, meant that, even as he championed self-knowledge, Paracelsus was also quick to declare himself a monarch of medical practice, and so to gain followers in turn.

While Paracelsus’ birth name, P. A. T. Bombast von Hohenheim, is not actually the source of the term “bombastic”, Paracelsus itself means “beyond Celsus” (the Roman physician). Despite Paracelsus’ motto (alterius non sit qui suus esse potest: let no man be another’s who can be his [own instead]), such self-aggrandizement gained Paracelsus many devotees well after his death.

In essence: Whenever it garners popularity, even resistance to groupthink can generate a sort of groupthink of its own.

The 19th century played its role in glorifying this human tendency, too. Thomas Carlyle’s “Great Man” theory of history–a way of constructing cultural mythology that fixates on narratives of individual virtue and genius–still pervades our thinking so thoroughly that we tend to pluck our “heroes” from their historical and cultural contexts, or otherwise strip them of the fullness of their humanity, in order to exalt specific contributions they might have made. The potential for error here is twofold: 1) in treating any human being as perfect, or approaching perfection, due to the significance of their words and actions; and 2) in condemning entirely the work of any person who, once exalted, is thereafter found to be (shockingly) an imperfect human being.

But therein lies the difficult catch: What if someone else–or a whole community of someone-elses–has already committed the first error? What if you’re born into a culture that already exalts certain human beings as essentially without fault, either by claiming them to be virtuous directly or by downplaying all the problematic aspects of their life stories?

How can we counteract the effect of this first error, save by risking the second?

This is no idle, ivory-tower conundrum, either: Whenever we uphold the merit of an argument through the presumed impeccability of its speaker’s character, we leave ourselves open to losing that argument the first time its speaker’s character ceases to be impeccable. And yet, we cannot allow people to remain in positions of authority whose “imperfections” perpetuate serious social harm, either through word or through act. So what option remains?

More history seems to me the only answer: The more we understand and accept the fallibility of all our most notable figures, the more we can dismantle routines of hero-worship before they ever get so extreme as to require the fallacious distraction of character assassination in the first place.

Now, obviously this kind of work runs at odds with many spiritual beliefs: beliefs in living representatives of a god on earth; beliefs in a human being who is also a god; and beliefs in human beings who claim to have transcended to another plane of existence, be it through yoga, meditation, or drugs. But even most people who would consider themselves spiritual can appreciate the danger of charismatic leader-figures–the present-day godhead of Kim Jong-Un; the Stalins and Pol-Pots and Hitlers of history; the Mansons and the Joneses of smaller, still devastating cults. So there is some common ground from which to begin this conversation-shifting work.

What we now need to put on offer, as a culture, is a way of valuing significant social contributions unto themselves. When we separate those contributions from the maintenance of individual reputations, we only further benefit society by making the process of refining those contributions easier down the line. Likewise, we need to acknowledge figures of note in the most dignified way possible: by not erasing their personhood in the process. When we allow even those who contribute significantly to their communities to continue to be seen as human beings, and therefore ever-in-process, we make the path to positive social contribution seem less unattainable (and hazardous) for others.

Granted, hero-worship is an understandable cultural norm. Many of us want to be inspired by the work of human beings who’ve come before us, and want to imagine ourselves as the potential site of inspiration for others in turn. But whether our hero-worship is fixed on a record-breaking athlete, or a soldier awarded for valour, or a scientist who made a significant breakthrough that will save thousands of lives, or an activist who stood up to oppression in a way that rallied others to their cause, or a community organizer or family member who, in their own, lesser-known way made a terrific impact on our quality of life… hero-worship still sets an untenably high standard for us all.

When that athlete emerges as a perpetrator of rape, or that soldier is found to have tortured prisoners during their tour of duty, or that scientist to have plagiarized prior work, or that activist to have resorted to brutal acts against civilians in their resistance efforts, or that community organizer or family member to have molested children, we are all rightfully devastated. And yet, even then, we tend to get defensive, and our knee-jerk response is often to make excuses for the individual–as if histories of significant action can ever be reduced to stark lists of pros and cons. No, X hours of community service do not excuse the predation of Y children; and no, X impressive rescue missions do not entitle anyone to Y assaults on inmates.

But if we really want to nip such heinous rationalizations in the bud, what we need is a better social narrative for human contributions in general. Here, then, are a few suggestions as to actions we can all take to deflate the culture of hero-worship that muddies the waters of so many critical conversations. If you have others, I welcome their addition in the comments:

1) Practise making biographical assertions without using the rhetoric of relativism, even (or especially) when those biographical notes are ugly. For instance: (a) David Hume held deeply racist views about non-white persons. (b) David Hume’s racist views, and his expression of them in his writings, were commonly accepted in his culture. (c) David Hume’s writings include significant contributions to the history of philosophy. Not “BUT these views were commonly accepted” and not “BUT David Hume’s writings include”. Ask yourself, too, why such rationalizations seemed relevant in the first place.

2) Do not deny your revulsion at the destructive words and actions of your fellow human beings–not even those who have long since passed on. Do ask yourself what destructive behaviours future humans might be equally repulsed by among people of our day and age. How much do our words and actions really differ from those of past figures of note? What is the most effective way to forward a given conversation without recapitulating their errors?

3) If spiritual, put aside notions of divine inspiration when assessing the conduct and argumentation of religious leaders and historical icons. Is their conduct and argumentation impeccable (that is, distinct from the flaws we see in other human beings)? If not, ask yourself what benefit is derived from shielding these flaws under notions of divine sanction. And what are the risks?

4) If not spiritual, consider a prominent figure you find yourself defending the most in conversation. Are you defending the validity of the person’s arguments, or the person’s character (with the implication that by defending the person’s character you’re still defending the legitimacy of their arguments)? If the latter, why, and to what end? How does this forward meaningful discourse?

Hero-worship starts early, and our media culture is exceptionally good at building people past and present up to untenable standards of excellence. Once there, we often defend the reputations of these “Great People” so zealously that we limit our ability to build upon their greatest contributions, or else bind their characters and their contributions so tightly together that when the former falls, so too, in the public eye, does the relevance of the latter.

If any single, pithy adage could thus sum up the quality of discourse possible in such a culture, it might read: “Great minds discuss ideas; average minds discuss events; small minds discuss people.” Eleanor Roosevelt’s name is most often associated with this assertion, but it wouldn’t matter one whit to the quality of this statement if someone else had said it first.

…Which is a relief, because the saying has a far older, most likely anonymous provenance. So without denying the many difficult and outright ugly histories that surround our achievements, I have to ask: How many of our best works might be easier to build upon or amend if we could just get past the celebrity-status (for better or worse) of any human beings therein involved?

On Being Human: Johansson in a Tale of Ash and Mist

Under the Skin
Jonathan Glazer
Mongrel Media / Film4 & BFI

Our fear of the unknown forms the emotional basis for many films, and our fear of the unknown in ourselves, even more. Rare is the film, though, that explores both without offering facile resolutions, and that makes constrained and intelligent use of all filmic elements to uphold its central themes.

Under the Skin lives up to its title–getting under its viewers’ skins in many ways–by being just such a rarity. The concept is fairly straightforward: Scarlett Johansson plays an alien in a human skin, whose central task seems to be the seduction of adult men, who are thereafter ensnared in an ultimately fatal process of absorption. But nothing about this situation fits tidily with notions of what it means to be, and to not be, a human being.

For one, our protagonist is no 1950s femme-fatale, delighting in the ruination of men, but rather a methodical worker (under the watch of mysterious non-human men on motorcycles) whose human mannerisms are at first only present so long as a potential target is in sight. And even the choice of target proves unsettling in its reversal of cultural stereotypes; from within her creepy white van we’re made to view isolated men of all ages as potential victims–even though, at first, we’re led to believe that our alien never takes anyone by force; and though she certainly never shows superhuman strength.

Moreover, while our alien hones her skill at this task, her indifference to everything else around her is surely meant to provoke the audience, to prompt viewers to plead with her: Be more human. Please, please, please, be more human. But even the most wrenching acts of indifference are turned, in the end, against us and our supposed humanity. In the second act, for instance, our alien picks up an isolated man and turns her now well-honed charm on him–only this time the man (played by Adam Pearson) has a severe facial deformity (neurofibromatosis), which changes entirely the significance of being treated as an object of desire by a more normatively beautiful human being.

Any viewer at this point should rightly feel uncomfortable about the tension at work on screen: If our alien treated him with the same revulsion over his external appearance that other human beings do, he wouldn’t be at risk of the same fate at her hands. But in treating him as though his external appearance should have no bearing on his fundamental worth, our alien performs a level of human equality actual human beings routinely struggle to approach.

The film revisits this tension between the tender and the brutal in our “humanity” at later, even more critical junctures, but this tension would not be possible if two things were not true throughout: For one, alien though our protagonist is, she can and does experience her environment–its sights and sounds and multitude of idiosyncrasies. Without this capacity to be impacted by her time among us, the plot could not significantly advance. And yet (for another), even as our alien shares with us this one certain trait, we have to be confronted time and again with opportunities for her to act more human in consequence (and for us to hope she acts more human in consequence) and to have each and every opportunity snatched from us in the end.

To achieve and maintain this relentless tension, Under the Skin needed to be both aesthetically striking and subtle–and it was. At one juncture, when our alien is driving through the streets of Scotland, the soundtrack takes on the breathy character of someone in a spacesuit–because, in a way, what’s mundane to us is space exploration for her. Intimations of alien physiology are likewise dealt with in delicate asides–through passing comments about heat and cold, and our alien’s attention to the smallest of insects around her–while a sequence of ash and mist becoming indistinguishable before the camera is a classically understated visual affirmation of the film’s central theme.

The sequence for male ensnarement, riffed upon three times throughout the movie, is a similarly minimalist affair that only makes the horror of the whole situation all the more startling, and grotesque. “I’m dreaming,” says the last potential victim, and when our alien agrees, I realized that this whole visual metaphor was just that: the closest the human mind could come to describing an entirely alien process of predation and destruction.

The vocabulary of cinematic art being what it is, I can thus understand why certain elements (especially one dance scene) might be considered “Lynchian”, or the film compared in other ways to classics like Altered States and 2001: A Space Odyssey. But such comparisons amount to little more than irrelevant shorthand; the film more than stands on aesthetic merits all its own. In the thematic tradition of all great science fiction, Under the Skin is a slow-building exploration of our fidelity to certain notions of what “being human” means. You’ll find no easy answers here: just one hell of a nerve-wracking chance to sit with, and perhaps confront, the alien within.

A Quick Excerpt from the Morning’s Readings

Nothing like a bit of 19th-century libertarianism to start the day!

This is from The Bridgewater Treatise of chemist and geologist John Kidd, published in 1833. At this time, as I mentioned in my last essay, social prescription is very much bound up in notions of “natural” law, which of course carries the implication of divine sanction. Kidd in particular is arguing that the “poor laws” go against the law of nature, since they compel the rich to pay for certain institutions of “relief”. Sound familiar?

(Warning: 19th-century writing is often dense and the syntactic rules sometimes differ, especially when it comes to the liberal use of commas.)

In the mind of the pauper, with all his challenging and all his boisterousness, there is still the latent impression, that, after all, there is a certain want of firmness about his plea. He is not altogether sure of the ground upon which he is standing; and, in spite of all that law has done to pervert his imagination, the possessory right of those against whom he prefers his demand, stares him in the face, and disturbs him not a little of that confidence wherewith a man represents and urges the demands of unquestionable justice. In spite of himself, he cannot avoid having somewhat the look and the consciousness of a poacher. And so the effect of England’s most unfortunate blunder, has been, to alienate on the one hand her rich from her poor; and on the other to debase into the very spirit and sordidness of beggary, a large and ever-increasing mass of her population. There is but one way, we can never cease to affirm, by which this grievous distemper of the body politic can be removed. And that is, by causing the law of property to harmonize with the strong and universal instincts of nature in regard to it; by making the possessory right to be at least as inviolable as the common sense of mankind would make it; and as to the poor, by utterly recalling the blunder that England made, when she turned into a matter of legal constraint, that which should ever be a matter of love and liberty, and when she aggravated tenfold the dependence and misery of the lower classes, by divorcing the cause of humanity from the willing generosities, the spontaneous and unforced sympathies of our nature.

See? Nothing changes: We have to deal with the same rhetoric today–the naive notion that if the government would just butt out, instead of compelling people of means to support a social safety net for all through taxation, charity of a personal nature would easily reassert itself to fill the gap, and the world would be in better balance.

The very next year, on the back of such grievances, the “New Poor Laws” would come into effect. Especially in their earliest conception, these would be harsh measures, essentially further penalizing the poor for being poor. You might be familiar with the outcome of these laws from such works as Dickens’ Oliver Twist and A Christmas Carol; the workhouses he describes in each operated under the system this 1834 legislation put in place.

Back to a long day of reading for me!

Greetings to the Swarm!

Hi folks!

I’ve seen a huge spike in readership on this little blog in the last few hours, thanks to the very kind and unexpected promotion by Dr. Jerry Coyne of an essay I wrote in response to a Slate.com review by Michael Robbins. Huge surges come with consequences, though, so I’d just like to make a few quick points.

1) Welcome! Thanks so much for adding my blog, and I hope some of my future posts will prove as interesting to you as this essay clearly did. I’m certainly looking forward to writing more posts about my readings now that I’ve had some success translating the relevance of 19th-century literature for a 21st-century audience–so thanks for that boost in confidence!

2) As I mentioned in my original comment, feedback is very much welcome. I’m a second-year doctoral student of English literature at Wilfrid Laurier University, studying styles of science writing in the nineteenth century. This means that I am a scholar in process, and with any luck I’ll remain a scholar in process throughout my life. I am therefore not presenting myself as a definitive authority, just as a lifelong learner interested in promoting a conversation that involves the fruits of my research to date. I will make mistakes, and I will hopefully be in a position to own up to those mistakes so as not to derail the conversation.

3) More to the point, I will always strive to maintain a courteous tone in conversation, and I ask that commenters here do the same. As a student of English literature, I clearly care about language, but that doesn’t mean I won’t make mistakes with it. To this end, I welcome all comments about rhetoric and vernacular (as well as content, of course) that are not forwarded in bad faith. In turn, I’ll try to signal any amendments I might make with clear “EDIT” markers in the original text.

4) That said, I’m in the middle of an intense reading list going into my final proficiency exam, so I cannot engage in much online discussion for the next while. I will try to post my readings and reflections here more often in the coming weeks, but I’ll have to beg patience if my participation in comment threads is inconsistent until after the end of August.

5) Again, welcome! A huge surge in reading numbers can be a little terrifying, but I look forward to many fruitful exchanges in the months to come.

Cheers and best wishes to you all.

Enough Already: The Anti-Atheist Article Shows Its Age

Michael Robbins, writing for Slate Magazine, recently contributed to that most robust literary genre, the anti-atheist op-ed, with a review of Nick Spencer’s Atheists: The Origin of the Species. “Review” might even be too strong a term for this piece; though the book is touched upon, its formal introduction is buried, and assessments of the text itself are routinely subordinated to Robbins’ own views on science and religion.

To this end, Robbins draws from what we’re sometimes left to assume are Spencer’s arguments, as well as standoffs with atheists (including those from that great bastion of critical discourse, the online comment thread), to make a broader set of claims: that today’s popular atheists are out of touch with the history of atheism; that these atheists just don’t “get” the point of religion, which is clearly all about metaphor, not explanatory power; and that if they truly “got” the history of atheism, modern atheists would understand that any secular morality is only a fragmented descendant of religious morality. For Robbins, then, Nietzsche is the ideal atheist–an atheist who felt that a world without a god is horrifying, and deserves to be mourned.

Such articles always have their easy cannon-fodder, with the likes of Dawkins or Hitchens thrown in as de facto examples of what Robbins terms “evangelical atheism” and others have termed “militant atheism”. These terms almost never appear with any sort of textual evidence (for instance, in what way “evangelical”–knocking on doors to spread the good word of atheism? and in what way “militant”–agitating for the persecution of believers?), and so serve as little more than caricatures in an already highly caricatured debate.

Other terms in Robbins’ article are likewise predictably heated, with “Dawkins and his ilk” identified as the “intellectually lazy” successors to Spencer’s history. This generic flogging of Dawkins should be a warning for anyone seeking insightful commentary about science and religion; it only signals to the reader that this piece is not going to concern itself so much with ideas as with the people who forward them. Robbins even ends his article with a quote lamenting the lack of such ideas-based discourse–“Everyone is talking past each other and no one seems to be elevating the conversation to where it could and should be”–without expressing any self-awareness as to how rhetoric like his keeps this conversation off-point.

And yet, such rhetoric is by no means novel: I haven’t read Spencer’s book, but Robbins references people being termed “atheist” who would more likely be regarded as theists today. He doesn’t go into great detail on this point, but if his source text is indeed a good summary of the history of atheism, it should address instances throughout that history when the term was used as an insult or threat, either to demarcate people whose beliefs differed from the status quo, or to identify those who held their disbelief too strongly.

In an 1874 essay, for instance, Thomas H. Huxley notes that 17th-century Descartes–who worked extensively to rationalize the existence of a god–had been considered an atheist by the Jesuit community. Meanwhile, Huxley himself, though best known today as “Darwin’s bulldog” and a strong advocate against religious interference in scientific progress, did not identify as an atheist; rather, he sneered at those who took up that term as being too sure of themselves–just like folks who rail against the tenor of “new” atheism today.

Having introduced Huxley to this discussion, I should surprise no one by adding that I write my concerns about these relentless anti-atheist pieces as a doctoral student of nineteenth-century science writing. It is from this same critical focus, as well as from my position as a human being with eyes and ears in the world, that I take issue with many of the arguments treated as self-evident truths in articles like Robbins’. Even putting aside the obvious strawman tactics, Robbins’ central arguments, drawn in part from Spencer’s text, just don’t hold historical water. I cannot comment on Spencer’s original framing, since I’m receiving his text through a powerful filter, but Robbins’ arguments are slippery enough as is. He writes near the outset:

Spencer’s point, of course, is that this received wisdom is naive nonsense—it gets the history of science and the nature of religious belief wrong, setting up an opposition between reason and faith that the church fathers would have found rather puzzling. … Few historians take this myth seriously, but it retains its hold on the vulgar atheist imagination. To believe it requires the misconception that religion exists primarily to provide explanations of natural phenomena.

Yes, the early church fathers believed that “reason” was that which brought you closer to god; that there could be no reason without god, such that the idea of dividing the two was incoherent. We have to remember that Aristotelian logic in particular dominated the Western world for almost 2000 years before other modes of evaluation gained a significant foothold; in this system an argument could be structurally valid, but its soundness still relied on the accuracy of its premises–and there were a heck of a lot of “common-sense” premises in that era that we know today are not phenomenologically accurate.

(Robbins even cites one such common-sense premise later, when he presents the idea of a “universe from nothing” as a concept intrinsically meriting contemplation. Despite himself arguing that “since the very beginnings of Christianity, Basil, John Chrysostom, Gregory of Nyssa, Augustine … all assumed that God’s creation was eternal, not something that unfolded in six days or any other temporal frame”, he does not consider the possibility that a “universe from nothing” might thus be nonsensical. Certainly, recent research attests to even the “vacuum” of space being occupied, such that there’s no evidence we ever sprang up from true philosophical “nothingness” in the first place, but it’s just striking to note how pervasive such “common-sense” incoherencies remain today.)

Suffice it to say, then: Yes, “reason” today is a much more secular term, involving fewer a priori assumptions than its precursor. But what Robbins really overlooks is that this shift in meaning was a difficult transition, precisely because the Judeo-Christian god was expected to have explanatory power for natural phenomena. Medieval texts in particular are rife with this thinking–our very world a direct, macrocosmic extension of the human microcosm, with the stars overhead placed there so that we might read our destinies in them.

But at the turn of the 19th century, the earnest pursuit of natural theology–that is, the practice of evidencing the Judeo-Christian god through such self-centred studies of nature–started to lose its footing. Though late-18th and early-19th-century geologists and astronomers were careful for decades not to present their findings in such a way as to stir up public controversy about the overwhelming divergence of empirical data from Biblical record, the accumulation of so many dissenting data points could not be ignored forever.

Natural theology didn’t go down without a fight, though: At its 19th-century height, a series of (originally) eight treatises–The Bridgewater Treatises, written by gentlemen of considerable standing in science communities–was issued in the 1830s to assert the persisting explanatory power of the Bible in relation to the natural world. Nor was this whole push happening on the margins of socio-religious discourse; though Robert Chambers’ Vestiges of the Natural History of Creation (1844) offended many clergymen for its deviance from Biblical record to account for new geological data (and early scientists for its sloppy, credulous, lay-person reporting on findings from other fields), it remained a popular work throughout the century, undergoing a dozen editions and attaining even a royal audience. The book makes appeals for the existence of a god despite all apparent evidence of absence in the natural world, but Chambers’ is a much-diminished godhead, a Deistic omniscience “behind the screen of nature”, who exists and acts in pointed contrast to the personally involved creator believed in by so many of the day.

I should emphasize that this was all going on prior to Darwin’s On the Origin of Species (1859) and The Descent of Man (1871), the latter of which caused trouble by spelling out that, yes, evolutionary theory really did apply to human beings, too! For a while, the old age of the earth in relation to Biblical narrative could be accounted for by there being multiple periods of flood and upheaval, with the Biblical flood being just the last of this series. However, even the universality of that flood was falling apart under empirical scrutiny, and this had serious theological implications for the story of Adam and Eve–the critical touchstone on which all notions of redemption through Christ were based. If all the floods were regional, did the sin of Adam and Eve only touch a particular lineage? And later, when the theory of evolution came into the picture, when did Adam and Eve sin in this gradual progression of species?

Robbins goes on to assert that a definition of religion “must surely involve reference to a particular way of life, practices oriented toward a conception of how one should live” but then disdains the claim that religion “is a scientific theory,” “a competing explanation for facts about the universe and life” (quoting Dawkins). Indeed, Robbins sums up his opinion on that view as follows: “This is—if you’ll forgive my theological jargon—bullshit.”

It’s not simply that Robbins (and others like him; his article is merely representative of many more) is inaccurate when he presents Christianity as he then does, as an allegorical exercise throughout the ages, absent real-world interaction with empirical input. Rather, in doing so he also erases a powerful history of human struggle–among theists and atheists alike. Reading 19th-century texts as an atheist myself, I’ve always been struck by how difficult understanding one’s moral duty becomes when known facts about the natural world change so dramatically, and when the question of what your god wants of you becomes so convoluted in consequence.

For instance: 19th-century England being a hotbed of poverty and disease, works of fiction, religious pamphlets, and opinion pieces were at odds over whether state reforms to improve the lot of the most vulnerable went with or against the Christian god’s plans. Since the natural world was so full of suffering, but nothing happened that the Christian god did not will into being, maybe suffering and income disparity were meant to exist–to give some humans a chance to practise humility, and others to offer charity?

Alternately, was this god showing his condemnation of industrialized England? Was this god awaiting human action to repair what humanity had wrought? Was that the reason for the rampant spread of disease, and the difficulties disposing of so much waste? But if so, why were so many innocent children suffered to be born into this system, when their isolated circumstances rarely gave them a chance to accept Christ before their untimely deaths?

And what about all those new animals being shipped to England from all over the world–more species than could ever have fit on the Ark? What did we owe our fellow species if we were all actually part of one long chain of being? Was vivisection unjust, or were we still the divinely-chosen stewards of the Earth, as Genesis suggested, entitled to do with the rest of the world as we wished?

While Robbins insists on this separation between religion as “a conception of how one should live” and any conception of the world drawn from empirical evidence, he also argues that, since religious moral traditions have historically preceded secular moral traditions, movements like humanism are simply degenerate versions of religious belief (drawing here, in part, from political theorist and atheist John Gray). This puts Robbins in murky territory around the notion of moral education–but thankfully, this murk is easily cleared up when we move past the false dichotomy of science and religion being the only possible answers as to morality’s origins.

Simply put, even just looking at the range of Christian cultures in existence today, we see moral divergence: Some support the death penalty; some do not. Some have marriage equality; some tacitly or even overtly sanction the incarceration, torture, and murder of gay persons. Some believe in equality between the sexes; some believe in strict gender roles. Some believe in genital mutilation for children of both sexes; others for children of one; others for children of none. Some believe in using the state to provide a basic social safety net to help everyone in times of distress; some believe that charity should start–and end–in independent efforts through the church and at home.

But I don’t doubt that I could ask Christians in each and every one of these cultures where they get their sense of how one should live, and receive the same answer: “From the Bible.” Or possibly: “From Jesus.”

Similarly, evidence about the natural world is only as good as the culture that receives it. For some, the idea that certain brutal acts are present in nature is enough to suggest that we should sanction those acts in human societies; for others the brutality of nature is as good a self-serving incentive as any to build a better, safer community for all. For others still (perhaps those more attuned to suffering around them), the way the world is should simply never preclude us from trying to shape it otherwise.

Which brings us to the shape of our culture–this digital, Anglocentric, North American community in which we see time and again the popularity of articles like Robbins': anti-atheist rhetoric by an author who nevertheless claims to want a more thoughtful discussion, a discussion in which atheists and theists are speaking directly to one another instead of over each other’s heads. But in a review that centrally castigates a caricature of modern atheism on a poorly evidenced charge of historical ignorance, Robbins instead evades important histories of his own: histories of thoughtful theists, learned and lay alike, who over the last two millennia looked to the natural world assuming it carried literal Biblical histories both within and upon it.

Robbins and similar religious writers try to write off such theists as mere fundamentalists, and accuse atheists of targeting the “low-hanging fruit” of Biblical incoherence and Creationist nonsense instead of tackling “sophisticated” arguments like David Bentley Hart’s, which involves a “ground-of-all-being” god-concept: ineffable, Deistic, (still male), yet somehow of personal relevance when contemplating how best to live. But for all these attempts to place the god debate outside the world we all live in, the great bulk of Judeo-Christian history still lies with those theists who believed in a personal, present, and active creator as described in the Bible, even as both the natural world and the weight of social history revealed less and less synchronicity with Biblical descriptions and prescriptions over time.

Diminishing the reality and diversity of such Biblical adherents–and thus dismissing consequent atheist concerns about how to build a better society when so many people still invoke this sort of god in their political and personal decisions–isn’t even “talking past each other”; it’s denying the full and profoundly human range of voices at the table. Surely we’re capable of more.