Disingenuous Critique: John Gray’s Review of Richard Dawkins

I almost wonder why I’m bothering to write a response to John Gray’s New Republic review of Richard Dawkins’ An Appetite for Wonder: The Making of a Scientist. I am a strong opponent of hero-worship, I have not read Dawkins’ autobiography myself, and even without reading it, I can already anticipate many facets of Dawkins’ fairly affluent, colonial upbringing that probably go less examined and deconstructed than would satisfy a modern, more culturally self-reflective audience.

However, for a philosopher, Gray so sorely abuses the rhetoric of his discipline to perpetuate a long-standing attack on Dawkins–and moreover, does so on the back of some truly distorted representations of 19th-century thought–that I feel I would be remiss, as one who studies the non-empirical uses of rhetoric in 19th-century science writing, to keep mum on the review as a whole. This is absolutely not to say that any critique of Dawkins is inherently wrong; I simply wish to see prominent public thinkers use the best possible argumentation when they set about dismissing other claims.

To this end, my issues with Gray’s wholly inappropriate approach (and again, he is a seasoned political philosopher; there is no excuse in ignorance to be found here) begin almost at the outset, when he cites a portion of Dawkins’ The Selfish Gene (1976) and then applies a form of psychoanalysis more in keeping with humanities scholarship of the same academic era:

Intelligent life on a planet comes of age when it first works out the reason for its own existence. If superior creatures from space ever visit earth, the first question they will ask, in order to assess the level of our civilisation, is: “Have they discovered evolution yet?” Living organisms had existed on earth, without ever knowing why, for over three thousand million years before the truth finally dawned on one of them. His name was Charles Darwin.

Several of the traits that Dawkins displays in his campaign against religion are on show here. There is his equation of superiority with cleverness: the visiting aliens are more advanced creatures than humans because they are smarter and know more than humans do. The theory of evolution by natural selection is treated not as a fallible theory—the best account we have so far of how life emerged and developed—but as an unalterable truth, which has been revealed to a single individual of transcendent genius. There cannot be much doubt that Dawkins sees himself as a Darwin-like figure, propagating the revelation that came to the Victorian naturalist.

First, Gray pathologizes as distinctly Dawkinsian an exceedingly popular argumentative trope (“If aliens came to Earth…”) that always presupposes the observing aliens to be a more intelligent form of life. From here, Gray calls out the myth of individual genius–which is absolutely a dangerous and inaccurate concept, but also one that permeates the great bulk of Western ideology, whereby individuals–of industry, of political office, of activism, of science and technology, of scholarship–are routinely celebrated to the exclusion of the communities and cultures that fostered their growth and contributed to their greatest achievements.

But the reason for treating these conventions as exceptional and bizarre in Dawkins is made clear in that last line: “There cannot be much doubt that Dawkins sees himself as a Darwin-like figure…” Really? Plainly asserting a paucity of doubt does not make it so. The only argument being made here is by the most tenuous of extrapolations: If Dawkins does indeed see Darwin as a “single individual of transcendent genius”, how does it intrinsically follow that by “propagating [Darwin's] revelation” (that is, not creating any new revelation himself) he fashions himself as another Darwin?

To put it another way, billions of people regard Jesus Christ, Muhammad, Buddha, the Dalai Lama, or heck, Ayn Rand or Deepak Chopra as singularly transcendent individuals. Without denying the likelihood that some people who buy into this myth of individual genius might also consider themselves transcendent individuals on par with their idols, there is absolutely nothing intrinsic about this connection. The great majority–as should be expected in a species so dangerously prone to groupthink and obedience to perceived authority–idolize from afar.

Gray goes on to talk about the way Dawkins seems to treat his upbringing in British-occupied Malawi; the comfortable colonialism is, again, not surprising, but the insinuations Gray draws from moments of self-reflection are. To this end, Gray writes on a boarding school incident as follows:

Today, Dawkins is baffled by the fact that he didn’t feel sympathy for the boy. “I don’t recall feeling even secret pity for the victim of the bullying,” he writes. Dawkins’s bafflement at his lack of empathy suggests a deficiency in self-knowledge. As anyone who reads his sermons against religion can attest, his attitude towards believers is one of bullying and contempt reminiscent of the attitude of some of the more obtuse colonial missionaries towards those they aimed to convert.

I am trying to avoid the language of incredulity, but as an honest reviewer of an autobiographical text, what exactly is Gray expecting here–that Dawkins interrupt his chronological narrative to reflect on the possible similarities between his boyhood indifference to the physical abuse inflicted on a small child in front of him, and his articulated opposition to religion, as espoused in spheres of adult debate?

But no–it’s not even that simple, because Gray shifts the goalposts even within this single thought: starting with the abused English boy and ending by likening adult-Dawkins, for engaging in public debate, to “some of the more obtuse colonial missionaries”. Remember that Gray’s indictment here is a “deficiency in self-knowledge”, but the standard for “self-knowledge” he sets is nothing less than Dawkins moving from the case of this abused little boy to a recanting of his public debates against religion as a kind of bullying emblematic of the entire British colonial system. For someone who extols the importance of nuance and gradation when assessing human behaviours, Gray is highly selective about which behaviours get a pass.

Of course, Gray’s particular biases come to the fore soon enough; an atheist himself, he makes his case for religious value from very selective constructions of faith. In a paragraph acknowledging that Dawkins’ atheism emerged amid a groundswell of the same, Gray writes:

If there is anything remarkable in his adolescent rebellion, it is that he has remained stuck in it. At no point has Dawkins thrown off his Christian inheritance. Instead, emptying the faith he was taught of its transcendental content, he became a neo-Christian evangelist. A more inquiring mind would have noticed at some point that religion comes in a great many varieties, with belief in a creator god figuring in only a few of the world’s faiths and most having no interest in proselytizing. It is only against the background of a certain kind of monotheism that Dawkins’s evangelical atheism makes any sense.

Did you catch that reconstruction of the world’s religious backdrop? “[A] creator god figuring in only a few of the world’s faiths” is a curious way to present the overwhelming religious dominance of the Abrahamic faiths (Judaism, Christianity, Islam) and their pressing, ongoing influence on sociopolitical affairs in the Western world. Moreover, Gray is blatantly dishonest when he writes “A more inquiring mind would have noticed at some point that religion comes in a great many varieties”: the previous paragraph quotes Dawkins as learning from his mother “that Christianity was one of many religions and they contradicted each other. They couldn’t all be right, so why believe the one in which, by sheer accident of birth, I happened to be brought up?” This was the conversation, according to Dawkins, that started him on the road to atheism.

Gray then dissents from Dawkins on the Pauline notion of original sin, and further makes the oft-used, inaccurate claim that Biblical literalism is more the stuff of “[c]oarse and tendentious atheists of the Dawkins variety” than Christian history, citing Augustine’s interrogations of what the words in Genesis might mean in order to imply that Augustine didn’t, say, believe in a very young Earth. I went into the flaws of this rhetoric in my criticism of the Slate.com article by Michael Robbins, but I will reassert here that allegorical interpretations were absolutely not made exclusive of literal interpretations by early Christian figures–and why would they be? It is no great mark against early Christian figures to say that they operated as well as they could with the knowledge they had on hand. Until the concepts of deep time and deep space gained public purchase in the 19th century, it was only natural that historical accommodation would be made for the events spelled out in Genesis.

On this accord, then, Gray would be well-served in reading contemporaneous reviews of, and related cultural responses to, both On the Origin of Species (1859) and The Descent of Man (1871) (the latter of which explicitly tethers humankind to evolutionary theory). Gray writes: “When he maintains that Darwin’s account of evolution displaced the biblical story, Dawkins is assuming that both are explanatory theories—one primitive and erroneous, the other more advanced and literally true.” Well, yes: Taking into account the extreme hostility of most initial reviews, particularly from Christian sources, the fact that 11,000 Anglican clergymen signed a declaration in 1860 that the Bible must be taken literally, and the banning of On the Origin of Species at Trinity College, Cambridge, if Dawkins is “assuming that both are explanatory theories”, then he keeps good company with the contemporaneous readers of Darwin who believed the same.

Another point of carelessness regarding 19th-century thought emerges after Gray criticizes Dawkins for not demonstrating a nuanced understanding and evaluation of different philosophies of science: “empiricism”, “irrealism”, and “pragmatism”. In the very next paragraph, Gray introduces “positivism” in the 19th century, as if it were a singular school of thought, and not in actuality a range of philosophical responses to (among other things) Hegelian negativism. When he returns to positivism in a later paragraph, Gray shows just how poorly he understands this discourse when he writes:

More intelligent than their latter-day disciple, the positivists tried to found a new religion of humanity—especially Auguste Comte (1798–1857), who established a secular church in Paris that for a time found converts in many other parts of the world. The new religion was an absurdity, with rituals being practiced that were based on the pseudo-science of phrenology—but at least the positivists understood that atheism cannot banish human needs that only faith can meet.

No. Comte tried to found a new, humanist religion in the latter half of his intellectual career, a fact which drew criticism from other thinkers developing philosophies of science in the positivist vein. While public thinkers like William Whewell, John Stuart Mill, and G. H. Lewes all wrote works negotiating positivist points of view, their engagement with Comte’s religiosity was secondary to their engagement with his earlier work, and even then, the ensuing schools of thought differed widely. In the introduction to his Illustrations of Universal Progress (1864), Herbert Spencer articulates this division plainly when he writes:

But it is not true that the holders of this doctrine and followers of this method are disciples of M. Comte. Neither their methods of inquiry nor their views concerning human knowledge in its nature and limits are appreciably different from what they were before. If they are Positivists it is in the sense that all men of science have been more or less consistently Positivists; and the applicability of M. Comte’s title to them no more makes them his disciples than does its applicability to the men who lived and died before M. Comte wrote, make them his disciples.

My own attitude toward M. Comte and his partial adherents has been all along that of antagonism. … I deny his Hierarchy of the Sciences. I regard his division of intellectual progress into three phases, theological, metaphysical, and positive, as superficial. I reject utterly his Religion of Humanity. And his ideal of society I hold in detestation. … The only influence on my own course of thought which I can trace to M. Comte’s writings, is the influence that results from meeting with antagonistic opinions definitely expressed.

I also call attention to this inaccuracy of Gray’s because it exists so blatantly in service of ad hominem argument. Can any other purpose be divined from the construction, “More intelligent than their latter-day disciple, the positivists”, than to suggest that Dawkins is a fool who would do well to be more like his amicable, unified, humbler positivist forefathers?

This idealized version of Victorian discourse emerges elsewhere in Gray’s review, and I for one am tired of public thinkers deciding that the appropriate “tone” for intellectual debate today should be based on such glosses of our past. For someone who wants Dawkins to be more self-reflective about the influence of British colonialism on modern rhetorical practices, Gray also ignores how discussions about science in the 19th century were routinely used to reinforce cultural notions of ethnic, geographical, and sex-based superiority: By no means were these the “glory days” of such debate. And then there are more basic issues with his history:

Unlike most of those who debated then, Dawkins knows practically nothing of the philosophy of science, still less about theology or the history of religion. From his point of view, he has no need to know. He can deduce everything he wants to say from first principles. Religion is a type of supernatural belief, which is irrational, and we will all be better off without it: for all its paraphernalia of evolution and memes, this is the sum total of Dawkins’s argument for atheism. His attack on religion has a crudity that would make a militant Victorian unbeliever such as T.H. Huxley—described by his contemporaries as “Darwin’s bulldog” because he was so fierce in his defense of evolution—blush scarlet with embarrassment.

As someone who has read the major attempts to describe a philosophy of science in the 19th century, let me just go right ahead and say that 19th-century thinkers had just as much difficulty articulating and understanding these ideas; a rational system for scientific inquiry was being constructed throughout the era, and never present in anything resembling a finished state. The closest construction might be John Stuart Mill’s A System of Logic (1843), which was well-received, but the latter half of the century still had scientists conflating the limits and nature of deductive and inductive thought.

Also, it is patently dishonest to describe T. H. Huxley as a “militant Victorian unbeliever”; Gray’s sentence structure relies on a modern reader directly aligning believer-in-evolution with atheist, but Huxley was adamantly agnostic, counting himself “not among atheists, for the problem of the ultimate cause of existence is one which seems to me to be hopelessly out of reach of my poor powers” and going on to write that, “[o]f all the senseless babble I have ever had occasion to read, the demonstrations of these philosophers who undertake to tell us about the nature of God would be the worst, if they were not surpassed by the still greater absurdities of the philosophers who try to prove that there is no God.”

Surely, though, these quotes mean that, even if Huxley isn’t a “militant … unbeliever”, he would still “blush scarlet with embarrassment”–but if Gray wants to condemn Dawkins for adopting what Gray terms a “missionary” speaking style, he can choose no worse counterpoint than Huxley, a self-admitted sermonizer (ribbed often by Spencer for the same “clerical affinities”), who recounts a fond anecdote in his “Autobiography” of “preaching to my mother’s maids in the kitchen as nearly as possible in Sir Herbert’s manner one Sunday morning when the rest of [his] family were at church.”

Ultimately, Gray returns to his opening gambit–drawing the most tenuous of connections between Dawkins’ text and Gray’s own store of opinions on the man. Responding to Dawkins’ lament that Darwin never attained the rank of “Sir”, and what this implies about the British honours system, Gray writes, “It is hard to resist the thought that the public recognition that in Britain is conferred by a knighthood is Dawkins’s secret dream.”

Is it hard to resist, though? Or do sentences like these, with all they advance in the way of poor argumentation by a public thinker who makes a point of upbraiding others for their lack of philosophical rigour, say far more about John Gray than Richard Dawkins–and even, more critically, than the work of autobiography ostensibly up for review?

Cronenberg’s Big Mouthful


Consumed
David Cronenberg
Scribner

While reading the first novel by David Cronenberg, acclaimed director of over a dozen unsettling films, I asked myself all the obvious questions: Why did Cronenberg think print the right form for this story? Could it have been a film? These might seem like biased inquiries, but since Cronenberg brings his long career in cinema to bear on the promotion of this work, it seems fair to reflect on why he jumped mediums, and whether that jump worked.

Ultimately, I had to concede that a book was the right vehicle for this story: The text makes arguments I doubt would translate well to the big screen, and offers a level of technology-worship that works best when written out in loving detail. However, as with many first novels, the very concepts that make this a difficult piece to film bloat the final third with relentless exposition. Put simply, Consumed is not just a novel; for better and for worse, this thriller aspires to philosophical statement as well.

As with so many of Cronenberg’s films, Consumed is also a visceral and brutal text, making extensive use of sex and the grotesque in conjunction with other ideas of consumption–not least of which being media-related. To this end, our protagonists are two intimate photojournalists on ostensibly different freelance paths: Naomi is investigating a shocking French scandal–one member of a famed philosophical couple found dead and partly eaten; her partner, Aristide Arosteguy, missing and presumed guilty. Meanwhile, Nathan first follows a medical practitioner performing under-the-table operations on the dying and dysphoric, then stumbles upon an even more striking subject after tracking down the originator of a contracted STI.

These plot-lines eventually dovetail (as does, unfortunately, the STI), but while cannibalism remains a core concept throughout, Cronenberg is clearly thinking about this subject more figuratively, with fellatio and self-mutilation (though very much present) giving way to deeper questions about the loss of self. For instance, is there an even more grotesque way in which we can be (and are) consumed? And if we knew what it was, could we ever escape that form of consumption? The book reveals a way in which two characters certainly try, but ends on an ambivalent note.

The journey, on the other hand, offers its own imaginative pleasures. Granted, you either have to love camera technology or appreciate Cronenberg’s love of technology to get through huge, descriptive swaths of this otherwise-lean book, but this fixation extends to the unsettling, near-future potential of other gadgets, like the bio-3D-printer and neuroprosthetics, so there is variety, on the whole.

Consumed is also filled with the sort of strong, definitive sentences one expects in a thriller. For instance, in describing the cannibalized philosopher (as she looked prior to death), Cronenberg writes: “A sixty-two-year-old woman, Célestine, but the European version of sixty-two, not the Midwestern American mall version.” And though the term “bipolar” emerges later (indirectly) in a paragraph about another character, Cronenberg has already made the point plain enough in writing: “She hated her own volatility, the cycling so easily between manic confidence and crushed, hopeless insecurity.”

If Cronenberg’s description ever flags, it tends to do so when negotiating female sexual response–a tedious, but predictable lapse even in a book with so much sex. My favourite example arises when the book’s running parallel between the mysterious French philosophers and the sexually-intimate, jet-setting photojournalists is drawn explicitly, and Cronenberg writes, “The thought made her giddy, and some juices began to flow.” Cue (in my head at least) a thought-bubble blooming over dreamy Naomi, ’90s-TV-commercial-style, and in it a carton of Tropicana opened and poured into a glass.

Consumed is, in other words, not a perfect work, and it’s especially weighed down by the trap of third-act exposition, as well as the desire to make a much larger philosophical point in an otherwise lean, mean thriller. Nonetheless, this story definitely belongs in book form, and Cronenberg, master of body horror that he is, has still produced a(n unsettlingly) meaty tale.

An Irish Black Comedy on Strange, Everyday Faith

Calvary
John Michael McDonagh
Reprisal Films

It shouldn’t surprise people that a film about religious belief might fascinate an atheist, but it often does. Billions of people espouse belief in a god, and so for them a god is real, which in turn makes a kind of god–the god in the minds of human beings–an unequivocal reality that informs our cultures and communities. When a film comes along that openly deals with such beliefs and their social implications, why would an atheist inherently opt out?

Calvary, written and directed by John Michael McDonagh, is an especially strong contender in this category: a film that negotiates a great many difficult human crises without offering easy answers, and which pairs the “mysteries” of a god with more pragmatic mysteries here on Earth. In particular, this film follows Father James (Brendan Gleeson), who at the outset is given notice in a confessional that he will be murdered in a week’s time for the sins of another priest. He then goes about his rounds as usual, visiting a range of conflicted parishioners in his small Irish town.

The ultimate “game” of the film, then, is trying to figure out who the would-be killer is, and waiting to see if Father James will actually die. Father James claims to know who his would-be killer is from the outset, but all viewers have is a gender and an accent, which could easily describe over half the people Father James meets. In the meantime, there are a host of other puzzles awaiting Father James on his rounds: the woman who may or may not be happily battered by either husband or lover; the prisoner torn between feeling godlike when killing and looking forward to a heaven where he no longer wants to hurt women; the atheist doctor; the smug publican; the man with everything and nothing at all; the old writer longing for a quick death.

Calvary wisely mocks the more stereotypical of these characters and scenarios, while the sense of danger for Father James nonetheless intensifies as the week progresses. But the most significant relationship in the film is perhaps that between Father James and his daughter, introduced to us after her failed suicide attempt. In one of their conversations, the relationship between earthly father and child becomes a good metaphor for heavenly father and earthly child–up to and including the child’s fear of the father going away. Since we know that Father James might literally die, this conversation invites reflection on the possible death of a heavenly father, too, and what that might mean to the human beings who remain attached to him.

This interpretation certainly resonates with other conversations throughout the film, in which Father James’ church is relentlessly identified as a dying institution by people glad to see it go, and also with the fact that his specific parish church is razed to the ground. These signs of destruction in turn make the final stand-off between Father James and his would-be killer all the more significant: If the beach where they meet is supposed to be Father James’ “Calvary”, then Father James–spiritual leader, earthly father, and vessel for metaphors about a heavenly father–has been acting in a pointedly Christ-like role throughout, with all the complicated implications therein.

Without spoiling the ending, I can say that this framework presents serious questions about the nature and relevance of sacrifice in relation to personal beliefs, whatever those might be. As Father James explains earlier on, he does not hate a man he yelled at in a moment of weakness; he simply thinks this man has no integrity, and that this is the worst thing he can say about a person. Calvary negotiates the importance of many moral lessons, but none more so than this: That our beliefs–be they rooted in trauma or greed or indifference or faith–are not static ideals but active principles, hurtling us against one another in all manner of not-so-easily anticipated ways.

So tread carefully, Calvary rather deftly seems to say.

Lewis Wolpert’s Gender Trouble


I own two books by developmental biologist Lewis Wolpert: How We Live & Why We Die: The Secret Lives of Cells and Six Impossible Things Before Breakfast: The Evolutionary Origins of Belief. I enjoyed the former for its simplicity, but I found the latter simplistic, presenting too narrow a claim and too selective a data set in exploring an otherwise intriguing topic.

Consequently, when I encountered an article in The Telegraph promoting Wolpert’s latest book, I wondered on which side of that precarious line (between simple and simplistic) this work would fall. The headline was not promising—“Yes, it’s official, men are from Mars and women from Venus, and here’s the science to prove it”—and despite the playfulness of its literary heritage, the title of the book itself, Why Can’t A Woman Be More Like a Man?, also invokes a terrible (and terribly long) history of women being regarded as the inferior, half-formed, child-like sex. 

Obviously, however, there are biological differences between male-sexed and female-sexed persons, and I am more than happy to entertain new information therein, so I read on. I just also know, as a doctoral candidate studying the rhetoric of science writing, that the veil of empiricism has long been used to forward poorly evidenced claims that also conveniently affirm pre-existing (and often oppressive) world views. 

In the era I study, Francis Galton, Herbert Spencer, and Charles Darwin are all guilty of this charge to varying degrees, but the nineteenth century by no means has a monopoly on the scientifically-shielded rhetoric of sexual and racial superiority. Just last month, for instance, there was outcry over A Troublesome Inheritance: Genes, Race, and Human History, wherein science journalist Nicholas Wade argued for a genetic basis behind the (stereotyped and historically skewed) behaviours and societal outcomes of the “three major races” (Caucasian, African, Asian). In a New York Times letter-to-the-editor responding to David Dobbs’ book review, 140 human population geneticists expressed disagreement with the conclusions Wade drew from their work. Wade, in turn, claims such disagreement is “driven by politics, not science”—though, again, it is the lack of concrete scientific backing in Wade’s work to which these researchers overwhelmingly object.

The Case at Hand

As it turns out, similar missteps emerge in The Telegraph article, authored by Wolpert himself. Though he makes it clear he understands the controversial nature of his topic, he does little to demonstrate that his book is capable of rising above such cultural bias. Certainly, he asserts an intention to focus on the science alone, writing:

In recent years the politically correct argument has emphasised social causes to such an extent that it has sometimes virtually ignored our genetic inheritance and the role of genes. I have set out to look at the important biological evidence we may have been ignoring.

The trouble is, if this article is any indication, Wolpert has difficulty identifying and ruling out possible confounding factors in behavioural studies. Let’s take a look at the rhetoric in these two paragraphs, for instance:

Children have sexual feelings at a young age. Small boys often get erections after the age of about seven, and by puberty more than half of all males will have tried to masturbate. It is only when girls reach puberty that they may begin to do so. There are strong biological and also some social influences determining homosexuality. A surprising finding is that the odds of a boy being gay increase by one-third for each elder brother he has.

About half of men think about sex every day or several times a day, which fits with my own experience, while only 20 per cent of women think about sex equally often. Men are far more likely to be sexually promiscuous, a throwback to evolution where procreation was all-important. The need for a more emotional attachment found in women must also have an evolutionary basis.

I had to laugh at the first paragraph; parents of young female children will absolutely arch their brows at the claim that girls do not explore their genitals until puberty. But I am not going to scrounge around for studies to “prove” this because the data in question is not biological: It amounts to self-reporting, a form of research heavily influenced by social factors. 

(In my first year at university, for instance, I discovered that many women from more religious/conservative backgrounds had coded the term “masturbate” as a male activity, and thus something they intrinsically could not do. Once it was explained that masturbation meant touching one’s erogenous zones—any erogenous zones—for pleasure, a conversation about the sexual exploration these women had, in fact, been doing could finally emerge. Our choice of language goes a long way to informing our research results.)

Moreover, Wolpert himself notes the higher incidence of gay male persons in families with older brothers (a social influence), and then makes more claims that are wholly based on self-reporting. Unless Wolpert’s book shows some neurological basis for the claim that men think about sex more than women, he is simply responding to sociological research skewed by the cultural factors that frame male and female sexual self-disclosure.

Indeed, a good example of this disconnect between self-reporting and actual biological response even emerges in a study he later alludes to—a study which itself attests to the wide reach of female arousal. When he writes, “In contrast, both male and female erotica cause sexual arousal in women, whether heterosexual or lesbian,” I recalled the research reviewed in the New York Times Magazine article, “What Do Women Want?” This study involved both self-reporting and biological monitoring of male and female persons (straight, gay, and bisexual alike) in response to a range of visual stimuli. The results were remarkable: Women showed patterns of arousal so indiscriminate that even pictures of bonobo coitus got their “engines” running—but do you imagine for one second that these women self-reported the same, full range of response? (It’s a good article—well worth the read.)

More trouble emerges in the assertion that “[m]en are far more likely to be sexually promiscuous, a throwback to evolution where procreation was all-important.” Putting aside the same cultural issue with self-reporting, there are two problems with this claim: 

1) We live in an age of genetic testing, which has offered up a striking fact about non-paternity: Averaging across different populations, roughly 1 in 10 children are not biologically spawned by their presumed fathers. This figure is as low as 1% in some populations, and as high as 30% in others, which itself attests to a strong cultural influence on patterns of non-monogamy. Keep in mind, too, that not every act of infidelity will culminate in a child, and these figures offer a level of environmental complexity that this article does not even entertain as a possible source for divergent sex-based outcomes.

2) We also live in an age of re-testing and retraction, which of late has included a resounding challenge to the classic fruit fly study used to argue that males benefit more, evolutionarily, from promiscuity. Bateman’s 1948 research is a perfect example of a study with findings that went unquestioned because they reinforced pre-existing cultural beliefs. However, when recent researchers attempted to replicate his findings, they found serious problems with the construction of his data set, and an underlying bias towards using progeny to identify fathers alone (as if a spawned fly does not need both mother and father to exist). Suffice it to say, then, such research is “showing its age”, and at present the jury is still out on whether males disproportionately benefit from non-monogamy.

Finally, as rhetoric goes, the last sentence in the above excerpt is the sloppiest: “The need for a more emotional attachment found in women must also have an evolutionary basis.” The first issue I have with this sentence is the vagueness of its conclusion: Wolpert initially claims “biological evidence” to be the focus of his book, but “evolutionary basis” can mean a couple of things. We might be looking at this issue genetically, or we might be looking at it in relation to cultural memes. Wolpert is not clear in this regard, allowing him to shuffle this comment about female emotional attachment in with the rest of his “biological” claims as if they were one and the same: As it stands in relation to the “evidence” he forwards here, they are not.

My second issue is with the word “must”. Really? Well, that was a fiendishly quick, non-scientific way of dispensing with alternatives. It may well be that there are concrete, genetically-moored differences underlying these relationship behaviours, but in the absence of clear genetic evidence to this end, one must at the very least eliminate cultural factors first. 

For instance, in the Mosuo culture of Southern China, where women hold the bulk of social power and are entirely in control of their sexual encounters, a markedly different set of relationships exists between the sexes, both when it comes to sexual solicitation and to childrearing. This is a perfect example of different cultural memes informing different sexual behaviours, and such communities are thus a thorn in the side of anyone attempting to mark disproportionate female “emotional attachment” as having a definitively biological origin.

Why does any of this matter? 

Simply put, it is disingenuous and counterproductive to present a binary between a “politically correct” version of scientific research and a “just-the-facts” version that appeals to objectivity, but ultimately hides behind the “common sense” argument of dominant cultural beliefs. If we assume that the pursuit of scientific knowledge stands to benefit human beings by providing us with the most accurate understanding of our world and our interactions within it, every effort should be made to ensure that accuracy, not personal affirmation, is the ultimate aim of all research.

What makes this aim difficult, granted, is that we are all human, and as such, liable to jump to conclusions that confirm pre-existing biases. This has to be forgiven, to some extent, but we commit an additional, wholly avoidable mistake in pretending that any data we gather can exist outside this cultural lens. Mistakes like this only serve to uphold that initial bias.

To use an example within Wolpert’s sphere of inquiry, consider the well-known fact that female persons (on average) perform more poorly on spatial reasoning tests than male persons (on average). The jury is still out on whether this, too, is cultural (for instance, consider the study comparing puzzle completion times in matrilineal and patrilineal societies), but let us imagine for a moment that it is entirely biological. In the context of our culture, with its long history of regarding female persons as intellectually inferior, such data is often read in lockstep with the conclusion that female persons simply do not have the aptitude for, say, careers in STEM subjects.

Now, putting aside that there are many detail-oriented skills for which female persons (on average) outperform male persons (on average), imagine if the same determinist conclusions were drawn from the equally well-known fact that male persons (on average) lag significantly behind female persons (on average) in reading comprehension. Even if we similarly discovered a fundamentally biological origin for this gap, could you imagine anyone then seriously concluding that male persons do not have the aptitude for, say, careers in law, literature, policy-making, or academia?

Of course not. But precisely because such sloppy conclusions are routinely drawn around facts that seem to fit a pre-existing cultural narrative, science writers have a responsibility to ensure that, if they are going to arrive at conclusions that will reinforce already-oppressive cultural narratives, their evidence- and argument-based paths to these conclusions are impeccable.

Both gaps, by the way, can be bridged to some extent, as evidenced by a range of education strategies implemented in response to the observed existence of such gaps in the first place. In social transformations like these, we thus see the tremendous, real-world benefit of knowing as much as we can about actual human beings and their environments.

The trouble simply arises when science writing touted as empirically rigorous seems to be neither empirical nor rigorous, as in the case of most of the examples Wolpert presents in this Telegraph article. I am not advocating for a “politically correct” science by any measure; I simply expect that, in the absence of definitive biological evidence for specific gender stereotypes, a seasoned science writer will a) recognize his cultural context, up to and including personal biases, and b) take better care in addressing and excluding other possible explanations for the sex-based divergence of specific human behaviours before claiming a fundamentally biological causation.

Wolpert might do all this in the book itself, granted. He might painstakingly review recent challenges to old, status quo research on male/female aptitudes and sexual proclivities. He might likewise acknowledge the dangers of a scientific over-reliance on self-reporting, and more openly concede that there is an important difference between the evolution of “memes” and genes. And if he does all this in Why Can’t A Woman Be More Like A Man?, his argument might very well hold together on a strictly empirical accord.

I’ll never know, though, because this promotional piece of his does little to inspire reading on—and the literary world is filled with so much more.

On Authority: Why Paying Attention to How We Pay Attention Matters Most

The world did not sit idly by while I studied for my final doctoral exam this summer. While I read nineteenth-century science textbooks, philosophical treatises, works of natural theology, university lectures, experiment write-ups, and a range of fictional accounts involving the natural world, violence swelled about me—about us all—in its many awful forms.

Outside mainstream news, I knew Syria, Myanmar, and the Central African Republic were still sites of strife and potential genocide. Meanwhile the urgency of other brutalities, like news of the kidnapped young women in Nigeria (who for the most part remain prisoners today, along with newer victims of Boko Haram), had begun to fade in the Western press—still horrific, but nowhere near as immediate as word of the tortured and murdered teens initially said to have instigated fresh hostilities along the Gaza Strip. Against the photo essays that emerged from Israel’s ensuing airstrikes—the pain and the loss and the fear writ large on so many faces—events elsewhere took an inevitably muted role in Western coverage.

Even macabre word of a new organization sweeping across Iraq, openly murdering religious minorities or otherwise cutting off access to resources and regional escape, did not gain as much media traction for most of the summer, despite these events prompting forms of American aid-based intervention. Rather, it would take the beheadings of American and British citizens for ISIS to take centre stage in Western media—a position it currently occupies alongside a sweeping NFL scandal in which celebrity sports stand indicted for their role in a broader culture of domestic abuse.

Granted, the US had other issues in the interim between Israel and Iraq, with widely resonant protests emerging from Ferguson, Missouri, after police officer Darren Wilson shot and killed Michael Brown on August 9. (For some sense of the event’s global reach, consider that Palestinians were sending online tips to American protestors in the wake of aggressive anti-demonstration tactics.) And while the racialized outcry underlying this situation should have come as no surprise to most, the “worst Ebola outbreak in history”—which reached Global Health Emergency status just days before Ferguson, and has continued to spread since—revealed a systemic battleground all its own: A vaccine exists. Canada has “donated” 800–1,000 doses. The WHO will “decide” how these are dispersed.

(And speaking of Canada, lest I be labelled one of those northerners who calls attention to US crises without looking inward: This summer’s news also brought into focus our own, systemic issues with the indigenous community — peoples at higher risk of lifelong poverty, incarceration, all manner of abuse and disease, going missing, and turning up dead.)

The above are by no means overtly malevolent facts—the WHO, for instance, is attempting “ring vaccination” while efforts to accelerate drug production proceed—but the systems of power these terms invoke do exemplify the very status quo of global disparity (in overall affluence, levels of health education, and resource mobility) that foments such virulent outbreaks in the first place. There is violence, in other words, even in systems that seem orderly and objective, and we cannot ever discount the role that language plays in reinforcing a deadly world.

With this in mind, I was struck this summer by both how much and how little attention the rhetoric of authority claims received amid this coverage. In the “much” column we have, for instance, the work of Gilad Lotan, who crunched an immense amount of social media data to identify the information silos in which followers of the Israeli-Gazan conflict tended to position themselves—each “side” receiving and reinforcing different news items through like-minded media outlets. We also have reflections like that of John Macpherson, who explored the professional tension between objectivity and emotion in photojournalism, and just recently, a poll of Ferguson’s citizens, which indicates an extreme racial divide among individuals tasked with interpreting the events of August 9.

But underlying these pieces is also a paucity of self-reflection: Lotan’s data sets would not be as impressive if the vast bulk of readers were not so starkly divided in their consumption of news media. Nor would the recent Ferguson poll pack quite the wallop without so many participants deciding definitively, incompatibly, and above all else culturally what happened between Darren Wilson and Michael Brown.

I should not be surprised, granted: The study of “how we know what we know” is a difficult one even when we raise up children with the vocabulary to consider such questions (whatever that vocabulary might be: a point to be revisited below)—and an almost Herculean task once we, as adults, have settled into a way of thinking that seems to serve us well.

Indeed, though I spent a great deal of time thinking abstractly about knowledge this summer, I also often found myself confronted by articles so provocative, I had to share them immediately, and to rail vehemently about some key point therein. Each time I indulged, though, one of two things happened: I either did research soon after that undermined the legitimacy of the original piece, or found myself too deeply affected on an emotional level to engage with earnest responses in any other register.

Knee-jerk reactions are, of course, common, and understandably so. In a practical, everyday sense, we almost by necessity draw conclusions that gesture towards deductive reasoning, while actually better resembling glorified gut instinct: The sun will come up because it always comes up. You just missed the bus because you always just miss the bus. That danged table leg caught your toe because it always catches your toe. And so the fuzzy thinking builds up.

Above all else, we acclimate to what is familiar. We grow comfortable in whatever strange, personal assumptions have long gone uncontested. Our propensity for confirmation bias then fills in the gaps: We know that an article retraction never fully dislodges belief in the original article. We know that narratives aligning with pre-existing beliefs will continue to be referenced even when shown to be untrue. We know that following an event in progress, acquiring facts as they unfold, invariably leads to high amounts of inaccurate knowledge difficult to dislodge.

So it is with human beings, who have jobs to go to, children to care for, relatives and friends to attend to, and the self to care for after hours. How might we even begin to overcome such shortcomings when the cadence of our lives often seems antithetical to deep reflection?

The dangerous answer has often been the unquestioned affectation of an orderly and objective system of thought. This differs from an actually objective system of thought—an unattainable ideal—in that, even if we can provide a full and logical progression from propositions A to B to C, the validity of these propositions will still inevitably be tied to their cultural context, from whence more of that messy, glorified gut instinct emerges.

As a doctoral student, I study science writing in the nineteenth century, so I have seen this flawed affectation play out throughout history. In particular, I can demonstrate how the rhetorical strategies of inference and analogy allow respected authors to leap from specific sets of empirical knowledge to broader, un-evidenced claims that just happen to match up with the authors’ pre-existing views on, say, “savage” cultures or female intellect. This is by no means a mark against empirical evidence; human beings are simply very good at making unjustified connections between data sets, especially when such connections can reaffirm what one already believes.

Similarly, in the Ferguson shooting and Gazan conflict this summer, “facts” quickly flew into the realm of inference, with terms like “witness” and “casualty” and “discrepancy” taking on markedly different characters depending on their source. Just as Michael Brown underwent three autopsies, so too has all manner of “hard” data in both cases been sifted through to exaggerated effect, and always with that human inclination towards finding a story that fits by any means, however loosely deductive.

In short, the danger of affected objectivity is that it cannot exist apart from the irrationality of those human beings applying it. Nonetheless, the “fair and balanced” approach to a given situation is often positioned as the only “reasonable” path to change or justice—a claim wholly disregarding that, for millions of human beings, the groundswell of personal experience, community anecdote, and emotional outpouring is the only “truth” that ever matters in the wake of violent world events.

When dealing with the rhetoric of authority claims, and their role in how we respond to violent events, our crisis thus cannot get much simpler than this: We need to recognize ourselves as emotional beings, with emotional prejudices derived from everyday experience as much as personal trauma, when engaging with narratives of violence—narratives, that is, which by their very nature tend to be emotionally charged.

This is not about ceding the role of logic in response to serious and dramatic world and local events. Nor is this about forsaking all available evidence and refusing ever to make safe assumptions about a given situation or issue. This is simply about recognizing ourselves as predisposed filters of wide swaths of competing information, and making an effort not to act as though we have a monopoly on truth in situations involving other human beings.

This may seem straightforward, but our patterns of news consumption this summer, as well as the activist strategies that emerged in response to a variety of issues, suggest we have a long way to go. While protests tend to arise from an urgent and legitimate place of outrage, an effective response to systemic abuses must not be based solely on popular outcry, or else it risks establishing a “rule of mob” in place of “rule of law”. Conversely, though, the rhetoric of “rule of law” and affectations of objectivity go hand-in-hand: If the former is not sufficient to address a given crisis, we have to take even greater care with the sort of authority claims we accept, lest they obscure any truly drastic changes needed to better our world.

Back in Business

It’s been a long slog of a summer, and my return from vacation left me with a surprise that will be no surprise to anyone in academia: The slog never ends!

Nevertheless, I’ve been holding off on much in the way of commentary while trying to focus on my doctoral studies, and so very much has suffered for it. A writer needs to breathe in many directions, and this myopic attention to academic detail has caused more in the way of anxiety than necessary. So! It’s time to get back to posting here, and writing and submitting fiction, and generally returning to the fray of real-world debate. I look forward to it all.

One quick note for the moment: I celebrated a wee publication in my relative absence from this blog. “The Last Lawsuit” was published in Bastion Magazine, a newer online venue of rising acclaim. This publication of mine is the last coming down the pipe at the moment, due to my incredibly poor submissions schedule these last few months, but it was a pleasure to work with the editor-in-chief, R. Leigh Hennig (and the art and other content was pretty nice, too!). Now it’s time to knuckle down, write, and submit anew.

I hope everyone reading this has had far better luck maintaining “balance” between all their interests this last while. Best wishes to you all!

At Immortality’s End: The Bone Clocks and the Trouble with Grand Narratives


The Bone Clocks
David Mitchell
Random House

Since completing my doctoral candidacy exams last week, I’ve been reading ravenously, for pleasure. First I romped through some classic Philip K. Dick, then delighted for complicated reasons in Mary Doria Russell’s The Sparrow, and began and finished David Mitchell’s The Bone Clocks on a very delayed set of flights to San Francisco, where I’m enjoying a week’s reprieve before returning to my studies and writing in earnest.

I debated reviewing Mitchell’s latest novel, a work operating in the wake of Cloud Atlas’s immense success, and thus aspiring to the same sort of grand, unified theory of human history with fantastical elements. There were parts of the work that especially weighed against my decision to review it, not least being the tediously rehashed convictions that men and women are essentially from other planets where love, sex, marriage, and the like are concerned. It may be true for some people, but I know plenty of human beings for whom that kind of trope is beyond worn out, and I wondered how much time I wanted to give to a novel that presents such views as universal truths even when it has characters gender-flipping through the centuries.

(I should probably add, as a caveat, that some of this banality certainly belongs to the characters–one of whom, a smug 21-year-old with access to wealth and an elite education, offers up euphemisms for his sexual conquests that I don’t doubt his character would think extremely witty, but which might easily earn Mitchell a Bad Sex in Fiction Award next year. There’s a difference between individual characters espousing dated views, though, and the author providing nothing but such views across character perspectives, across time, and throwing in some straw-people to further reinforce gendered caricatures.)

Ultimately, though, I wanted to reflect on one of the themes Mitchell is clearly wrestling with, because his work is, if nothing else, “ambitious”–a term usually given to a sprawling, dreaming text when the reviewer wants to avoid answering whether or not all that ambition pays off. You might have heard of this book as the one that starts off “normal” and then descends into vampirism; this, I would say, is a wilfully obtuse way of reading an attempt to construct a different kind of supernatural villain that preys on human beings. (Then again, when Dracula was first released, some reviews referred to the monster as a “werewolf” because the vampire archetype hadn’t stabilized yet, so perhaps it’s inevitable that some people will flatten the new and strange to fit existing moulds.)

This does touch, however, on a more important question I think is being asked of Mitchell’s work, which starts in 1985 and ends in 2043: Did this book need to shift into fantasy and “cli-fi” (speculative writing predicated on climate change anxieties) in order to wrestle with its central theme?

Immortality, as much of the book grapples with the subject, is a mundane crisis in most of its incarnations: We start out with 15-year-old Holly Sykes, waking in the radiant, urgent glow of a love she is convinced will last forever. This book spans her lifetime, but never again with as much verve and intimacy as the opening section, in which this conviction is swiftly quashed and consequences follow. The next section introduces Hugo Lamb, a callous young man who has a fleeting encounter with Holly before choosing a different path towards immortality. The section after that involves a war journalist whose desire to reclaim lost histories is a toss-up between trying to leave his mark in the world and the compulsive behaviour of an adrenalin junkie. Then we enter the most self-conscious, mid-life-crisis-y section of the book: a set of chapters written about one Crispin Hershey, an author struggling with everything from vicious to lacklustre responses to the work following his last, wildly successful novel.

After this, though, a plot that’s been mounting in the sidelines finally comes to the fore. To this end “Marinus”, one of the involuntary immortals who slip from lifetime to lifetime without doing any harm to their hosts, narrates a series of escalating stand-offs against the Anchorites, a secret order of human beings who prolong their lives and gain access to immense “psychosoteric” powers by sacrificing innocent people with active chakras. So you can see why this additional immortality crisis, entirely predicated on a fantastical set of events, might be a bit jarring for some: The anxieties and approaches to immortality addressed in prior sections lie at the heart of most literary fiction: How to live a good life. How to be a good person. How to make a difference. For people seeking answers to (or at least new reflections on) such questions, Mitchell’s choice to play out this theme in an otherworldly arena is unlikely to satisfy.

Ultimately, though, I sense that Mitchell knows this, because when we return to Holly Sykes, at a time when civilization as we know it is breaking down due to climate change, her part in preceding events has passed from immediate relevance to something akin to a dream. An adventure has happened, certainly–one among many–and the course of her life has exposed her to a wide range of engagements with attempts to live well. (And if you get more than a slight whiff of The Road in ensuing proceedings, you’re certainly not alone.) But to what end?

I’m struck, then, by The Bone Clocks on a few levels, because like Cloud Atlas it builds from everyday lives and reaches for a grander narrative of human existence–and like Cloud Atlas, I think it falls short because its grander narrative is inevitably reliant on the fantastic for continuity and closure.

Self-commiserating diversions aside, though, the book offers sections of very smart prose, and Mitchell goes to great lengths to build a global narrative (this mostly-British work also spanning locales like feudal Russia, Aboriginal Australia, and post-9/11 Iraq through flashbacks, secondary narratives, and exposition dumps)–so I don’t consider Mitchell’s book incompetent by any measure. Indeed, I enjoyed large swaths of it, but I was still left with two feelings at the end: One, that the ending does not satisfy; and two, that the ending cannot satisfy, because the push to immortalize ourselves–through children, through work, through love, through fame, through heroics–is always precarious, and perhaps just as always futile.

If Mitchell’s The Bone Clocks comprises a full range of human endeavours towards immortality, then, how can it be anything but a chronicle of failures, both personal and species-wide? The question I’m left with is thus not whether Mitchell’s “ambitious” book succeeded, whatever that might mean; I simply wonder what challenges such a thematic dead-end offers up to future writers. Do any of us know just how to break the mould?

Revolution Express: One Big Train with a Whole Mess of Semi-Allegorical Parts


Snowpiercer
Joon-Ho Bong
MoHo Films / Opus Pictures

One of the most amusing reviews I’ve read of Joon-Ho Bong’s first English-language film forwards the complaint that, for a movie set within a relentlessly speeding train, there sure are a lot of still camera shots in Snowpiercer. Since this train is the entire universe for the last survivors of humanity, after a failed attempt to halt global warming turned the world into a popsicle 17 years ago, such a grievance is equivalent to complaining about any film set on Earth that doesn’t constantly remind us that we’re hurtling at 108,000 km/h around the sun, and 792,000 km/h around the centre of the galaxy.

Though there are few poor reviews of Snowpiercer, those that do exist likewise call attention, in their fascinating nitpicking, to the tenuous tightrope Bong walks between realism and allegory, especially within the Western film tradition. Abrupt tonal and mythological shifts in this narrative–which would have been completely at home in South Korean, Chinese, or Japanese cinemas–here serve as a reminder of how literalist North American scifi/action films tend to be (Elysium, for instance, and The Purge, and pretty much any apocalyptic/post-apocalyptic film in recent years, with perhaps the exception of The Book of Eli).

Snowpiercer is in fact two stories: the story of one lower-class revolt on a stringently class-based train, and a metacommentary on the closed-system nature of stories about revolution in general. I’m not speaking figuratively, either: Bong makes that second story damned clear when he repeatedly emphasizes the existence of a sketch artist among the lower-class “tail section” denizens, whose work raises to hero and martyr status those involved in resistance efforts. In short, the story of the revolt is being created alongside the revolt itself–a fact a later character will also, more blatantly, observe when All Is Revealed.

In consequence, if you simply focus on all the details in this film they will surely drive you batty: how little sense various population numbers make in their contexts; why an opening title card lies to you; why anyone with backstory X should be repulsed by the truth of the tail-section’s food source; how this speedy train takes a whole year to make one trip over its entire route; and why a machine part that performs a fairly straightforward, automatic motion could not be replaced when so many other, luxury items seem to appear from nowhere.

There are two ways to respond to these inconsistencies. The first is the route taken by many a negative reviewer–to hold these and the initial presentation of stock archetypes against the director, as signs that this film is too ridiculous and/or two-dimensional to merit serious consideration. The second is to remember that the entire premise of this film is completely bonkers–a massive train filled with the last of humanity, which makes a circuit of the whole world at breakneck speed during apocalyptic fallout–and then to assume that the director is aware of this absurdity, and to start paying attention to other weird elements therein. These include, but are not limited to: the mystic sensibilities of Yona (Ah-sung Ko); the hyperbolic absurdity in certain cars (e.g. the New Year’s scene, and the classroom car); a synchronicity between the mood in a given first-class car and the concurrent mindset of our presumed hero, Curtis (Chris Evans); the fight scenes that immediately become legend; the seeming impossibility of killing off certain bad guys; and the staging of various human beings in thematic tableau (right down to colour contrasts) for the final stand-off.

Read together, Snowpiercer is very clearly meant as an allegory–and not just for class struggle in our world but also for how class struggle can itself be a tool of oppression. As an indictment of audience expectations, Bong’s latest also has something to say about how the conventional yearning for new leadership is in many ways no revolution at all. He does this, too, while having characters resist both Western and Eastern conventions: While Yona’s father (Kang-ho Song) tries to preserve her in the role of passive participant (following a men-act/women-dream motif that emerges in quite a bit of South Korean cinema), Yona breaks from this role in the third act, while Octavia Spencer, as Tanya, gets to be part of the revolt all throughout (instead of wringing her hands while strapping menfolk try to recover her child). There’s definitely something to be admired, too, about not treating a wildly improbable scifi/action premise as anything more than mythopoetic backdrop.

However, much as I disagree with the nitpicking of negative reviews for this film, and much as I see myself watching this film a second time down the line, I did find the quest template so mundane at times that the film never quite stabilizes on this higher, allegorical plateau. Yes, Snowpiercer shatters all its stock hero archetypes in the end, but for most of the film Curtis is still the young, reluctant leader of a revolution against Wilford (Ed Harris), the man who built and runs the train, while John Hurt plays a fairly standard Wise Old Mentor in Gilliam, and Jamie Bell, your standard young recruit to the cause (Edgar).

Moreover, one first-class enemy, Mason (Tilda Swinton), does not acquire full coherence as a character until a gesture during a provocative early speech is repeated near the film’s end; and even then, if you accept what’s implied about her character at that juncture, it means accepting yet another inconsistency–this one related to Curtis’s supposed uniqueness among all on board the train. (But if we don’t accept it, we’re left with just another two-dimensional villain, so it’s a tough call.) When we do get to the end, too, the relentless exposition dumps (another standard feature of quest narratives) are themselves played straight–which means that after almost two hours spent undermining the typical quest narrative’s structure, we’re left with a pretty boilerplate reversion to form for the close.

Maybe this is just the nature of the beast, though; maybe there really is no escaping the world–its most familiar narrative traditions–that train. What would it mean for us to step utterly outside all three? And what alternatives might be waiting for us if we did? For all that it might wobble on its tracks, Snowpiercer moves with distinct thematic purpose–even if its final destination seems an eternity from sight.

Excerpts from the Day’s Readings: Auguste Comte and Slippery Mental Frameworks

I’ve been sitting on a more reflective post, but the long slog of daily readings persists, along with a few other inanities of doctoral student life, so that essay will have to wait awhile. For now I’m just under a month from my last (I hope) major doctoral exam, and today’s readings are two books by Victorian thinkers (John Stuart Mill and G. H. Lewes) on the early philosophy of continental writer Auguste Comte, best known as the father of positivism, but also as a critical figure in the rise of humanism.

This last is an especially intriguing point for modern reflection. I’m not a fan of atheist “churches” or related assemblies, as forwarded by some non-religious people today, but I’m even less a fan of pretending that these ideas are anything new. Put simply, in his later life, after struggling with mental health concerns both within and outside institutions, divorcing in 1842 (a rare and serious act in the 19th century), and losing a close platonic friend in 1846, Comte drastically changed his views on positivism and its applications… and created his own, secular church.

Indeed, from the ideal of this dead friend, Clotilde de Vaux, as a moral paragon in her feminine virtue, came the “Religion of Humanity”–a ritualistic faith patterned after the Catholic Church, with Comte as the “high priest” (read: pope), and women and the working class primarily targeted for conversion therein. As Richard G. Olson notes in Science and Scientism in Nineteenth-Century Europe, this whole movement also prompted quite a few philosophical reversals for Comte. For instance: “Whereas in the Positive Philosophy Comte had complained that sociology had heretofore been crippled by the failure to subordinate imagination to reason, in the Positive Polity he touted the importance of imagination and claimed that ‘Positivism is eminently calculated to call the imaginative faculties into exercise.'”

This is the sort of stark mental shift one must expect as plausible in all human beings–even (or perhaps especially) those who significantly contribute to a number of fields. And sure enough, Comte made a significant impact on both the philosophy of science and the development of the social sciences. To this end, he outlined three stages of societal development towards the apprehension of truths: the “theological”, which has three phases of interpreting active, conscious wills at work in the world; the “metaphysical”, which involves abstract, idealized concepts that nature moves toward without the need for conscious wills; and finally the “positive”, in which one recognizes the critical role of observation and positive verification of hypotheses in the physical sciences before the benefits of empiricism can be turned to the study of human society. From this framework emerges a coherent rationale for valuing experimental findings when seeking to describe the world: in short, a 19th-century advancement of the scientific method.

It bears noting, too, that Comte’s negotiation of both the physical sciences and the social sciences held serious philosophical weight at the time. This was an era when philosophical doctrines like utilitarianism, which advocates as morally correct whatever action maximizes the overall good, were crudely applied to sociopolitical theory on the back of ill-gotten and ill-used “facts” about the world. As I mentioned in my post about John Kidd, for instance, there were certainly men of prominence with skewed notions of what the overall “good” looked like: Leaning on natural theology, Kidd especially argued that Britain’s social welfare system took away the opportunity for the poor to practise (Christian) humility, and the rich to practise (Christian) charity, with members of all classes resenting each other in consequence.

Nor did such manipulations of worldly “knowledge” escape public notice: Dickens railed against a caricature of utilitarianism in Hard Times (1854), arguing that actions taken from a place of pure reason could produce nothing but social and individual misery. While his caricature lacked philosophical finesse, it was not far off the mark from how major leaders of industry and government officials were actively distorting such ideas to their own economic advantage. Though originally from the continent, Comte’s work–first translated into English by Harriet Martineau in 1853, but widely known in England prior–thus offered a more coherent and widely-accessible method of making inquiries into the state and needs of the world. As Martineau writes in her preface to The Positive Philosophy of Auguste Comte (1853):

“My strongest inducement to this enterprise [of translation] was my deep conviction of our need of this book in my own country, in a form which renders it accessible to the largest number of intelligent readers. We are living in a remarkable time, when the conflict of opinions renders a firm foundation of knowledge indispensable, not only to our intellectual, moral, and social progress, but to our holding such ground as we have gained from former ages. While our science is split up into arbitrary divisions; while abstract and concrete science are confounded together, and even mixed up with their application to the arts, and with natural history; and while the researches of the scientific world are presented as mere accretions to a heterogeneous mass of facts, there can be no hope of a scientific progress which shall satisfy and benefit those large classes of students whose business it is, not to explore, but to receive. The growth of a scientific taste among the working classes of this country is one of the most striking of the signs of the times. I believe no one can inquire into the mode of life of young men of the middle and operative classes without being struck with the desire that is shown, and the sacrifices that are made, to obtain the means of scientific study. That such a disposition should be baffled … by the desultory character of scientific exposition in England, while such a work as Comte’s was in existence, was not to be borne, if a year or two of humble toil could help, more or less, to supply the need.”

In short: Martineau’s translation of Comte’s work offered a philosophical foundation for empirical inquiry that would allow a wider range of persons to evaluate any “facts” put before them about how the world should be, and why, on the basis of how the natural world currently is and the natural laws it summarily follows.

In his later evaluation of Comte’s work, Mill takes particular care to negotiate the metaphoric landscapes that don’t translate well (a word in French, for instance, having a different cultural history than even its closest approximation in English), but he also notes that Comte’s work addresses how huge paradigm shifts change an entire culture’s consciousness–and how readers in any climate would do well to take similar care not to repeat their predecessors’ ideological errors. In relation to Comte’s second stage, for instance, Mill writes:

In repudiating metaphysics, M. Comte did not interdict himself from analyzing or criticising any of the abstract conceptions of the mind. … What he condemned was the habit of conceiving these mental abstractions as real entities, which could exert power, produce phaenomena, and the enunciation of which could be regarded as a theory or explanation of facts. Men of the present day with difficulty believe that so absurd a notion was ever really entertained, so repugnant is it to the mental habits formed by long and assiduous cultivation of the positive sciences. But those sciences, however widely cultivated, have never formed the basis of intellectual education in any society. It is with philosophy as with religion: men marvel at the absurdity of other people’s tenets, while exactly parallel absurdities remain in their own, and the same man is unaffectedly astonished that words can be mistaken for things, who is treating other words as if they were things every time he opens his mouth to discuss. No one, unless entirely ignorant of the history of thought, will deny that the mistaking of abstractions for realities pervaded speculation all through antiquity and the middle ages. The mistake was generalized and systematized in the famous Ideas of Plato. The Aristotelians carried it on. Essences, quiddities, virtues residing in things, were accepted as a bona fide explanation of phaenomena. Not only abstract qualities, but the concrete names of genera and species, were mistaken for objective existences. … To modern philosophers these fictions are merely the abstract names of the classes of phaenomena which correspond to them; and it is one of the puzzles of philosophy, how mankind, after inventing a set of mere names to keep together certain combinations of ideas or images, could have so far forgotten their own act as to invest these creations of their will with objective reality, and mistake the name of a phaenomenon for its efficient cause.

Mill goes on to point out that this is precisely the point of Comte’s three stages–this metaphysical fixation on abstracts-as-absolutes being an intermediate phase in humanity’s approach to understanding the world, somewhere between using words to invoke notions of a divine will and “the gradual disembodiment of [such] a Fetish”, whereafter words are simply used to identify phenomena with consistent and natural causes that can be understood through empirical inquiry.

In all the works I’m reading today–referencing Martineau’s 1853 translation and Olson’s modern literary criticism while delving into Mill’s and Lewes’ 19th-century British revisions of Comte’s doctrine of positivism–the overwhelming theme is thus one of mental frameworks in flux. On an individual level, we see this in both Comte’s personal life and the argumentative double-standards that persist in all eras. Likewise, on a societal level, massive paradigm shifts mark the whole of our written record, while the impact of a given philosophy even within a specific period is by no means culturally stable.

To my mind, a doctoral programme in the humanities tasks its students to live in a similar state of flux: capable of holding a wide range of competing histories and ideas in unrelenting tension. The trick is that, both throughout and at the end of this process, I also need to be able to synthesize these tensions concretely for a wide range of audiences. I haven’t yet mastered this last part, but… I’m getting there, I hope. One bloody book at a time!

Cheers and best wishes to you all.

Conversation Enders: The Problem with Hero-Worship

Working part-time at a local bookstore is a great reprieve from the isolation of my studies. Just as I get to know many customers’ personal lives, so too have many of them learned that I’m a doctoral student working towards her (hopefully) last major proficiency exam. When they ask me what I’m reading that day, I therefore have an opportunity to frame my studies as something useful for a general audience–and sometimes this effort goes well, but at other times the real learning experience is my own.

Two weeks ago, the book of the day was Charles Darwin’s The Descent of Man (1871), a work I’d only read excerpts from in the past. When a customer asked about its relevance, I explained that this was the work in which Darwin–ever tentative about rocking the boat with his research–made explicit that human beings were subject to his theory of evolution by natural selection, too. This book caused tremendous controversy for precisely that reason, though Darwin had gone to great lengths to defer his comments on human evolutionary behaviours until after extensive (and I mean extensive) review of the physiognomy, general behaviour, and mating pressures among various species of molluscs, fish, insects, birds, quadrupeds, and other primates.

Darwin received considerable criticism and ridicule for The Descent of Man (1871), which, by openly integrating human development into the theory of evolution by natural selection, solidified the ideological “threat” first intimated in On the Origin of Species (1859).

But The Descent of Man has cultural significance in another capacity, too, so my synopsis for the customer included that this was also the text in which Darwin, every bit a person of his time, corrals his extensive field research on other species to make sweeping comments about the mental inferiority of women, to say nothing about the general inferiority of non-white persons. For instance:

“The chief distinction in the intellectual powers of the two sexes is shewn by man’s attaining to a higher eminence, in whatever he takes up, than can woman—whether requiring deep thought, reason, or imagination, or merely the use of the senses and hands. If two lists were made of the most eminent men and women in poetry, painting, sculpture, music (inclusive both of composition and performance), history, science, and philosophy, with half-a-dozen names under each subject, the two lists would not bear comparison. We may also infer, from the law of the deviation from averages, so well illustrated by Mr. Galton, in his work on ‘Hereditary Genius,’ that if men are capable of a decided pre-eminence over women in many subjects, the average of mental power in man must be above that of woman.”

“It seems at first sight a monstrous supposition that the jet-blackness of the negro should have been gained through sexual selection; but this view is supported by various analogies, and we know that negroes admire their own colour. With mammals, when the sexes differ in colour, the male is often black or much darker than the female; and it depends merely on the form of inheritance whether this or any other tint is transmitted to both sexes or to one alone. The resemblance to a negro in miniature of Pithecia satanas with his jet black skin, white rolling eyeballs, and hair parted on the top of the head, is almost ludicrous.”

I wouldn’t call it “enjoyable” to read such assertions–to encounter work after work (especially ones written from a position of authority, be it scientific, religious, or political) making such petty, ignorant comments at the expense of other human beings–but as a student of literary history, I find neither of these to be shocking or exceptional prejudices. They hurt, granted, but they hurt in largest part because they attest to much broader histories of exclusion and oppression. I do tend to forget, however, that many others have a different relationship with persons of note: a relationship that tends to cushion the individual from their context whenever we like something that individual did. And indeed, the customer who’d first asked about my reading was deeply troubled by my summary. “Darwin said that?” he said. “Darwin believed that?”

I tried to emphasize that Darwin’s comments did not erase his many positive contributions, but the damage was done. To try to offset these uglier aspects of Darwin’s biography, I then blundered further, by pointing out that even prominent early-20th-century suffragists, women who made great strides towards gender equality under the law, still advocated (as a great many did at the time) for eugenics policies–but this only saddened the customer further.

Now, by no means do I consider this customer’s reaction unique, but it was affecting, and I am more familiar with the other side of this flawed argument: people, that is, who will dismiss any significant contribution by a prominent individual because of some perceived failing elsewhere in their biography.

Last year, for instance, while studying for my first major exam, I made the mistake of marvelling at an historical echo: comparing, that is, John Stuart Mill’s succinct moral argument against Christianity (as found in his 1873 Autobiography, describing his childhood move from religion) with the equally succinct moral argument against Christianity used by Christopher Hitchens in more recent debate. Both regarded the notion of vicarious redemption through Christ as morally bankrupt, so the only real difference was that Hitchens could add, through a conservative estimate of the age of our species provided by modern anthropology, the absurdity of believing that a loving god watched “with folded arms” for some 95,000 years before acting to redeem the species, and even then only through barbaric sacrificial rites.

My fundamental point entailed how little had changed in these arguments–how vicarious redemption was an affront to young Mill in the early 19th century just as it was to seasoned Hitchens in the early 21st century–but my colleague interjected by shifting the conversation. This person was incredulous that I would invoke Hitchens at all, with his foreign policy views being what they were–and didn’t I know what kind of uncomfortably antiquated views he once shared about working women and motherhood?

My customer’s implicit tethering of historical significance to modern moral character, as well as my colleague’s dismissal of an argument on the basis of the speaker’s other beliefs, both rely on a fallacious connection between a person’s assertions in a given field and that person’s actions in another. This isn’t to say that there is never transference between spheres (for instance, a researcher does not lose their knack for researching just by changing the topic of their research), but the existence of such transference still needs to be demonstrated in its own right. (So to carry forward the analogy, if a researcher who’s demonstrated excellence in one field comes out with a book involving another field, but that work lacks proper citation for all major claims therein, we would be safe in assuming that an adequate transfer of pre-existing research skills to new topics had not been demonstrated.)

These troubles of course resonate with that well-known philosophical fallacy, argumentum ad hominem (argument [in reference] to the man [doing the arguing]). But to invoke this fallacy on its own is, I think, to overlook the bigger picture: the powerfully human frustration many of us share with the acts of hero-worship we as individuals and as communities reinforce every day.

One of my favourite examples of this tension lies with Paracelsus, the 16th-century physician who railed against the practice of accepting the truth of a given medical claim based on the prestige of its original author. Instead, he argued that the human body had its own store of healing power, that diseases could be identified by predictable sets of symptoms, and that personal experimentation was thus to be preferred to taking the word of someone, say, in fancy dress, boasting cures made of exotic ingredients, who had simply studied the words of ancient healers in selective institutions of learning.

But as Paracelsus became popular for his resistance to classist medical practices (since the mystification and centralizing of medical “knowledge” only really served the interests of gentleman practitioners), his own ego, in conjunction with an eagerness among many others to defer to perceived authority, meant that, even as he championed self-knowledge, Paracelsus was also quick to declare himself a monarch of medical practice, and so to gain followers in turn.

While Paracelsus’ birth name, P. A. T. Bombast von Hohenheim, is not actually the source of the term “bombastic”, Paracelsus itself means “beyond Celsus” (the Roman physician). Despite Paracelsus’ motto (alterius non sit qui suus esse potest: let no man be another’s who can be his [own instead]), such self-aggrandizement gained Paracelsus many devotees well after his death.

In essence: Whenever it garners popularity, even resistance to groupthink can generate a sort of groupthink of its own.

The 19th century played its role in glorifying this human tendency, too. Thomas Carlyle’s “Great Man” theory of history–a way of constructing cultural mythology that fixates on narratives of individual virtue and genius–still pervades our thinking so thoroughly that we tend to pluck our “heroes” from their historical and cultural contexts, or otherwise strip them of the fullness of their humanity, in order to exalt specific contributions they might have made. The potential for error here is twofold: 1) in treating any human being as perfect, or approaching perfection, due to the significance of their words and actions; and 2) in condemning entirely the work of any person who, once exalted, is thereafter found to be (shockingly) an imperfect human being.

But therein lies the difficult catch: What if someone else–or a whole community of someone-elses–has already committed the first error? What if you’re born into a culture that already exalts certain human beings as essentially without fault, either by claiming them to be virtuous directly or by downplaying all the problematic aspects of their life stories?

How can we counteract the effect of this first error, save by risking the second?

This is no idle, ivory-tower conundrum, either: Whenever we uphold the merit of an argument through the presumed impeccability of its speaker’s character, we leave ourselves open to losing that argument the first time its speaker’s character ceases to be impeccable. And yet, we cannot allow people to remain in positions of authority whose “imperfections” perpetuate serious social harm, either through word or through act. So what option remains?

More history seems to me the only answer: The more we understand and accept the fallibility of all our most notable figures, the more we can dismantle routines of hero-worship before they ever get so extreme as to require the fallacious distraction of character assassination in the first place.

Now, obviously this kind of work runs at odds with many spiritual beliefs: beliefs in living representatives of a god on earth; beliefs in a human being who is also a god; and beliefs in human beings who claim to have transcended to another plane of existence, be it through yoga, meditation, or drugs. But even most people who would consider themselves spiritual can appreciate the danger of charismatic leader-figures–the present-day godhead of Kim Jong-Un; the Stalins and Pol-Pots and Hitlers of history; the Mansons and the Joneses of smaller, still devastating cults. So there is some common ground from which to begin this conversation-shifting work.

What we now need to put on offer, as a culture, is a way of valuing significant social contributions unto themselves. When we separate those contributions from the maintenance of individual reputations, we only further benefit society by making the process of refining those contributions easier down the line. Likewise, we need to acknowledge figures of note in the most dignified way possible: by not erasing their personhood in the process. When we allow even those who contribute significantly to their communities to continue to be seen as human beings, and therefore ever-in-process, we make the path to positive social contribution seem less unattainable (and hazardous) for others.

Granted, hero-worship is an understandable cultural norm. Many of us want to be inspired by the work of human beings who’ve come before us, and want to imagine ourselves as the potential site of inspiration for others in turn. But whether our hero-worship is fixed on a record-breaking athlete, or a soldier awarded for valour, or a scientist who made a significant breakthrough that will save thousands of lives, or an activist who stood up to oppression in a way that rallied others to their cause, or a community organizer or family member who, in their own, lesser-known way made a terrific impact on our quality of life… hero-worship still sets an untenably high standard for us all.

When that athlete emerges as a perpetrator of rape, or that soldier is found to have tortured prisoners during their tour of duty, or that scientist to have plagiarized prior work, or that activist to have resorted to brutal acts against civilians in their resistance efforts, or that community organizer or family member to have molested children, we are all rightfully devastated. And yet, even then, we tend to get defensive, and our knee-jerk response is often to make excuses for the individual–as if histories of significant action can ever be reduced to stark lists of pros and cons. No, X hours of community service do not excuse the predation of Y children; and no, X impressive rescue missions do not entitle anyone to Y assaults on inmates.

But if we really want to nip such heinous rationalizations in the bud, what we need is a better social narrative for human contributions in general. Here, then, are a few suggestions as to actions we can all take to deflate the culture of hero-worship that muddies the waters of so many critical conversations. If you have others, I welcome their addition in the comments:

1) Practise making biographical assertions without using the rhetoric of relativism, even (or especially) when those biographical notes are ugly. For instance: (a) David Hume held deeply racist views about non-white persons. (b) David Hume’s racist views, and his expression of them in his writings, were commonly accepted in his culture. (c) David Hume’s writings include significant contributions to the history of philosophy. Not “BUT these views were commonly accepted” and not “BUT David Hume’s writings include”. Ask yourself, too, why such rationalizations seemed relevant in the first place.

2) Do not deny your revulsion at the destructive words and actions of your fellow human beings–not even those who have long since passed on. Do ask yourself what destructive behaviours future humans might be equally repulsed by among people of our day and age. How much do our words and actions really differ from those of past figures of note? What is the most effective way to forward a given conversation without recapitulating their errors?

3) If spiritual, put aside notions of divine inspiration when assessing the conduct and argumentation of religious leaders and historical icons. Is their conduct and argumentation impeccable (that is, distinct from the flaws we see in other human beings)? If not, ask yourself what benefit is derived from shielding these flaws under notions of divine sanction. And what are the risks?

4) If not spiritual, consider a prominent figure you find yourself defending the most in conversation. Are you defending the validity of the person’s arguments, or the person’s character (with the implication that by defending the person’s character you’re still defending the legitimacy of their arguments)? If the latter, why, and to what end? How does this forward meaningful discourse?

Hero-worship starts early, and our media culture is exceptionally good at building people past and present up to untenable standards of excellence. Once there, we often defend the reputations of these “Great People” so zealously that we limit our ability to build upon their greatest contributions, or else bind their characters and their contributions so tightly together that when the former falls, so too, in the public eye, does the relevance of the latter.

If any single, pithy adage could thus sum up the quality of discourse possible in such a culture, it might read: “Great minds discuss ideas; average minds discuss events; small minds discuss people.” Eleanor Roosevelt’s name is most often associated with this assertion, but it wouldn’t matter one whit to the quality of this statement if someone else had said it first.

…Which is a relief, because the saying has a far older, most likely anonymous provenance. So without denying the many difficult and outright ugly histories that surround our achievements, I have to ask: How many of our best works might be easier to build upon or amend if we could just get past the celebrity-status (for better or worse) of any human beings involved therein?