Conversation Enders: The Problem with Hero-Worship

Working part-time at a local bookstore is a great reprieve from the isolation of my studies. Just as I get to know many customers’ personal lives, so too have many of them learned that I’m a doctoral student working towards her (hopefully) last major proficiency exam. When they ask me what I’m reading that day, I therefore have an opportunity to frame my studies as something useful for a general audience–and sometimes this effort goes well, but at other times the real learning experience is my own.

Two weeks ago, the book of the day was Charles Darwin’s The Descent of Man (1871), a work I’d only read excerpts from in the past. When a customer asked about its relevance, I explained that this was the work in which Darwin–ever tentative about rocking the boat with his research–made explicit that human beings were subject to his theory of evolution by natural selection, too. This book caused tremendous controversy for precisely that reason, though Darwin had gone to great lengths to withhold his comments on human evolutionary behaviours until after extensive (and I mean extensive) review of the physiognomy, general behaviour, and mating pressures among various species of molluscs, fish, insects, birds, quadrupeds, and other primate species.

Darwin received considerable criticism and ridicule for The Descent of Man (1871), which solidified the ideological “threat” first intimated in On the Origin of Species (1859), by openly integrating human development into the theory of evolution by natural selection.

But The Descent of Man has cultural significance in another capacity, too, so my synopsis for the customer included that this was also the text in which Darwin, every bit a person of his time, corrals his extensive field research on other species to make sweeping comments about the mental inferiority of women, to say nothing of the general inferiority of non-white persons. For instance:

“The chief distinction in the intellectual powers of the two sexes is shewn by man’s attaining to a higher eminence, in whatever he takes up, than can woman—whether requiring deep thought, reason, or imagination, or merely the use of the senses and hands. If two lists were made of the most eminent men and women in poetry, painting, sculpture, music (inclusive both of composition and performance), history, science, and philosophy, with half-a-dozen names under each subject, the two lists would not bear comparison. We may also infer, from the law of the deviation from averages, so well illustrated by Mr. Galton, in his work on ‘Hereditary Genius,’ that if men are capable of a decided pre-eminence over women in many subjects, the average of mental power in man must be above that of woman.”

“It seems at first sight a monstrous supposition that the jet-blackness of the negro should have been gained through sexual selection; but this view is supported by various analogies, and we know that negroes admire their own colour. With mammals, when the sexes differ in colour, the male is often black or much darker than the female; and it depends merely on the form of inheritance whether this or any other tint is transmitted to both sexes or to one alone. The resemblance to a negro in miniature of Pithecia satanas with his jet black skin, white rolling eyeballs, and hair parted on the top of the head, is almost ludicrous.”

I wouldn’t call it “enjoyable” to read such assertions–to encounter work after work (especially ones written from a position of authority, be it scientific, religious, or political) making such petty, ignorant comments at the expense of other human beings–but as a student of literary history, I find neither of these to be shocking or exceptional prejudices. They hurt, granted, but they hurt in large part because they attest to much broader histories of exclusion and oppression. I do tend to forget, however, that many others have a different relationship with persons of note: a relationship that tends to cushion the individual from their context whenever we like something that individual did. And indeed, the customer who’d first asked about my reading was deeply troubled by my summary. “Darwin said that?” he said. “Darwin believed that?”

I tried to emphasize that Darwin’s comments did not erase his many positive contributions, but the damage was done. To try to offset these uglier aspects of Darwin’s biography, I then blundered further, by pointing out that even prominent early-20th-century suffragists, women who made great strides towards gender equality under the law, still advocated (as a great many did at the time) for eugenics policies–but this only saddened the customer further.

Now, by no means do I consider this customer’s reaction unique, but it was affecting, and I am more familiar with the other side of this flawed argument: people, that is, who will dismiss any significant contribution by a prominent individual because of some perceived failing elsewhere in their biography.

Last year, for instance, while studying for my first major exam, I made the mistake of marvelling at an historical echo: comparing, that is, John Stuart Mill’s succinct moral argument against Christianity (as found in his 1873 Autobiography, describing his childhood move from religion) with the equally succinct moral argument against Christianity used by Christopher Hitchens in more recent debate. Both regarded the notion of vicarious redemption through Christ as morally bankrupt, so the only real difference was that Hitchens, drawing on a conservative estimate of the age of our species from modern anthropology, could add the absurdity of believing that a loving god watched “with folded arms” for some 95,000 years before acting to redeem the species, and even then only through barbaric sacrificial rites.

My fundamental point was how little had changed in these arguments–how vicarious redemption was an affront to young Mill in the early 19th century just as it was to seasoned Hitchens in the early 21st century–but my colleague interjected by shifting the conversation. This person was incredulous that I would invoke Hitchens at all, with his foreign policy views being what they were–and didn’t I know what kind of uncomfortably antiquated views he once shared about working women and motherhood?

My customer’s implicit tethering of historical significance to modern moral character, as well as my colleague’s dismissal of an argument on the basis of the speaker’s other beliefs, both rely on a fallacious connection between a person’s assertions in a given field, and that person’s actions in another. This isn’t to say that there is never transference between spheres (for instance, a researcher does not lose their knack for researching just by changing the topic of their research) but the existence of such transference still needs to be demonstrated unto itself. (So to carry forward the analogy, if a researcher who’s demonstrated excellence in one field comes out with a book involving another field, but that work lacks proper citation for all major claims therein, we would be safe in assuming that an adequate transfer of pre-existing research skills to new topics had not been demonstrated.)

These troubles of course resonate with that well-known philosophical fallacy, argumentum ad hominem (argument [in reference] to the man [doing the arguing]). But to invoke this fallacy on its own is, I think, to overlook the bigger picture: the powerfully human frustration many of us share with the acts of hero-worship we as individuals and as communities reinforce every day.

One of my favourite examples of this tension lies with Paracelsus, the 16th-century physician who railed against the practice of accepting the truth of a given medical claim based on the prestige of its original author. Instead, he argued that the human body had its own store of healing power, that diseases could be identified by predictable sets of symptoms, and that personal experimentation was thus to be preferred to taking the word of someone, say, in fancy dress, boasting cures made of exotic ingredients, who had simply studied the words of ancient healers in selective institutions of learning.

But as Paracelsus became popular for his resistance to classist medical practices (since the mystification and centralizing of medical “knowledge” only really served the interests of gentleman practitioners), his own ego, in conjunction with an eagerness among many others to defer to perceived authority, meant that, even as he championed self-knowledge, Paracelsus was also quick to declare himself a monarch of medical practice, and so to gain followers in turn.

While Paracelsus’ birth name, P. A. T. Bombast von Hohenheim, is not actually the source of the term “bombastic”, Paracelsus itself means “beyond Celsus” (the Roman physician). Despite Paracelsus’ motto–alterius non sit qui suus esse potest: let no man be another’s who can be his [own instead]–such self-aggrandizement gained Paracelsus many devotees well after his death.

In essence: Whenever it garners popularity, even resistance to groupthink can generate a sort of groupthink of its own.

The 19th century played its role in glorifying this human tendency, too. Thomas Carlyle’s “Great Man” theory of history–a way of constructing cultural mythology that fixates on narratives of individual virtue and genius–still pervades our thinking so thoroughly that we tend to pluck our “heroes” from their historical and cultural contexts, or otherwise strip them of the fullness of their humanity, in order to exalt specific contributions they might have made. The potential for error here is twofold: 1) in treating any human being as perfect, or approaching perfection, due to the significance of their words and actions; and 2) in condemning entirely the work of any person who, once exalted, is thereafter found to be (shockingly) an imperfect human being.

But therein lies the difficult catch: What if someone else–or a whole community of someone-elses–has already committed the first error? What if you’re born into a culture that already exalts certain human beings as essentially without fault, either by claiming them to be virtuous directly or by downplaying all the problematic aspects of their life stories?

How can we counteract the effect of this first error, save by risking the second?

This is no idle, ivory-tower conundrum, either: Whenever we uphold the merit of an argument through the presumed impeccability of its speaker’s character, we leave ourselves open to losing that argument the first time its speaker’s character ceases to be impeccable. And yet, neither can we allow people whose “imperfections” perpetuate serious social harm, whether through word or through act, to remain in positions of authority. So what option remains?

More history seems to me the only answer: The more we understand and accept the fallibility of all our most notable figures, the more we can dismantle routines of hero-worship before they ever get so extreme as to require the fallacious distraction of character assassination in the first place.

Now, obviously this kind of work runs at odds with many spiritual beliefs: beliefs in living representatives of a god on earth; beliefs in a human being who is also a god; and beliefs in human beings who claim to have transcended to another plane of existence, be it through yoga, meditation, or drugs. But even most people who would consider themselves spiritual can appreciate the danger of charismatic leader-figures–the present-day godhead of Kim Jong-un; the Stalins and Pol Pots and Hitlers of history; the Mansons and the Joneses of smaller, still devastating cults. So there is some common ground from which to begin this conversation-shifting work.

What we now need to put on offer, as a culture, is a way of valuing significant social contributions unto themselves. When we separate those contributions from the maintenance of individual reputations, we only further benefit society by making the process of refining those contributions easier down the line. Likewise, we need to acknowledge figures of note in the most dignified way possible: by not erasing their personhood in the process. When we allow even those who contribute significantly to their communities to continue to be seen as human beings, and therefore ever-in-process, we make the path to positive social contribution seem less unattainable (and hazardous) for others.

Granted, hero-worship is an understandable cultural norm. Many of us want to be inspired by the work of human beings who’ve come before us, and want to imagine ourselves as the potential site of inspiration for others in turn. But whether our hero-worship is fixed on a record-breaking athlete, or a soldier decorated for valour, or a scientist who made a significant breakthrough that will save thousands of lives, or an activist who stood up to oppression in a way that rallied others to their cause, or a community organizer or family member who, in their own lesser-known way, made a terrific impact on our quality of life… hero-worship still sets an untenably high standard for us all.

When that athlete emerges as a perpetrator of rape, or that soldier is found to have tortured prisoners during their tour of duty, or that scientist to have plagiarized prior work, or that activist to have resorted to brutal acts against civilians in their resistance efforts, or that community organizer or family member to have molested children, we are all rightfully devastated. And yet, even then, we tend to get defensive, and our knee-jerk response is often to make excuses for the individual–as if histories of significant action can ever be reduced to stark lists of pros and cons. No, X hours of community service do not excuse the predation of Y children; and no, X impressive rescue missions do not entitle anyone to Y assaults on inmates.

But if we really want to nip such heinous rationalizations in the bud, what we need is a better social narrative for human contributions in general. Here, then, are a few suggestions as to actions we can all take to deflate the culture of hero-worship that muddies the waters of so many critical conversations. If you have others, I welcome their addition in the comments:

1) Practise making biographical assertions without using the rhetoric of relativism, even (or especially) when those biographical notes are ugly. For instance: a) David Hume held deeply racist views about non-white persons. b) David Hume’s racist views, and his expression of them in his writings, were commonly accepted in his culture. c) David Hume’s writings include significant contributions to the history of philosophy. Not “BUT these views were commonly accepted” and not “BUT David Hume’s writings include”. Ask yourself, too, why such rationalizations seemed relevant in the first place.

2) Do not deny your revulsion at the destructive words and actions of your fellow human beings–not even those who have long since passed on. Do ask yourself what destructive behaviours future humans might be equally repulsed by among people of our day and age. How much do our words and actions really differ from those of past figures of note? What is the most effective way to forward a given conversation without recapitulating their errors?

3) If spiritual, put aside notions of divine inspiration when assessing the conduct and argumentation of religious leaders and historical icons. Are their conduct and argumentation impeccable (that is, distinct from the flaws we see in other human beings)? If not, ask yourself what benefit is derived from shielding these flaws under notions of divine sanction. And what are the risks?

4) If not spiritual, consider a prominent figure you find yourself defending the most in conversation. Are you defending the validity of the person’s arguments, or the person’s character (with the implication that by defending the person’s character you’re still defending the legitimacy of their arguments)? If the latter, why, and to what end? How does this forward meaningful discourse?

Hero-worship starts early, and our media culture is exceptionally good at building people past and present up to untenable standards of excellence. Once there, we often defend the reputations of these “Great People” so zealously that we limit our ability to build upon their greatest contributions, or else bind their characters and their contributions so tightly together that when the former falls, so too, in the public eye, does the relevance of the latter.

If any single, pithy adage could thus sum up the quality of discourse possible in such a culture, it might read: “Great minds discuss ideas; average minds discuss events; small minds discuss people.” Eleanor Roosevelt’s name is most often associated with this assertion, but it wouldn’t matter one whit to the quality of this statement if someone else had said it first.

…Which is a relief, because the saying has a far older, most likely anonymous provenance. So without denying the many difficult and outright ugly histories that surround our achievements, I have to ask: How many of our best works might be easier to build upon or amend if we could just get past the celebrity-status (for better or worse) of any human beings therein involved?

On Being Human: Johansson in a Tale of Ash and Mist

Under the Skin
Jonathan Glazer
Mongrel Media / Film4 & BFI

Our fear of the unknown forms the emotional basis for many films, and our fear of the unknown in ourselves, even more. Rare is the film, though, that explores both without offering facile resolutions, and that makes constrained and intelligent use of all filmic elements to uphold its central themes.

Under the Skin lives up to its title–getting under its viewers’ skins in many ways–by being just such a rarity. The concept is fairly straightforward: Scarlett Johansson plays an alien in a human skin, whose central task seems to be the seduction of adult men, who are thereafter ensnared in an ultimately fatal process of absorption. But nothing about this situation fits tidily with notions of what it means to be, and to not be, a human being.

For one, our protagonist is no 1950s femme fatale, delighting in the ruination of men, but rather a methodical worker (under the watch of mysterious non-human men on motorcycles) whose human mannerisms are at first only present so long as a potential target is in sight. And even the choice of target proves unsettling in its reversal of cultural stereotypes; from within her creepy white van we’re made to view isolated men of all ages as potential victims–even though, at first, we’re led to believe that our alien never takes anyone by force, and though she certainly never shows superhuman strength.

Moreover, while our alien hones her skill at this task, her indifference to everything else around her is surely meant to provoke the audience, to prompt viewers to plead with her: Be more human. Please, please, please, be more human. But even the most wrenching acts of indifference are turned, in the end, against us and our supposed humanity. In the second act, for instance, our alien picks up an isolated man and turns her now well-honed charm on him–only this time the man (played by Adam Pearson) has a severe facial deformity (neurofibromatosis), which changes entirely the significance of being treated as an object of desire by a more normatively beautiful human being.

Any viewer at this point should rightly feel uncomfortable about the tension at work on screen: If our alien treated him with the same revulsion over his external appearance that other human beings do, he wouldn’t be at risk of the same fate at her hands. But in treating him as though his external appearance should have no bearing on his fundamental worth, our alien performs a level of human equality actual human beings routinely struggle to approach.

The film revisits this tension between the tender and the brutal in our “humanity” at later, even more critical junctures, but this tension would not be possible if two things were not true throughout: For one, alien though our protagonist is, she can and does experience her environment–its sights and sounds and multitude of idiosyncrasies. Without this capacity to be impacted by her time among us, the plot could not significantly advance. And yet (for another), even as our alien shares with us this one certain trait, we have to be confronted time and again with opportunities for her to act more human in consequence (and for us to hope she acts more human in consequence) and to have each and every opportunity snatched from us in the end.

To achieve and maintain this relentless tension, Under the Skin needed to be both aesthetically striking and subtle–and it was. At one juncture, when our alien is driving through the streets of Scotland, the soundtrack takes on the breathy character of someone in a spacesuit–because, in a way, what’s mundane to us is space exploration for her. Intimations of alien physiology are likewise dealt with in delicate asides–through passing comments about heat and cold, and our alien’s attention to the smallest of insects around her–while a sequence of ash and mist becoming indistinguishable before the camera is a classically understated visual affirmation of the film’s central theme.

The sequence for male ensnarement, riffed upon three times throughout the movie, is a similarly minimalist affair that only makes the horror of the whole situation all the more startling, and grotesque. “I’m dreaming,” says the last potential victim, and when our alien agrees I realized that this whole visual metaphor was just that: the closest the human mind could come to describing an entirely alien process of predation and destruction.

The vocabulary of cinematic art being what it is, I can thus understand why certain elements (especially one dance scene) might be considered “Lynchian”, or the film compared in other ways to classics like Altered States and 2001: A Space Odyssey. But such comparisons amount to little more than irrelevant shorthand; the film more than stands on aesthetic merits all its own. In the thematic tradition of all great science fiction, Under the Skin is a slow-building exploration of our fidelity to certain notions of what “being human” means. You’ll find no easy answers here: just one hell of a nerve-wracking chance to sit with, and perhaps confront, the alien within.

A Quick Excerpt from the Morning’s Readings

Nothing like a bit of 19th-century libertarianism to start the day!

This is from The Bridgewater Treatise of chemist and geologist John Kidd, published in 1833. At this time, as I mentioned in my last essay, social prescription is very much bound up in notions of “natural” law, which of course carries the implication of divine sanction. Kidd in particular is writing about why the “poor laws” go against the law of nature because they compel the rich to pay for certain institutions of “relief”. Sound familiar?

(Warning: 19th-century writing is often dense and the syntactic rules sometimes differ, especially when it comes to the liberal use of commas.)

In the mind of the pauper, with all his challenging and all his boisterousness, there is still the latent impression, that, after all, there is a certain want of firmness about his plea. He is not altogether sure of the ground upon which he is standing; and, in spite of all that law has done to pervert his imagination, the possessory right of those against whom he prefers his demand, stares him in the face, and disturbs him not a little of that confidence wherewith a man represents and urges the demands of unquestionable justice. In spite of himself, he cannot avoid having somewhat the look and the consciousness of a poacher. And so the effect of England’s most unfortunate blunder, has been, to alienate on the one hand her rich from her poor; and on the other to debase into the very spirit and sordidness of beggary, a large and ever-increasing mass of her population. There is but one way, we can never cease to affirm, by which this grievous distemper of the body politic can be removed. And that is, by causing the law of property to harmonize with the strong and universal instincts of nature in regard to it; by making the possessory right to be at least as inviolable as the common sense of mankind would make it; and as to the poor, by utterly recalling the blunder that England made, when she turned into a matter of legal constraint, that which should ever be a matter of love and liberty, and when she aggravated tenfold the dependence and misery of the lower classes, by divorcing the cause of humanity from the willing generosities, the spontaneous and unforced sympathies of our nature.

See? Nothing changes: We have to deal with the same rhetoric today–the naive notion that if the government would just butt out, instead of compelling people of means to support a social safety net for all through taxation, charity of a personal nature would easily reassert itself to fill the gap, and the world would be in better balance.

The very next year, on the back of such grievances, the “New Poor Laws” would come into effect. Especially in their earliest conception, these would be harsh measures, essentially further penalizing the poor for being poor. You might be familiar with the outcome of these laws from such works as Dickens’ Oliver Twist and A Christmas Carol; the workhouses he describes in each were first established under this 1834 legislation.

Back to a long day of reading for me!

Greetings to the Swarm!

Hi folks!

I’ve seen a huge spike in readership on this little blog in the last few hours, thanks to the very kind and unexpected promotion by Dr. Jerry Coyne of an essay I wrote in response to a Slate.com review by Michael Robbins. Huge surges come with consequences, though, so I’d just like to make a few quick points.

1) Welcome! Thanks so much for adding my blog, and I hope some of my future posts will prove as interesting to you as this essay clearly did. I’m certainly looking forward to writing more posts about my readings now that I’ve had some success translating the relevance of 19th-century literature for a 21st-century audience–so thanks for that boost in confidence!

2) As I mentioned in my original comment, feedback is very much welcome. I’m a second-year doctoral student of English literature at Wilfrid Laurier University, studying styles of science writing in the nineteenth century. This means that I am a scholar in process, and with any luck I’ll remain a scholar in process throughout my life. I am therefore not presenting myself as a definitive authority; just as a life-long learner interested in promoting a conversation that involves the fruits of my research to date. I will make mistakes, and I will hopefully be in a position to own up to those mistakes so as not to derail the conversation.

3) More to the point, I will always strive to maintain a courteous tone in conversation, and ask that commenters here do the same. As a student of English literature, I obviously care about language, but that doesn’t mean I won’t make mistakes. To this end, I welcome all comments about rhetoric and vernacular (as well as content, of course) that are not forwarded in bad faith. In turn, I’ll try to signal any amendments I might make with clear “EDIT” markers in the original text.

4) That said, I’m in the middle of an intense reading list going into my final proficiency exam, so I cannot engage in much online discussion for the next while. I will try to post my readings and reflections here more often in the coming weeks, but I’ll have to beg patience if my participation in comment threads is inconsistent until after the end of August.

5) Again, welcome! A huge surge in reading numbers can be a little terrifying, but I look forward to many fruitful exchanges in the months to come.

Cheers and best wishes to you all.

Enough Already: The Anti-Atheist Article Shows Its Age

Michael Robbins, writing for Slate Magazine, recently contributed to that most robust literary genre, the anti-atheist op-ed, with a review of Nick Spencer’s Atheists: The Origin of the Species. “Review” might even be too strong a term for this piece; though the book is touched upon, its formal introduction is buried, and assessments of the text itself are routinely subordinated to Robbins’ own views on science and religion.

To this end, Robbins draws from what we’re sometimes left to assume are Spencer’s arguments, as well as standoffs with atheists (including those from that great bastion of critical discourse, the online comment thread), to make a broader set of claims: that today’s popular atheists are out of touch with the history of atheism; that these atheists just don’t “get” the point of religion, which is clearly all about metaphor, not explanatory power; and that if they truly “got” the history of atheism, modern atheists would understand that any secular morality is only a fragmented descendant of religious morality. For Robbins, then, Nietzsche is the ideal atheist–an atheist who felt that a world without a god was horrifying, and deserved to be mourned.

Such articles always have their easy cannon-fodder, with the likes of Dawkins or Hitchens thrown in as de facto examples of what Robbins terms “evangelical atheism” and others have termed “militant atheism”. These terms almost never appear with any sort of textual evidence (for instance, in what way “evangelical”–knocking on doors to spread the good word of atheism? and in what way “militant”–agitating for the persecution of believers?), and so serve as little more than caricatures in an already highly-caricatured debate.

Other terms in Robbins’ article are likewise predictably heated, with “Dawkins and his ilk” identified as the “intellectually lazy” successors to Spencer’s history. This generic flogging of Dawkins should be a warning for anyone seeking insightful commentary about science and religion; it signals to the reader that this piece is not going to concern itself so much with ideas as with the people who forward them. Robbins even ends his article with a quote lamenting the lack of such ideas-based discourse–“Everyone is talking past each other and no one seems to be elevating the conversation to where it could and should be”–without expressing any self-awareness as to how rhetoric like his keeps this conversation off-point.

And yet, such rhetoric is by no means novel: I haven’t read Spencer’s book, but Robbins references people being termed “atheist” who would more likely be regarded as theists today. He doesn’t go into great detail on this point, but if his source text is indeed a good summary of the history of atheism, it should address instances throughout that history when the term was used as an insult or threat, either to demarcate people whose beliefs differed from the status quo, or to identify those who held their disbelief too strongly.

In an 1874 essay, for instance, Thomas H. Huxley notes that 17th-century Descartes–who worked extensively to rationalize the existence of a god–had been considered an atheist by the Jesuit community. Meanwhile, Huxley himself, though best known today as “Darwin’s bulldog” and a strong advocate against religious interference in scientific progress, did not identify as an atheist; rather, he sneered at those who took up that term as being too sure of themselves–just like folks who rail against the tenor of “new” atheism today.

Having introduced Huxley to this discussion, I should surprise no one by adding that I write my concerns about these relentless anti-atheist pieces as a doctoral student of nineteenth-century science writing. It is from this same critical focus, as well as from my position as a human being with eyes and ears in the world, that I take issue with many of the arguments treated as self-evident truths in articles like Robbins’. Even putting aside the obvious strawman tactics, Robbins’ central arguments, drawn in part from Spencer’s text, just don’t hold historical water. I cannot comment on Spencer’s original framing, since I’m receiving his text through a powerful filter, but Robbins’ arguments are slippery enough as is. He writes near the outset:

Spencer’s point, of course, is that this received wisdom is naive nonsense—it gets the history of science and the nature of religious belief wrong, setting up an opposition between reason and faith that the church fathers would have found rather puzzling. … Few historians take this myth seriously, but it retains its hold on the vulgar atheist imagination. To believe it requires the misconception that religion exists primarily to provide explanations of natural phenomena.

Yes, the early church fathers believed that “reason” was that which brought you closer to god; that there could be no reason without god, such that the idea of dividing the two was incoherent. We have to remember that Aristotelian logic in particular dominated the Western world for almost 2000 years before other modes of evaluation gained a significant foothold; in this system an argument could be structurally valid, but its soundness still relied on the accuracy of its premises–and there were a heck of a lot of “common-sense” premises in that era that we know today are not phenomenologically accurate. (Consider: all heavenly bodies move in perfect circles; the sun is a heavenly body; therefore the sun moves in a perfect circle–a perfectly valid argument resting on a perfectly wrong premise.)

(Robbins even cites one such common-sense premise later, when he presents the idea of a “universe from nothing” as a concept intrinsically meriting contemplation. Despite himself arguing that “since the very beginnings of Christianity, Basil, John Chrysostom, Gregory of Nyssa, Augustine … all assumed that God’s creation was eternal, not something that unfolded in six days or any other temporal frame”, he does not consider the possibility that a “universe from nothing” might thus be nonsensical. Certainly, recent research attests to even the “vacuum” of space being occupied, such that there’s no evidence we ever sprang up from true philosophical “nothingness” in the first place, but it’s just striking to note how pervasive such “common-sense” incoherencies remain today.)

Suffice it to say, then: Yes, “reason” today is a much more secular term, involving fewer a priori assumptions than its precursor. But what Robbins really overlooks is that this shift in meaning was a difficult transition, precisely because the Judeo-Christian god was expected to have explanatory power for natural phenomena. Medieval texts in particular are rife with this thinking–our very world a direct, macrocosmic extension of the human microcosm, with the stars overhead placed there so that we might read our destinies in them.

But at the turn of the 19th century, the earnest pursuit of natural theology–that is, the practice of evidencing the Judeo-Christian god through such self-centred studies of nature–started to lose its footing. Though late-18th and early-19th-century geologists and astronomers were careful for decades not to present their findings in such a way as to stir up public controversy about the overwhelming divergence of empirical data from Biblical record, the accumulation of so many dissenting data points could not be ignored forever.

Natural theology didn’t go down without a fight, though: At its 19th-century height, a series of (originally) eight treatises–The Bridgewater Treatises, written by gentlemen of considerable standing in science communities–was issued in the 1830s to assert the persisting explanatory power of the Bible in relation to the natural world. Nor was this whole push happening on the margins of socio-religious discourse; though Robert Chambers’ Vestiges of the Natural History of Creation (1844) offended many clergymen for its deviance from Biblical record to account for new geological data (and early scientists for its sloppy, credulous, lay-person reporting on findings from other fields), it remained a popular work throughout the century, undergoing a dozen editions and attaining even a royal audience. The book makes appeals for the existence of a god despite all apparent evidence of absence in the natural world, but Chambers’ is a much-diminished godhead, a Deistic omniscience “behind the screen of nature”, who exists and acts in pointed contrast to the personally-involved creator believed in by so many of the day.

I should emphasize that this was all going on prior to Darwin’s On the Origin of Species (1859) and The Descent of Man (1871), the latter of which caused trouble by spelling out that, yes, evolutionary theory really did apply to human beings, too! For a while, the old age of the earth in relation to Biblical narrative could be accounted for by there being multiple periods of flood and upheaval, with the Biblical flood being just the last of this series. However, even the universality of that flood was falling apart under empirical scrutiny, and this had serious theological implications for the story of Adam and Eve–the critical touchstone on which all notions of redemption through Christ were based. If all the floods were regional, did the sin of Adam and Eve only touch a particular lineage? And later, when the theory of evolution came into the picture, when did Adam and Eve sin in this gradual progression of species?

Robbins goes on to assert that a definition of religion “must surely involve reference to a particular way of life, practices oriented toward a conception of how one should live” but then disdains the claim that religion “is a scientific theory,” “a competing explanation for facts about the universe and life” (quoting Dawkins). Indeed, Robbins sums up his opinion on that view as follows: “This is—if you’ll forgive my theological jargon—bullshit.”

It’s not simply that Robbins (and others like him; his article is merely representative of many more) is inaccurate when he presents Christianity as he then does, as an allegorical exercise throughout the ages, absent real-world interaction with empirical input. Rather, in doing so he also erases a powerful history of human struggle–among theists and atheists alike. Reading 19th-century texts as an atheist myself, I’ve always been struck by how difficult understanding one’s moral duty becomes when known facts about the natural world change so dramatically, and when the question of what your god wants of you becomes so convoluted in consequence.

For instance: 19th-century England being a hotbed of poverty and disease, works of fiction, religious pamphlets, and opinion pieces were at odds over whether state reforms to improve the lot of the most vulnerable went with or against the Christian god’s plans. Since the natural world was so full of suffering, but nothing happened that the Christian god did not will into being, maybe suffering and income disparity were meant to exist–to give some humans a chance to practise humility, and others to offer charity?

Alternately, was this god showing his condemnation of industrialized England? Was this god awaiting human action to repair what humanity had wrought? Was that the reason for the rampant spread of disease, and the difficulties disposing of so much waste? But if so, why were so many innocent children suffered to be born into this system, when their isolated circumstances rarely gave them a chance to accept Christ before their untimely deaths?

And what about all those new animals being shipped to England from all over the world–more species than could ever have fit on the Ark? What did we owe our fellow species if we were all actually part of one long chain of being? Was vivisection unjust, or were we still the divinely-chosen stewards of the Earth, as Genesis suggested, entitled to do with the rest of the world as we wished?

While Robbins insists on this separation between religion as “a conception of how one should live” and any conception of the world drawn from empirical evidence, he also argues that, since religious moral traditions have historically preceded secular moral traditions, movements like humanism are simply degenerate versions of religious belief (drawing here, in part, from political theorist and atheist John Gray). This puts Robbins in murky territory around the notion of moral education–but thankfully, this murk is easily cleared up when we move past the false dichotomy of science and religion being the only possible answers as to morality’s origins.

Simply put, even just looking at the range of Christian cultures in existence today, we see moral divergence: Some support the death penalty; some do not. Some have marriage equality; some tacitly or even overtly sanction the incarceration, torture, and murder of gay persons. Some believe in equality between the sexes; some believe in strict gender roles. Some believe in genital mutilation for children of both sexes; others for children of one; others for children of none. Some believe in using the state to provide a basic social safety net to help everyone in times of distress; some believe that charity should start–and end–in independent efforts through the church and at home.

But I don’t doubt that I could ask Christians in each and every one of these cultures where they get their sense of how one should live, and receive the same answer: “From the Bible.” Or possibly: “From Jesus.”

Similarly, evidence about the natural world is only as good as the culture that receives it. For some, the idea that certain brutal acts are present in nature is enough to suggest that we should sanction those acts in human societies; for others the brutality of nature is as good a self-serving incentive as any to build a better, safer community for all. For others still (perhaps those more attuned to suffering around them), the way the world is should simply never preclude us from trying to shape it otherwise.

Which brings us to the shape of our culture–this digital, Anglocentric, North American community in which we see time and again the popularity of articles like Robbins’: anti-atheist rhetoric by an author who nevertheless claims to want a more thoughtful discussion, a discussion in which atheists and theists are speaking directly to one another instead of over each other’s heads. But in a review that centrally castigates a caricature of modern atheism on a poorly-evidenced charge of historical ignorance, Robbins instead evades important histories of his own: histories of thoughtful theists, learned and lay alike, who over the last two millennia looked to the natural world assuming it carried literal Biblical histories both within and upon it.

Robbins and similar religious writers try to chalk up such theists to mere fundamentalists, and accuse atheists of targeting the “low-hanging fruit” of Biblical incoherence and Creationist nonsense instead of tackling “sophisticated” arguments like David Bentley Hart’s, which involves a “ground-of-all-being” god-concept: ineffable, Deistic (still male), yet somehow of personal relevance when contemplating how best to live. But for all these attempts to place the god debate outside the world we all live in, the great bulk of Judeo-Christian history still lies with those theists who believed in a personal, present, and active creator as described in the Bible, even as both the natural world and the weight of social history revealed less and less agreement with Biblical descriptions and prescriptions over time.

Diminishing the reality and diversity of such Biblical adherents–and thus dismissing consequent atheist concerns about how to build a better society when people still believe in this sort of god when making political and personal decisions–isn’t even “talking past each other”; it’s denying the full and profoundly human range of voices at the table. Surely we’re capable of more.

Reading Note: Robert Chambers, Vestiges of the Natural History of Creation

I usually post my doctoral reading notes on Facebook–a bizarre move on my part, since status updates really aren’t made for longer observations–but I’m going to try to publish future notes here instead. I’m just under two months from my last doctoral exam (which will be followed by an oral defence the week after), so synthesis of my studies is of the utmost importance these days.

To this end, the essence of today’s reading note is simple: Sometimes I’m struck by how quickly certain ideas have been adopted into the broader cultural consciousness in relation to others. Reading works from periods in tremendous paradigmatic flux just drives home how fragile and uneven our progress towards a better understanding of the world always is.

In 1844, well before Darwin’s tentative accumulation of facts concerning the emergence of diverse species, a book called Vestiges of the Natural History of Creation sparked a heated cultural debate. Not for its blatant, Eurocentric racism, which was depressingly standard for the period, but because it tried to align the weight of geological, cosmological, anthropological, and geo-linguistic discovery with Christian beliefs… an exercise that really never works out in religion’s favour. To account for the overwhelming evidence of forms emerging from other forms in increasing diversity over deep time, the book’s anonymous author (the Edinburgh publisher Robert Chambers, as it later turned out) conceded that it would be inane (if not also insulting to the concept of omniscience) to posit a creator who personally acted in the creation of each life-form, instead of one who created laws for the universe–generative “expressions” of his will–and let those laws play out in accordance with a greater, unseen design.

The author’s eventual arguments around the existence of “evil” are predictably clumsy in consequence: It’s not the Judeo-Christian god who makes/allows for suffering… it’s just a collision of his various divine laws, set in motion at the beginning of time and untouched since, that necessitates the creation and destruction of so many “inferior” forms, and which has children and other innocents enduring sorrow and strife through no personal fault (rather, in consequence of others neglecting natural/divine laws–even laws that might be unknown at a given time, like the appropriate hygienic measures to take in response to plague). Oh, and then there’s something about how war is an exceptive case stemming from behaviours that are on the whole good for human advancement–as if it would be impossible for an omnipotent being to construct such laws in a way preventing heinous extremes?

To be sure, this sort of deism always yields muddled rationalizations, and in consequence, though many Christians enjoyed the text, Vestiges received heaps of scorn from many other Christians who disdained the whole argument as almost atheistic in its excuses for the natural world being so very much at odds with revealed scripture. You can see for yourself why that accusation might be forwarded, after passages like the following:

It will occur to every one, that the system here unfolded does not imply the most perfect conceivable love or regard on the part of the Deity towards his creatures. Constituted as we are, feeling how vain our efforts often are to attain happiness or avoid calamity, and knowing that much evil does unavoidably befall us from no fault of ours, we are apt to feel that this is a dreary view of the Divine economy; and before we have looked farther, we might be tempted to say, Far rather let us cling to the idea, so long received, that the Deity acts continually for special occasions, and gives such directions to the fate of each individual as he thinks meet; so that, when sorrow comes to us, we shall have at least the consolation of believing that it is imposed by a Father who loves us, and who seeks by these means to accomplish our ultimate good. Now, in the first place, if this be an untrue notion of the Deity and his ways, it can be of no real benefit to us; and, in the second, it is proper to inquire if there be necessarily in the doctrine of natural law any peculiarity calculated materially to affect our hitherto supposed relation to the Deity. It may be that while we are committed to take our chance in a natural system of undeviating operation, and are left with apparent ruthlessness to endure the consequences of every collision into which we knowingly or unknowingly come with each law of the system, there is a system of Mercy and Grace behind the screen of nature, which is to make up for all casualties endured here, and the very largeness of which is what makes these casualties a matter of indifference to God. For the existence of such a system, the actual constitution of nature is itself an argument. The reasoning may proceed thus: The system of nature assures us that benevolence is a leading principle in the divine mind. But that system is at the same time deficient in a means of making this benevolence of invariable operation. To reconcile this to the recognised character of the Deity, it is necessary to suppose that the present system is but a part of a whole, a stage in a Great Progress, and that the Redress is in reserve. (281-2)

In short: Everything we’ve discovered doesn’t really match up with Christian doctrine, but since denying the natural world would mean denying a creator anyway, the only rational move of the devoted Christian is to have faith that there is a grander design at work than meets the eye (or than is revealed in nature). Such a concession that natural theology isn’t going to offer definitive proofs of a god unto itself would have been (and indeed was) a tremendous affront to many prominent Christian thinkers of the time.

Now all this was fairly old hat to me–the difficult, 19th-century push-pull between religion and empirical discovery–but after some 100 pages of fairly coherent documentation around recent geological discoveries, I kept stumbling over other aspects of this immensely popular text–ones that illustrated a far more nebulous understanding of the world. These included the author’s credulous assertion that one amateur scientist had made insects appear from electric current alone (a purported instance of spontaneous generation), the claim that skin colour and skeletal structure could be changed just by moving to a different world region, and the blatant struggle to explain how macro-life-forms sometimes seemed to appear from nothing, or else adopted a different taxonomical make-up from season to season.

Put simply, Chambers believed, as did many in his time, that there was a latent capacity for life to move from “inferior” to “superior” forms under the right circumstances (and again, obviously in this analysis, “human” was the superior form in the natural world, while “Caucasian” was the superior form within the species)… and that’s when I remembered that Pasteur’s formalization of the germ theory of disease, as well as Mendel’s pea plant experiments on trait inheritance, had not yet come to pass. Though Vestiges clearly attests to a culture primed for the theory of evolution–a theory that would not be articulated in full for another 15 years, and which still isn’t properly understood by multitudes today–it also marks the uneasy end of a time period before other, just as powerful scientific discoveries: discoveries that would take far less time to be incorporated into the cultural consciousness.

A hell of a lot has happened in the last 170 years, with our knowledge of the natural world growing by leaps and bounds, but when I read works like Robert Chambers’ Vestiges, I’m reminded how much such progress is always contingent upon popular context. As Chambers himself notes:

The reception of novelties in science must ever be regulated very much by the amount of kindred or relative phenomena which the public mind already possesses and acknowledges, to which the new can be assimilated. A novelty, however true, if there be no received truths with which it can be shown in harmonious relation, has little chance of a favourable hearing. (142)

In 1844, the geological record and immense biodiversity evidenced by global exploration demanded a new understanding of the world–and the groundswell towards that monumental paradigm shift in the sciences was already clearly underway. But over a century and a half later, many ideas not even properly anticipated in popular science texts like Chambers’ (the existence of microbial organisms, for instance, or trait inheritance [Chambers holds that environment matters more, and shows no inkling of even a precursor concept to genes]) have taken firm and unquestioned root even in parts of the modern world where the theory of evolution, for all its cultural priming in years prior and since, still has not.

To me, this reads as resounding testament to scientific discovery not always being enough to improve human knowledge, especially when pitted against prevailing cultural mythologies like Judeo-Christianity. This in turn leaves me a bit haunted: What other scientific advances have we already made, or stand on the brink of making, that, for want of a more palatable social narrative, still won’t gain public support for decades yet to come?

A Complicated Convergence: Gender Fluidity and Trans Advocacy

It should be a truth universally known that only an asshole would advocate for policies that limit the quality of life for their fellow human beings. Ignorance might also be an excuse for such behaviour–but only to a point. As with all things in life, we can only try to make the best possible choices with the information we have on hand–but if that information ever changes, we have a responsibility to augment our views in turn.

With this in mind, I’d like to explore an important implication of recent cultural shifts toward supporting the transition of children to the gender they self-identify as having. But I want to do this with the full, emphatic understanding that, whatever causes a child to self-identify in a way that might necessitate state and medical intervention to accommodate their transition, the ability of this child to feel safe remains paramount. In no way whatsoever would I ever advocate for a child to be made to feel further distress because of a perceptual schism in their gender/sex identity.

What bears noting, however, is that in a culture where marketing for children’s products is heavily driven by gendered messaging (and perhaps even more so today than in other decades), prominent narratives of trans children in mainstream media uphold a rigid gender binary. In the stories of parents coming to understand their child’s gender-identity, we often see a linearity drawn between the child having preferences culturally-aligned with a differently-sexed person, and their parents using these cues to take the child’s gender-claim seriously. A child born female who identifies as gender-male might, for instance, have short-cropped hair in these articles, boast Batman paraphernalia, and be described as loving rough-and-tumble sports. Similarly, a child born male who identifies as gender-female might wear dresses, grow her hair out, and be described as showing an interest in dance.

There are many reasons why this might be the dominant approach for such stories. It might be that trans narratives like these fit best with the mainstream understanding of gender (that is, adhering to a gender binary), and so rank higher than a story, say, of a child born sex-female, who loves dresses and the colour pink and wearing long hair, yet self-identifies as gender-male. It might also be that this narrative is more comprehensible to the parents of a child with seemingly atypical preferences, who might find it easier to accept and accommodate these differences by simply regarding and reinforcing their child as belonging to the “other” gender.

But the explanation these articles most often give is that a child just “knows” which gender they belong to–even at a very young age. And this is a messy argument, one that pointedly ignores a) how powerfully children are influenced by the media around them, b) how very much young children prefer easy and rigid categorizations (e.g. good person, bad person) and so need to be trained into a more nuanced view of the world, c) how much role play is a critical part of childhood development, and d) how ignorant children are about their own biology, let alone their performances of self within the world.

(And if anyone doubts that d) is true, try to remember the nonsensical views you or your peers held at young ages about how babies were made. As for b), noted queer sex columnist Dan Savage has a great anecdote in The Commitment about how his child, adopted at birth by Dan and Dan’s male partner, disapproved of gay marriage because a kid at kindergarten told him marriage was between a man and a woman. But really, anyone who’s been around small children should recognize this rigidity–whether it manifests in a sudden fussiness about new foods or an incessant need for simplistic reassurances about things being safe or not, good or bad. I have plenty of anecdotes just involving my eldest nephew, but I’m not going to get into them here.)

Absolutely, if a child comes to see a disconnect between their inner sense of gender and how they are identified by others, helping that child feel more comfortable with him- or herself, and advocating for a world in which that child will be safe performing whatever gender-identity they hold, is vital to the construction of a better society for all.

But when a child (or their parents) leans on a rather traditional set of gender stereotypes in order to assert this personal identity, we need to reflect on what this says about the persistence of a rigid, if also relentlessly culturally reinforced, gender binary. When we take biological components out of the picture (which we do when we talk about gender today, in contrast with “sex”), what does it even mean to be “male” or “female”? Does it mean to prefer UFC over knitting? Steak over salad? It can’t have to do with sexual preference, because very “male” persons love other very “male” persons, and very “female” persons love other very “female” persons. Does it mean conveying your love through monosyllabic grunts and indirect acts of kindness versus long conversations about feelings, hopes, and dreams? Does it mean preferring the works of Hemingway to Woolf? Tarantino to Nancy Meyers?

We know that there are some clear behavioural trends along sex-based lines, but the jury is still out on how much these trends would exist without cultural reinforcement. We know that communication styles and conflict-resolution styles differ widely in relation to culture, and that there are cultures in the world where male persons are regarded as masculine for performances that would seem feminine in North America–especially in relation to emotional displays and an interest in fashion. There is similarly nothing innately feminine about the colour pink; in other eras and cultures, we know the colour has just as intuitively been coded “masculine”, and it might in fact be seeing a comeback among men’s fashions in our own day and age.

More to the point, though, we cannot make definitive assertions about a person’s identity based on peripheral preferences and activities. A sex-male child who loves pink dresses at the age of five, for instance, might go on to claim any number of personal identities: transgender, gay, non-trans* heterosexual with gender-normative hobbies, or maybe even non-trans heterosexual with a passion for cross-dressing. All we can safely say about the child of five is that they currently love pink dresses. And really, what is up with the adult urge to imply more?

Again, I write none of this to deny the existence of children who live with a powerful disconnect between their understanding of gender in the world, and how they perceive themselves. But we do need to acknowledge that these children are nevertheless reacting to a heavily gendered world in the first place–a binary that leaves male, female, and inter-sexed persons negotiating personal performances of identity against overwhelming gender stereotypes their whole lives through. In consequence, whenever articles about trans children reinforce narratives of linear progression between a particular hobby or clothing preference and an inner gender identity, they can simultaneously forward the cause of trans advocacy while undercutting the fluidity of human experience, and limiting the potential for performative exploration to be a perfectly normal, non-deterministic facet of our lives.

For the record, I don’t identify as having an inner gender identity–just my natal sex and the gender others read off me. I am neither trans nor cis, though I know when most people see me on an average day, they will identify me as a culturally-normative woman, on account of my female-typical anatomy, which I make no attempt to suppress through reactive forms of dress. I know many of my personal experiences as a human being are directly tethered to being perceived in this gendered way by others–from experiences of sexist language all the way up to sex-based acts of violence–but I have no deep-down conviction that I Am Woman. Nor do I believe that I Am Man. I have a body–I am a body–but to say that my inner self is female would be, to me, as incoherent as saying that my inner self is brown-haired and brown-eyed.

This said, the Don’t Be An Asshole rule inevitably advocates for helping any person who feels a painful disconnect between their inner identity and their external performance to achieve a better harmony between the two. But in the process, I can still wish for a cultural landscape that doesn’t promote such schisms in the first place: a world where children can just have their colour, activity, and fashion preferences, whatever these might be, without worrying about whether they have the right gender identity to match. On this account, for all our happy progress towards improving the quality of life for fellow, struggling human beings, even the most excellent of current trans child success stories still suggests that we have a hell of a lot of work to do.


*I use “non-trans” in lieu of “cis” because “cis” also signals a difficult binary that doesn’t encompass the full range of gender/sex identities.

The Scientist and the Writer

Between doctoral readings, I’ve been trying to squeeze in research for a personal project. Often these secondary readings make me second-guess myself as much as my primary readings (relentlessly, that is, which I consider an important, if excruciating, part of the process), but The Bonobo and the Atheist was the rare text that made me feel right on track.

Frans de Waal is a primatologist whose attention to the linguistic skewing of primate research led to a significant (if gradual) transformation in how we view the social dynamics of our biological “next-of-kin”. For decades the primatology literature was impeded by two critical biases: The first involved the casual grouping of bonobos as a subspecies of chimp, despite significant social and physical differences between the two (socially, bonobos are matriarchal primates who primarily use social grooming and fluid sexual behaviours to maintain order, while chimpanzees are patriarchal primates with a strong predisposition to violence as a social mediator). The second involved an all-too-human tendency to read every social behaviour between members of other primate species as inherently selfish, amoral, and manipulative–language that reserves notions of altruism, morality, and nuanced politicking for the common good for our species alone.

De Waal’s research, especially with bonobos, challenged this tacit division between our species and other primates. His books to date, including Chimpanzee Politics and Our Inner Ape, do exceptional work in outlining human-centric biases in many of our test parameters for “human” traits in other animals, as well as demonstrating the social fabric that other primate species–both on an individual and group level–consciously and carefully maintain.

His latest book builds on this research by exploring the emotional roots of a range of socially-beneficial behaviours. These include the seemingly gut-level instinct for sympathy that manifests in incredible levels of care among other primate populations, as well as acts of immediate reconciliation (or reconciliation after long absences) in the wake of conflicts that caused strife or injury. They also include a means of stabilizing group behaviours that allows juveniles to get away with almost anything, but leaves adults anxious even in the absence of an alpha when they do something that subverts the usual societal rules. Also mentioned (of course) are the displays of grief that we’ve only recently allowed ourselves to accept as marking other primates’ responses to the spectre of death.

De Waal’s anecdotes from field research throughout his career are a delight to read, and his portraits of a few human beings involved in this sort of research offer similar insights into human narrowness. Early on, for instance, he describes the very sad life of one George Price:

George Price was an eccentric American chemist, who … became a population geneticist trying to solve the mystery of altruism with brilliant mathematical formulas. He had trouble solving his own problems, though. He had shown little sensitivity to others in his previous life (he abandoned his wife and daughters and was a lousy son to his aging mother), and the pendulum now swung to the other extreme. From a staunch skeptic and atheist, he turned into a devout Christian who dedicated his life to the city’s vagabonds. He gave up all of his possessions while neglecting himself. By the age of fifty, he was sinewy and gaunt like an old man, with rotting teeth and a raspy voice. In 1975, Price ended his life with a pair of scissors.

De Waal goes on to note that Price made the very common mistake of assuming that altruism and selfishness were polar opposites, when in fact (as de Waal argues for much of this book), plenty in nature seems to attest to a middle-ground: a kind of altruism that helps us as individuals by supporting the community at large, and which arises quite naturally in any species with mechanisms for parent-child bonding.

I call attention to this mediation of extremes because de Waal unfortunately does not extend this same principle to another major theme of his book. Originally I thought the very title, The Bonobo and the Atheist, referred primarily to de Waal (an atheist) in relation to his major topic of study, but when de Waal is not forwarding professional anecdotes about primate species or reflections on the religious paintings of Hieronymus Bosch (used in this volume as a kind of binding narrative thread, with varying success), he spends an inordinate and quite honestly baffling amount of time airing personal grievances about “Neo-Atheists”. I could not tell if this was simply an attempt to create a sense of tension in the work–pitting a strawman of American atheism against a defence of human spirituality, while simultaneously arguing that the preconditions for empathy and morality are already amply in evidence in other primate species without the need for supernatural awe and wonder–but regardless, suffice it to say, on this account the book becomes a confusing mess.

From relaxed Catholic beginnings in Holland, de Waal writes of attending a god debate in America (a completely different socio-religious context) and seems to imply that questions on related matters, like whether or not morality is universal, aren’t also debated on different occasions. This, despite citing Sam Harris’s The Moral Landscape, which is reduced in summary to equivocation: Why pick on the genital mutilation of women in predominantly Islamic states when boys are circumcised right here in North America? A fair question–but also one that self-evidently acknowledges that the reach of “neo-atheist” discourse is much longer than simple existential queries.

De Waal then goes further: After cursory acknowledgement of the low status atheists have in American politics, he implies that these “neo-atheists” “sleep furiously” because of inner demons, and even asserts at one culminating juncture that he has a hunch the most outspoken are just reacting to childhood traumatization by recapitulating the same dogmatism in another form. You’d think, if this was an actual belief on his part, he would therefore have some compassion for even the most strident “neo-atheist” discourse, but he spends much of the rest of the book mocking modern atheist writers for failing to see any good in religion–even painting Alain de Botton, one of the most prominent atheist/religious accommodationists today, as only “grudgingly” giving religion a place at the table.

(And again: If I were to suggest that some group was only publicly protesting a pervasive social practice from a place of childhood trauma, I would… not take the approach that the traumatized individual just needs to embrace how hunky-dory the source of their trauma really is.)

Ultimately, de Waal compares “neo-atheists” to a group of agitators outside a screening of Titanic, loudly insisting to everyone leaving the theatre that the movie was a work of fiction, and the characters played by paid actors–as if everyone leaving the theatre didn’t already know this! As if they weren’t knowingly enjoying the movie on entirely different grounds! I have seen some damned good critiques of modern atheist movements in America, but this… this was an incredibly tone-deaf, wilful misreading of a religious context to which de Waal is a self-admitted non-native: a context in which, even if nearly half of Americans do actually understand the theory of evolution and accept the deep-time of our species and our planet, they sure as heck aren’t willing to acknowledge one of the most essential concepts in modern biology when answering survey questions about the origin of human beings.

(And don’t even get me started on the prevalent belief in demons and demon possession, concepts that yield horrible, horrible treatments of persons either simply exercising independence or suffering from mental illness.)

As a Victorian scholar, however, I also noted times when de Waal demonstrated that, scientist though he may be, interpreting the writings of historical scientific figures is not his strength. In his treatment of T. H. Huxley on religion, I realized that the rhetorical strategies Huxley used in the passages de Waal excerpts–strategies that leap off the page to anyone who routinely studies literature of that period–were taken literally, so as to assert a definitive kind of spirituality even among 19th-century agnostics, the better to deride modern North American atheists. De Waal also uses incredibly cagey wording to imply the exact opposite of Huxley’s views on animals-as-automatons: Whereas Descartes believed that humans and animals were strictly divided by the presence of free will / spirit in the former, Huxley takes readers through a series of experiments showing that humans belong on the same animal spectrum–just as easily reduced to automatic action, with zero indication in any animal species that the spirit, if it exists, is anything more than a sound produced when the “bell” of the body is struck. (That is: We have no evidence of the spirit motivating action, and plenty to show that all animal action presents as environmental reaction through the material self.) De Waal, however, abbreviates all this to suggest that the existence of feelings is negated by automaton status, and that Huxley treated other animals as distinct from humans–neither of which is a fair representation of Huxley’s argument.

Though I cannot comment on the quality of de Waal’s art criticism, it was when instances like the above started to pile up in the book that I went from shaking my head through any section not directly about de Waal’s primate research, to sitting a little taller, and reflecting on my own sense of audacity in working on a personal project attempting to bridge the science/pop-culture gap from the position of a literary historian. Certainly, many very good writers have already bridged this gap–from Stephen Jay Gould, scientist with a passion for history, to David Quammen, journalist-turned-scientific-popularizer, to Richard Holmes, Romantic-era-biographer with a flair for capturing the wonder and delight of early scientific explorations. But as of late, mired in heaps of personal and professional research, I’ve been left doubting whether I have what it takes to venture into this field, much as writing histories of science for popular audiences would be as much a joy as a privilege for me.

Reading The Bonobo and the Atheist this weekend, though, affirmed for me what I hold to be a fairly basic tenet of authority claims: Though academic training should provide you with invaluable and diversely-applicable skills for research, expertise in one area does not make you an expert in another. De Waal writes best when he sticks to his strengths, and worst not when he ventures into new terrain (new terrain should always be the goal!), but when he does so without making every effort to tread lightly and mindfully therein.

I could say the same about quite a few other prominent writers, from all manner of religious, scientific, and related cultural discourses, but my interest lies simply in learning from their literary weaknesses, and making it a priority not to make the same mistakes. New mistakes, perhaps–but that’s just part of the process, too. All the best in yours!

Story up at GigaNotoSaurus

I received a pleasant surprise this evening: notice that a story originally slated for July 2014 publication was bumped up to June 2014 publication.

“Game of Primes” is now available at GigaNotoSaurus. At its essence, it’s the story of an older brother who tries to get his younger brother out of trouble in a figuratively and literally alien environment. These brothers have very different ways of looking at the world, and what it means to be dependent on others within it, but this story privileges the older brother’s point of view, alien to many readers though it might be.

I was well and truly thrilled to find this story a home, especially with GigaNotoSaurus, and I had a wonderful time working with the very kind and helpful editorial team over the last few months. Now that the piece is published, though, two thoughts come very much to mind.

1) I have no other published works forthcoming at present, with little in submission queues. This is due in large part to current doctoral student demands (I’m working towards my second major exam, with a reading list of 70 books by mid-August), and a very difficult first third of the year, but also due to point-the-second:

2) I’m not particularly satisfied with my fiction these days. I feel I should be producing better output, but the quality just isn’t there. I’m working in part on a non-fiction project, which is certainly taking a lot of brain power, but it, too, is calling attention to many weaknesses in my writing: the syntax, the vernacular, the use of metaphor, the world-building – you name it.

Mind you, I don’t mention either point to lament this state of affairs. Rather, in the wake of my incredibly good fortune (publication), I know I’ve got some work to do. I hadn’t expected “Game of Primes” to be published so soon, but now that it has, the story crystallizes for me just how much I need to get back into creative shape, and I intend to make full use of this unexpected motivational push. Fellow writers: I hope something’s stoking the flames under your keisters these days, too!

The Cultural Specificity of Mental Health: A Response to Tragedy

I speak about gender with great reluctance on the internet. It’s strange, but despite being a storyteller I rarely feel safe talking about my experiences as a female person in the world. I don’t doubt that being gendered in the world is stressful whatever the gender, but for me (as for many sex-female individuals) other people’s gendering of my personhood interferes with my ability to feel safe and stable going about my day and doing what I love to do. I sorely wish this was not so, but since it is, I find I pick my battles with care.

Mental health, on the other hand, is something I’ve had to articulate in order to survive–especially in the last year or so, when I struggled a long while on my own before finally being heard by the right people and getting the help I needed. There are still risks to talking about mental health online, but I do hope that, if my recent experiences have given me any new skills, they include the ability to be a better advocate for others with mental health concerns. In the world of academia, where anxiety and depression rates have skyrocketed from the undergraduate level right through to post-doc and adjunct faculty positions, I feel this kind of advocacy is particularly critical.

Nonetheless, recent discourse around a tragedy in California has brought these two issues together in the public sphere. (NB: I will not name the perpetrator of this tragedy, so as to avoid directly supporting the cult of celebrity that tends to follow such mass murders.) In the last few days I’ve seen articles spring up debating notions of “toxic masculinity” and misogyny, with the routine counterpoint being that all the perpetrator’s hatred of women is merely window-dressing; that the perpetrator was “just” mentally ill, and therefore we should solely focus on mental health reforms, or otherwise consider the incident an unmanageable, unpreventable fluke.

However well-intentioned, this rhetoric neglects the extent to which mental illness and mental health are delineated by our culture. As reassuring as it might feel to mark one person as wholly “other” when they do something heinous, humanity boasts a diverse range of behavioural traits and capacities, some of which are adaptive to one’s context, and some of which are not. And to make things more complicated, many of these traits exist along spectra of adaptive and maladaptive expressions, such that the emergence of adaptive traits often comes hand-in-hand with the occasional emergence of less adaptive traits from the same genetic source. (Robert Sapolsky offers a good example of this in his exploration of religiosity and schizoaffective disorder.)

For our purposes, then: One is mentally unwell when one cannot function on par with the implicit or explicit standards of one’s community for reasons pertaining to learned behaviours and biochemical responses. As one social worker I know puts it, there’s a world of difference between seeing people who aren’t there and seeing people who aren’t there to the extent that it interferes with your life. If the people you see aren’t disrupting your day-to-day interactions, then you’re still functioning up to communal par. (And indeed, the belief that spiritual entities are watching over us is quite common and mostly non-disruptive in North American cultures.)

To complicate matters even further, this notion of social functionality has also always been couched in classist, gendered, racist, and era-specific terms. I remember first hearing as a child, for instance, that the difference between a crazy person and an eccentric was the size of their bank account; in a position of wealth and power over others, excesses that wouldn’t be permitted a person “down the ladder” would be accommodated for you. Indeed, some might even say that such behaviours–antisocial though they may be–contributed to your success, and deserve to stand as exemplars for the rest of us. The financial industry in particular is a place where empathy–a trait widely considered beneficial to life in a community–proves counterproductive to the rapid accumulation of wealth. This goes a long way towards explaining why reported rates of psychopathic traits among CEOs run higher than in the general population.

Not so many years later, I would also learn how women were once labelled “hysterical” for resisting other people’s plans for their lives, or failing to thrive within the only path (docile marriage, obliging motherhood) sanctioned at the time. There was even a time when a woman could find herself in an asylum for marital disputes or infidelity–a not-at-all uncommon use of the mental health system to control and even punish non-conformist behaviour, whether along sex-based, class-based, or ethnicity-based lines.

On the other end of the spectrum, acts of violence, properly directed, have been making “men” out of “boys” for millennia, in countless cultures. Evolutionary psychologists still haggle over what sort of values are best adapted to communal living, both when that community feels safe from attack and when it does not, but we can at least agree that there’s a pronounced manifestation of risk-taking behaviours among male persons*–both in modern studies and in histories of war, exploration, and conquest.

How much of this is innate? We might glean some insight from Dmitry Belyaev’s ongoing experiment in the domestication of silver foxes, which yielded a majority population of approachable, morphologically- and neuro-chemically transformed critters over forty years, simply by allowing only those foxes to breed that demonstrated the highest levels of a single behavioural trait: tameness. (And please do note the thorough analysis and dismissal of alternative explanations for this outcome in the above link.) But even if there is a strong biological component to our behaviours, the critical evolutionary concept overlooked in popular discourse about inheritance is the persistence of genetic diversity. In consequence, whenever we’re dealing with what’s called a “quantitative trait”–a trait like height or skin colour, which involves multiple genes and so has a wide range of possible phenotypes–we have to expect a multitude of potential outcomes.

To this end, just as height and skin colour manifest along a sliding scale of genetic expression, so too will behavioural traits, like aggressiveness and passivity (which we know involve a complex interplay of genes that favour or suppress certain hormones, while also impacting overall morphologies), emerge in individuals along a spectrum of extremity.
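(A minimal, purely illustrative sketch of that polygenic point, in Python: the gene count, equal effect sizes, and fifty-fifty inheritance odds below are all invented assumptions, not a model of any real trait. Summing many small, independent genetic contributions naturally yields a smooth spectrum of values clustered around a middle, rather than two tidy categories.)

```python
import random
from collections import Counter

def simulate_polygenic_trait(n_individuals=10000, n_genes=40, seed=1):
    """Toy additive model: each of n_genes contributes 0 or 1 to the
    trait, independently and with equal probability. The trait value is
    the sum, so most individuals land near the middle and the extremes
    are rare -- a spectrum, not a binary."""
    random.seed(seed)
    return [sum(random.randint(0, 1) for _ in range(n_genes))
            for _ in range(n_individuals)]

if __name__ == "__main__":
    counts = Counter(simulate_polygenic_trait())
    # crude text histogram: one '#' per 50 individuals at each trait value
    for value in range(41):
        print(f"{value:2d} {'#' * (counts.get(value, 0) // 50)}")
```

Run it and the printout approximates a bell curve; the same additive logic, applied to behavioural predispositions, is why a multitude of potential outcomes is exactly what we should expect.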

Granted, how much this spectrum of human behaviours is genetically influenced by cultural pressures is a question for more specialized thinkers than myself. For instance, E. O. Wilson goes so far as to think that there exists something called “group selection”, in which broader population pressures inform the successful perpetuation of specific traits in individuals, but this concept is wildly at odds with the findings of the majority of biologists, dozens upon dozens of whom wrote into Nature magazine to disagree with a published paper by Nowak, Tarnita, and Wilson on this issue. (Nature’s mostly behind a paywall, but anyone with an alumni/active-student library card should be able to access these articles through their library systems.) These dissenting biologists have noted major oversights in the mathematical “proof” against kin selection, and generally hold that kin and individual selection are sufficient to explain the evolutionary benefit of traits like altruism within human communities. Since I don’t have a degree in biology myself, I therefore defer to the majority of such experts.

Nevertheless, plenty of studies show that context informs the expression of these human behaviours, however they might first arise in our genomes. One study in particular, from March 2013, looks at the relationship between inequality and risk-taking in young male persons. As Ed Hopkins reports:

While Becker et al. found that risk-taking is increasing in the equality of initial endowments, it is found here that it is increasing in the inequality of social rewards in the tournament. Further, it is shown that the poorest will be risk loving if the lowest level of status awarded is sufficiently low. Thus, the disadvantaged in society rationally engage in risky behavior when social rewards are sufficiently unequal.
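(To make that logic concrete: the following back-of-the-envelope sketch, in Python, is not Hopkins’s actual model, and every payoff number in it is invented. It simply compares a guaranteed low-status payoff against a long-shot gamble for high status. When the reward attached to the bottom of the hierarchy is made small enough, the gamble’s downside costs almost nothing relative to its upside, and risk becomes the “rational” choice.)

```python
def expected_reward(p_win, reward_if_win, reward_if_lose):
    """Expected value of a simple win/lose gamble."""
    return p_win * reward_if_win + (1 - p_win) * reward_if_lose

def compare(safe_reward, low_reward, high_reward, p_win=0.2):
    """Compare keeping a guaranteed (low) status payoff against gambling
    for high status. All payoff values are invented for illustration."""
    risky = expected_reward(p_win, high_reward, low_reward)
    better = "risky" if risky > safe_reward else "safe"
    return {"safe": safe_reward, "risky": round(risky, 2), "better": better}

# Relatively equal social rewards: the gamble isn't worth it.
print(compare(safe_reward=40, low_reward=30, high_reward=60))   # risky EV = 36 < 40

# Steeply unequal rewards: the bottom pays almost nothing, the top pays a lot.
print(compare(safe_reward=10, low_reward=0, high_reward=100))   # risky EV = 20 > 10
```

The toy numbers only gesture at the study’s formal result, but they capture why someone who perceives themselves at the very bottom of a steep status hierarchy might reason their way into ever riskier behaviour.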

There are many studies on risk-taking behaviour and gender that I might have cited, but this one felt especially pertinent in the wake of last week’s tragedy. Having read the perpetrator’s manifesto, I know almost every page is simply dripping with the young man’s perception of being of low status because he had not reaped the ultimate social reward (sexual possession of a traditionally-beautiful female person) he saw so many other male persons attaining. Worse yet, as I note below, he’s by no means the first to respond to this fear of existing at the lowest level of social status by engaging in extreme behaviour–a sad reality that positions his violent actions at a considerable remove from the popular contention that he was “just” mentally ill.

To this end, while many may believe there exists a distinct category of people capable of extreme acts, a collective of objectively “mentally ill” individuals who stand apart from the population at large, there is simply too much evidence that our propensity for extremes is informed as much by our social contexts as by neuro-hormonal states. This means that there are always two spectra at work in comprehending the presentation of violent, self- and communally-destructive behaviours: 1) the range of behavioural potentials determined by our genomes, and 2) the specific environment informing the extent to which these polygenetic networks are expressed.

Consequently, what all this recent discourse about “toxic masculinity” and misogyny attests to is not a refutation of the first spectrum. Indeed, the theory that some people are genetically predisposed to certain behavioural traits we might consider signs of mental illness explains a great deal of accumulated data to date. Rather, this conversation simply calls attention to the added existence of the second spectrum–the part of the expressive equation, that is, which we can account for, and also quite possibly modify: the extent to which a range of behavioural traits involving aggression and risk-taking for (socially-destructive) personal gain is considered normative, and therefore rewarded and amplified in our culture.

To get a sense of how widespread such behaviours are, consider the tumblr blog When Women Refuse, which has been collecting news reports involving male persons who respond to rejection by (and/or removal from) wives, girlfriends, children, and potential sexual partners with extreme violence. The range of stories presented here should unsettle persons who do not live in cultures that more overtly sanction the righteous murder of women-as-possessions, but please note that in referencing this site I am by no means suggesting that female persons do not also commit spousal murder. They certainly do, although we also know that male persons are by far in the majority for such homicides. As a recent Canadian study found:

Almost 80% of the 738 spousal killings in Canada between 2000 and 2009 were committed by men, who the study said are also responsible almost exclusively for bloody massacres where children, as well as the partner, are murdered in one act.

American statistics similarly find that, every day in the US, three female persons will be killed by their partners, and one male person will be killed by his. Beyond this, I’ve long been leery of gendered statistics for violence that doesn’t involve murder: while many statistics overwhelmingly attest to female persons being assaulted by male persons, the low reporting rate even among female persons, the cultural stigma against being a male victim of assault, the laws in many police jurisdictions that inherently favour the smaller person (typically female) when responding to a domestic dispute, and personal exposure to many female instigators of domestic assault have all left me with no illusions that female persons are somehow less capable of violent behaviour, up to and including destructively jealous rages.

If anything, though, the staple archetype of the furious or jealous female partner (a favourite of many TV shows and movies) is just as much a part of a culture that sanctions the belief that we ever own or are entitled to other bodies, and which sees so very many male persons narrowing their sense of value (and the value of people around them) to their ability to acquire social status through sexual conquest. And this is the culture that our recent conversation on “toxic masculinity”, amply exemplified by the recent perpetrator’s incredibly hate-filled screed against female persons for myriad perceived acts of sexual rejection, strives to address.

My point, then, is this: When we consider the notion of mental health in relation to these kinds of heinous murders, it is of vital importance that we remember how very much the conception of appropriate and inappropriate social behaviours emerges from cultural context. So long as communally-destructive status-seeking behaviours are socially sanctioned; so long as our culture idealizes almost any conduct that leads to the successful accumulation of individual power and wealth; and so long as we regard the treatment of fellow human beings as either commodities or competitors as within acceptable social parameters, we will continue to promote the expression of a set of behaviours that exist along a dangerous spectrum.

There is only one key difference between the individual who successfully acquires status on the basis of these metrics and the individual who values status just as much but fails to attain it. The underlying valuation of self- and communal worth is just as sick regardless, but it’s the individual who fails to benefit from these values who is more likely to resort to extreme behaviours in order to bridge that perceived status gap. To call one such person “just” mentally ill while staying silent about the rest of the spectrum–the adaptive majority that allows for the maladaptive minority–is to sanction further tragedies like the one we all bore witness to last Friday.

We can and must do better.


*Female persons have been shown to exhibit equally risky behaviours when performing in all-female environments, or in cultures where gender roles differ from North American standards, so the idea that male persons alone are risk-takers seems to arise from nurture rather than nature.