There are many myths within the political blogosphere, but none is so deeply troubling or so highly treasured by mainstream political bloggers as this: that the political blogosphere contains within it the whole range of respectable political opinion, and that once an issue has been thoroughly debated therein, it has had a full and fair hearing. The truth is that almost anything resembling an actual left wing has been systematically written out of the conversation within the political blogosphere, both intentionally and not, while those writing within it congratulate themselves for having answered all left-wing criticism.
That the blogosphere is a flagrantly anti-leftist space should be clear to anyone who has paid even passing attention. Who, exactly, represents the left extreme in the establishment blogosphere? You’d likely hear names like Jane Hamsher or Glenn Greenwald. But these examples are instructive. Is Hamsher a socialist? A revolutionary anti-capitalist? In any historical or international context – in the context of a country that once had a robust socialist left, and in a world where there are straightforwardly socialist parties in almost every other democracy – is Hamsher particularly left-wing? Not at all. It’s only because her rhetoric is rather inflamed that she is seen as particularly far to the left. This is what makes this whole discourse/extremism conversation such a failure; there is a meticulous sorting of far right-wing rhetoric from far right-wing politics, but no similar sorting on the left. Hamsher says bad words and is mean in print, so she is a far leftist. That her politics are largely mainstream American liberalism that would have been considered moderate for much of the 20th century is immaterial.
Meanwhile, consider Tim Carney and Mark Levin. Levin has outsized, ugly rhetoric. Carney is, by all impressions, a remarkably sweet and friendly guy. But Carney, in an international and historical context, is a reactionary. Those who sort various forms of extremism differentiate Levin and Carney because Levin’s extremism is marked in language, and Carney’s extremism is marked in policy. The distinction matters to bloggy tastemakers. Meanwhile, Hamsher’s extremism in language is considered proof positive of an extreme left-wing policy platform. No distinction matters; genuinely left-wing politics are forbidden and as such are of a piece with angry vitriol.
Greenwald, meanwhile, might very well have genuinely left-wing domestic policy preferences. I honestly have no idea; Greenwald blogs almost exclusively about foreign policy and privacy issues. In other words, his voice is permitted into the range of the respectable (when it is permitted at all; ask Joe Klein if Greenwald belongs at the adult table) exactly to the degree that it tracks with libertarian ideology. Someone whose domestic policy might (but might not) represent a coherent left-wing policy platform gains entrance into the broader conversation precisely because that domestic policy preference remains unspoken.
I hardly even need to explain the example of Markos Moulitsas. Moulitsas is a blogging pioneer and one with a large audience. But within the establishmentarian blogosphere, the professional blogosphere of magazines, think tanks, and the DC media establishment, he amounts to an exiled figure. See how many times supposedly leftist bloggers within this establishment approvingly quote Moulitsas, compared to those who approvingly quote, say, Will Wilkinson, Ross Douthat, or John Cole. Do some of these bloggers have legitimate beef with Kos? Sure. But the fact that his blog is a no-go zone for so many publications, while bad behavior from those of different ideological persuasions is permitted, ensures that the effects of this will be asymmetrical. I believe that people have to create positive change by changing their own behavior, but I am also aware that the nominal left capitulates to demands that it knows the right would never accept for itself. And so the right wins, again and again.
No, the nominal left of the blogosphere is almost exclusively neoliberal. Ask for a prominent left-wing blogger and people are likely to respond with the names of Matt Yglesias, Jon Chait, Kevin Drum…. Each of them, as I understand it, believes in the general paternalistic neoliberal policy platform, where labor rights are undercut everywhere for the creation of economic growth (that 21st-century deity), and then, if things go to plan, wealth is redistributed from the top to those whose earnings and quality of life have been devastated by the attack on labor. That there are deep and cogent criticisms of the analytic, moral, and predictive elements of neoliberalism is an argument for another day. That those criticisms exist, and that they emanate from a genuine left-wing position, is a point I find perfectly banal but largely undiscussed in political blogs. And that’s the problem. Whatever those bloggers are, they are not left-wing, and the fact that they are the best people can generally come up with is indicative of the great imbalance.
I don’t really know what it means to criticize a writer for holding that his own views are “the truth of man.” Obviously, I agree with my political opinions and disagree with those who disagree with me. If I didn’t agree I’d change my mind.
But one point I will address here is this: while I’ll cop to being a “neoliberal,” I don’t acknowledge that I have critics to the “left” of me. On economic policy, here are the main things I’m trying to accomplish:
— More redistribution of money from the top to the bottom.
— A less paternalistic welfare state that puts more money directly in the hands of the recipients of social services.
— Macroeconomic stabilization policy that seriously aims for full employment.
— Curbing the regulatory privileges of incumbent landowners.
— Rolling back subsidies implicit in our current automobile/housing-oriented industrial policy.
— Breaking the licensing cartels that deny opportunity to the unskilled.
— Much greater equalization of opportunities in K-12 education.
— Reduction of the rents assembled by privileged intellectual property owners.
— Throughout the public sector, concerted reform aimed at ensuring public services are public services and not jobs programs.
— Taxation of polluters (and resource-extractors more generally) rather than current de facto subsidization of resource extraction.
Is this a “neoliberal” program? Well, this is one of those terms that was invented by its critics, so I hesitate to embrace it, though I recognize that the shoe fits to a considerable extent. I’d say it’s liberalism, a view recognizably derived from the thinking of JS Mill and Pigou and Keynes and Maury “Freedom Plus Groceries” Maverick and all the rest. I recognize that many people disagree with this agenda, and that many of those who disagree with it think of themselves as “to the left” of my view. But I simply deny that there are positions that are more genuinely egalitarian than my own. I really and sincerely believe that liberalism is the best way to advance the interests of the underprivileged and to make the world a better place. I offer “further left” people the (unreturned) courtesy of not questioning the sincerity of their belief that they have some better solutions, but I think they’re mistaken.
That’s hardly a comprehensive reply to everything DeBoer wrote, but I hope it’s an explanation of what the hell happened to me.
I’ll cop to a couple things. First, I’m not a left-winger. I don’t agree with the left about very much. If you’re looking for genuine left-wing thought, this is not the blog for you.
Second, I don’t spend a whole lot of time discussing left-wing thought because my interest in ideas is primarily, though not completely, in proportion to their influence on American politics. There’s room for bringing in ideas that have little or no impact at the moment, but I don’t do much of that.
One time I did argue with the left was on health care reform, where you had left-wingers making the absurd claim that the Affordable Care Act did not improve the status quo. I found this created an angry reaction and multiple accusations that I was engaged in “hippie punching” or other unfair attacks on the left. So, from my perspective, it seems like left-wingers get upset if I engage with them and upset if I ignore them. Obviously, they wouldn’t be upset if I wrote about their ideas and agreed with them, but on most issues I don’t agree with them.
The post discusses the positions of quite a few political bloggers, including Ezra Klein, Matt Yglesias, Mickey Kaus, Jon Chait, Kevin Drum, and the economic, social and career forces that contribute to the rightward pull.
And I have to say I understand that part, even though I do not sympathize. Readers have often said I should be on certain TV shows. And logically, I should be on at least some of them. But guess what, they won’t have me (not even Democracy Now, but that’s because they are not that interested in finance, and when they do that type of story, they seem to prefer either Real People or academics). Even though a TV veteran says it has a lot to do with bookers (they are pretty much all female and he insists they prefer to book men), I suspect another big reason is my outspoken views. One would think that would make me a useful guest, since good talking-heads TV often involves friction between participants with diverging views. But some types of divergence appear not to be terribly welcome.
I plead guilty to some general neoliberal instincts, of course, but I plead guilty with (at least) one big exception: I am very decidedly not in favor of undercutting labor rights in order to stimulate economic growth, and I’m decidedly not in favor of relying solely on the tax code to redistribute wealth from the super rich to the rest of us. What’s more, the older I get and the more obvious the devastating effects of the demise of the American labor movement become, the less neoliberal I get. The events of the past two years, in which the massed forces of capital came within a hair’s breadth of destroying the world economy, and yet, phoenix-like, have come out richer and more powerful than before, ought to have convinced nearly everyone that business interests and the rich are now almost literally out of control. After all, if the past two years haven’t done it, what will?
Now, I agree that a real left wing – socialists, serious advocates of unionization, etc. – is not terribly well represented, at least in the corners of the blogosphere that I haunt. I don’t believe, however, that this is simply due to some larger, concerted effort to ignore and marginalize the left.
First, I think that the left wing as Freddie wants it to exist represents a very small demographic in this country. It is not surprising, then, that it is less represented in public debate and online.
Second and much more to the point, I’ve seen Freddie make this complaint before – that his arguments and positions were being written out of debate. This makes no sense to me. When we started The League, Freddie was by far the most linked-to among us. Even now that he no longer (or very rarely) blogs, his posts tend to generate links all over the place. Hell, it wasn’t long ago he got a link at The Dish for a comment he made on someone else’s blog post. This is because Freddie is a tremendous writer, and people find his arguments and ideas – and the way he presents them – compelling and interesting. He’s fun to read. And he gets all these links and responses and discussion in spite of the fact that he is a dyed-in-the-wool leftist.
Indeed, so far as I can tell the greatest threat to Freddie’s ideas receiving exposure from Very Serious People is Freddie deBoer himself. By removing himself from the debate he has contributed vastly to his own complaint. Because Freddie was getting his ideas out there and then he stopped. Maybe he was frustrated because his ideas weren’t spreading into the liberal blogosphere the way they were getting attention on many conservative and libertarian blogs. That’s fair – it certainly can be frustrating to feel as though you aren’t being taken seriously by the people who matter most. I guess I’d just suggest patience.
Actually, patience might not be enough – Freddie should organize. If organized labor in this country is withering, it isn’t for lack of money or political influence; it is because those who advocate for its survival are not organizing for its survival. In the age of the internet there is no reason people like Freddie aren’t creating their own publications to push their ideas to the surface. Freddie could do it, and he should. It would be far more beneficial to his cause than posts lamenting the decline of the left wing in America.
The barrier to entry for ideas is lower than it has ever been – but those last hurdles – the Washington establishment, the Very Serious People, the institutional bloggers and so forth – can be hard to leap, no doubt about it. But I don’t think Freddie is right to stop trying.
Can anyone deny that Glenn Greenwald will never get a gig at Cato or Reason, that Digby and Matt Taibbi will never get gigs at the Atlantic (I consider GG a libertarian)? Can anyone deny that Glenn Greenwald would generate more pageviews than anyone who is at Reason or Cato, that Digby or Matt Taibbi would get more pageviews than anyone but Sully at the Atlantic?
Of course, the first rule of establishment corporate journalism is that you do not call it establishment corporate journalism. ED (for example) would like to earn a living as a journalist, so it’s natural that he pooh-poohs Freddie’s point. I don’t mean to single ED out; to the contrary, the fact that he takes deBoer’s point seriously at all puts him miles above Joe Klein and James Fallows and the rest, who will always simply ignore these sorts of arguments.
They may not even recognize that these arguments are valid. After all, it’s hard to make a man understand something when his livelihood depends on his not understanding it.
3. One thing I’ve noticed that separates the people Freddie disapproves of from everyone else is that the ones Freddie disapproves of are primarily journalists. Journalists of policy, of ideological movements and changes, and of institutional day-to-day fighting, but liberal people whose primary career training and arc are those of journalism. A journalistic approach to politics has its strengths and its weaknesses. Its strengths are a solid understanding of the micro elements that move things forward or backwards yard by yard. Its weaknesses can be a form of source capture, and a myopia about what is achievable in the short run rather than what moves things in the long run. I don’t think the professionalization of bloggers as reporters has moved them rightward, but it could be argued that it has caused them to focus on the short term, in part because what the Democrats were trying to do bill-wise required a lot of explanation and in part because journalism requires that.
In its worst form, it becomes what Jay Rosen and others call the Church of the Savvy, where access, the art of the possible, and a healthy disdain for broader-scope thinking are all privileged. This is less a disdain for socialist or left-wing thinking (which is disdained by all kinds of people) than a disdain for outsiders, a broader and more worrisome issue than Freddie lets on.
4. It’s important to realize that the right-wing wonks Freddie seems to respect for building a long-term vision are running under different assumptions about what to do. To them, the problem isn’t thinking of a better solution to a problem; it’s arguing why there is no problem. This comes from an explicit goal of viewing their project as an ideological one, one that comes out of a Banfield critique that social science is necessarily ideological. This, by definition, orients them towards long-term visions of the possible.
Freddie might want to engage with a left-modified form of the Banfield critique, one that points out that when you have a wonk-politics hammer, every problem looks like a nail. Aaron Bady noticed this with the wonkosphere’s embrace of DIY U and other productivity-related ‘solutions’ to higher ed (also, googling that made me realize I stole the title of this from Aaron, sorry!). If all you know are the techniques of neoliberalism, then those are the solutions you’ll naturally gravitate towards. That’s different from where Freddie goes, which is a critique centered on prestige and access.
5. I’ll gladly defend Ezra and Matt against the charges Freddie throws at them. The key points they raised early on – that the Senate would become obstructionist not just at the bill level but in a “running down the clock” manner, and that this would have major consequences (Ezra); that the GOP would not pay a price for its obstruction, since people look at their checkbooks when they vote (both); and that the Federal Reserve is a major battlefield for the recovery, and progressives/liberals aren’t ready to move, even intellectually, on how to fight for it (Matt) – are all major things that happened over the past two years. Ezra in particular has covered the day-to-day amazingly well with a large quantity of work meant to be accessible to a wide range of readers (I write 2 posts every other day and feel like Charles Dickens), and if Freddie’s real critique is that liberals don’t like unions, Ezra has written a lot about how the Obama administration is overlooking them.
As for Matt’s neoliberalism stuff, I read it as coming from his engagement with land use. But to make it clear, I’m in favor of a hella robust regulatory state, but I agree with large parts of his critique. If you worry about why work associated with women is denigrated to second-class work and why women are underpaid relative to men, you have to look at why dental hygienists do the same work as dentists for less pay and prestige. If you worry about the carceral state, our policy of putting the maximum number of people within the criminal disciplinary net, and the high recidivism and subsequent lack of mobility that follow, you have to look at the fact that it can be illegal to hire ex-cons as low-level service employees; illegal to give licenses to, and thus hire, ex-cons for things like “barbering, nail technicians, cosmetology and dead animal removal.”
The Dish has always tried to remain friendly to outsider voices and distance itself from the Inside the Beltway closed conversation. In that sense, the most glaring lack in Freddie’s post is a list of who exactly we ought to be reading and engaging but aren’t. Isn’t that the obvious solution? If we’re missing worthy far-left blogospheric voices, who are they?
I made a joke the other day that I wanted to go to a TED conference and read aloud from Nietzsche’s “On Truth and Lies in a Non-Moral Sense,” and then Andrew Sullivan comes along and drops this on me. It fits what I was thinking exactly. I wonder, often, if there has been a period of greater intellectual arrogance than the one I live in. Of course there has been; there was a time before the modern critique, and, you know, Aristotle had already figured it all out. But it’s hard to keep that perspective in our current time. I have watched, with half-horror and half-bemusement, the rise of what we might call human achievement yuppies. They are all over: the techno-utopians, the market fetishists, the hipster teleologists, the neo-Aristotelians, “the Secret” devotees and similar cultists, the prosperity gospel evangelists, the proponents of various self-help books, the lifehackers, the starry-eyed socialists, the evolutionary optimists, the scientism proselytizers, the policy wonks, the personal virtue republicans…. Incidentally, were I in charge, I would hold the TED conferences or similar in a Brazilian favela, or a village in Haiti or Somalia. I don’t think you can meaningfully come to understand human progress without understanding the depths of human misery; a consideration of the human endeavor that weighs only the progress and none of those who have been progressed upon is a work of fantasy.
This is all one of the reasons, among many, that I find the constant invective against the postmodern turn in the academy so strange. Postmodernism is not and has never been a powerful force in the world; for how could it stand against the dueling certainties and totalizing ideologies that we have never fallen out of love with? For my part, personally, I distrust those who think of practicality as a cardinal virtue, who believe our experience finally represents a series of problems to be solved, who think that efficiency is to be pursued in all elements of human achievement, who think that living is something that can be done better or worse. I’ll favor those who take as their goals to be beautiful, to be moral, and to be happy. There is no insult in this, only the expression of personal preference. Such things are personal, and if you’ll forgive me, unspeakable.
There are many things that I call myself. The one that I think is the most accurate and the most important has always been “skeptic,” but I’ve rarely used it. I rarely use it because of what most other self-identified skeptics have made of it: when most people hear of skeptics, they think of people who are deeply dismissive of the existence of Bigfoot (and isn’t that a courageous stance), but who are entirely credulous towards the power of human cognition. You might think of Penn Jillette, the living smirk, who has a massive and showy disdain for people who believe anything that fails to meet his evaluative criteria, and yet subjects his own ability to accurately understand the universe around him to no such scrutiny. This is the kind of skeptic that Sam Harris is: he is skeptical of competing claims of truth and accuracy, but not of his own capacity to judge, nor of the human capacity to create intellectual structures that make that judging correct. Certainly, this is what the edifice of modern skepticism represents: a skepticism that first flatters the intellect of the skeptic in question, and the human mind in general.
I’ve always felt that the kind of skepticism that is most valuable, that is to our pragmatic benefit, is the skepticism that begins the skeptical enterprise at the human mind, the classical Greek skepticism that regarded any real certainty as dogmatism. Not because it is true, or even because it is superior, but because epistemological modesty seems to me to be an entirely underappreciated tool for the practical prosecution of our lives and our arguments. You can of course read a vast array of literature making this same point, from people far smarter and more skilled in argument than I am. You can read people like Sextus Empiricus, the Buddha, David Hume, George Berkeley, Nietzsche, Jacques Derrida, Richard Rorty…. Not because they are gurus who will point you towards truth, but because what they have to say may help you along your way.
For me, I would merely put it this way: that we do not encounter the physical universe unmediated but through a consciousness mechanism and sensory inputs that seem to be the products of evolution. And belief (however you want to define a belief) in evolution makes the likelihood of those consciousness and sensory mechanisms being capable, no matter how long the time scale, of perfectly or non-contingently ordering the universe around us seem quite low. Evolution does not produce perfectly fit systems; it only eliminates those systems so unfit that they prevent survival and the propagation of genetic material. A chimpanzee’s intellect is a near-miracle, capable of incredible things, but it will never understand calculus. I could never and would never say this with deductive certainty, but it seems likely to me that our consciousness has similar limitations.
They tell me that the Copernican revolution and the rise of evolution have permanently altered the place of humanity in the human mind. They say that the collapse of the Ptolemaic worldview towards a vision of our planet and our sun as existing amidst a sea of stars of incomprehensible vastness has destroyed our arrogant notion that our planet is special. They tell me that evolution has destroyed any belief in divine creation and with it the notion that humanity is anything other than an animal species. And they say all of this from the position of didacticism and superiority, weaving it into a self-aggrandizing narrative about how these skeptics are the ones who are capable of looking at the uncomfortable truths of the world and not flinching.
Now, far be it from me to diss Nietzschean perspectivism (I am, after all, on record as being an intractable opponent of the Invisible Eye), but I think Freddie overplays his hand here. Contingent minds merely undermine the necessity of our being able to comprehend the world (a necessity that the faithful take quite seriously, as an old Dominican friar once explained to me); they leave open, however, the possibility of contingent minds that “just happen” to be of the sort that can make sense of the universe in which they happen to be located. Nevertheless, Freddie is right about one thing: once we eliminate necessity, we need reasons to think that our minds are of the right sort; after all, the humble Giraffe is well adapted to its environment, but will never come to understand particle physics or the workings of its own neurophysiology. How are we to know that we are not like Giraffes, only with considerably wider possible-knowledge horizons?
A simple response is that we haven’t failed yet. The theories we build in order to explain the universe around us are remarkably, even distressingly successful. Even stranger than their success is the methodology with which we go about building them. As Christopher Norris has beautifully documented, the positivist fairy-tale of open-minded scientists accumulating measurable evidence, making conjectures based on that evidence, and then seeking to refute those conjectures does not well describe the actual way that scientists operate. In fact, the process is a good deal more deductive — the vast majority of working scientists begin by assuming scientific realism, then asking what underlying, noumenal features of the world might lead to the kind of evidence that we observe, then building a theory concerning what other kinds of evidence these noumena might produce, then seeking confirmatory and disconfirmatory evidence.
If the world were actually non-objective, or even objectively real but of a kind that was inaccessible to our contingent reason, what would be the odds of this extraordinarily arrogant and presumptive process working — not just once, but over and over again, throughout human history? If mathematics were formalist or something akin to a logical game, then why would it be the case that sets of “toy” axioms rapidly turn out to be trivial or contradictory, while the axioms that seem to best model the world churn out theory after theory of incredible richness, stopping just barely short of the power to prove their own consistency (which, by Gödel’s second theorem, would render them inconsistent)? Finally, why on earth do our mathematical theories and our scientific theories work so eerily well together? Why does Wigner’s “unreasonable effectiveness” exist?
Let us return to the giraffes! There is no evolutionary pressure toward having minds that can figure out U(1) x SU(2) x SU(3) symmetry, or why it is that the spin of an electron has to be what it is (also due to symmetry constraints). Freddie might reply that the ability to perform the kind of abstraction and symbolic thinking that is useful when figuring out how to hunt or how migration patterns work leads very naturally to the kind of abstraction required to figure out particle physics, but I think this is missing the point. The question is why fundamental physics is amenable to this kind of abstraction, and why minds of our kind happen to be in a universe of this kind. The alternative is not necessarily chaos.
I’ve occasionally been fond of saying that physics might be hopeless. Recall that a giraffe is well adapted to its environment, but will never figure out the fundamental properties of the universe. Similarly, physics could be trivial — it would be if we were supermen with superbrains.
If I understand him, Freddie’s central claim is something like this:
Freddie’s anti-dogma: If geocentrism and/or creationism are false, then there is no objective knowing.
This puzzling proposition implies that if anything is objectively known, geocentrism or creationism (or something like that) must be true! Quite the package deal. But why think that theories about the location of Earth relative to other celestial bodies, or about the origins of plants and animals, imply anything at all about the possibility of objective knowledge?
To imagine that the possibility of objectivity must have something to do with the cosmic centrality or special creation of humanity is simply to accept the upshot of theologies Freddie claims skeptically to reject. He appears to believe that to hold educated opinions about astronomy and biology while refusing to accept the theologian’s conditional proposition that if we aren’t special, then we don’t know, amounts to some kind of failure of epistemic consistency. But, really, Freddie’s own skepticism seems never to have taken flight. He thinks the theologians are right about what good astronomy and biology imply. But why?
All Freddie’s alleged “last dogma” amounts to is acceptance of the mundane fact that the ability of human beings to form justified true beliefs about the external world has nothing much to do with discarded theories about our location in outer space and the origins of species.
It is true that this planet revolves around a star in one of the spiral arms of the Milky Way. It is true that we humans are descended from apes. And it is true that we know all that, and a whole lot more.
A minor kvetch: Normally it’s creationists, not people who understand evolutionary theory well, that one finds using phrases like “the directionless and random process of evolution,” but I’ll assume he means something like “unguided and underdetermined.” My bigger problem is that I don’t think Freddie’s picture fully appreciates how incoherent and useless the idea of a transcendent objectivity really is. The implicit account here seems to be that, after all, we might hope we had these divine immaterial minds capable of directly apprehending truth, and then we might have a firm foundation for objective knowledge, but alas we’re stuck with these electrified meatsacks whose chief virtue was to make our grandparents relatively good at staying fed and shagging.
The thing is, this turns out to make no difference at all for the underlying epistemic problem. God or whatever other transcendent sources of certainty we might posit just serve as baffles to conceal the ineradicable circularity that’s going to sit at the bottom of any system of knowledge. You’re always ultimately going to have a process of belief formation whose reliability can only be vouchsafed in terms of the internal criteria of that very process. Calling it a divinely endowed rational faculty rather than an adaptive complex of truth-tracking modules doesn’t actually change the structure of it any.
If your background assumption or expectation is that certain and objective knowledge requires some kind of transcendent anchor, then it might look as though a view on which our rational faculties are naturalized cuts the tether and leaves our epistemology unmoored. This may seem like a big problem—just as someone who believes our lives are meaningful in virtue of Earth’s position at the center of the universe might think Copernicus is a big problem. But if you have a view that recognizes that the transcendent anchor wouldn’t actually do you any good, or make any epistemic difference, even if it were available, then you’re in a different boat. You’re not falling short of “objectivity” or “certainty,” because these terms have no coherent meaning except within the frame of reference provided by the brains and deductive practices we’re stuck with. If you merely wound the idea of transcendent objective knowing, you conclude that all we’ve got is our plural subjectivities. But if you kill it and really burn the corpse, you realize that picture of “objective knowledge” is a meaningless phantom. (Like the proverbial amplifier that goes to 11: It seems like something extra, but all you’ve done is relabeled the peak volume.) In that case, we’re still eligible for “objective knowledge” in the only sense in which the phrase was ever intelligible—which is a coherentist sense.
If this seems a little abstract, consider specifically the argument that “we do not encounter the physical universe unmediated but through a consciousness mechanism and sensory inputs.” This sounds like a limitation—like there’s an ideally clear picture of how things are, and all we’ve got is this filtered version. Except, what could it possibly mean to “encounter the physical universe unmediated”? Nothing. Well, maybe a brain hitting a rock—but if by “encounter” we mean “form representations of and beliefs about,” that has to be “mediated” in the minimum sense that some process or other correlates mind states and world states somehow. But if there really is no timeless frame of reference, then the only sense in which it’s at all coherent to talk about knowledge and certainty is internal to an epistemic system. There is nothing transcendent to lose—all we could ever have meant by “truth” or “knowledge” all along, if we were succeeding at meaning anything, was the domesticated local version. Just click your heels—you had the power to go home all along.
The collective reading comprehension of the Internet is as sharp as ever, and so I am writing a reply to some of my tired and predictable critics in the comments of my recent post on skepticism.
The most repeated and yet least defensible claim is the hoary old charge of self-refutation. This trope is evergreen, it appears. Many commenters are taking the tack, “you are saying with certainty that you can’t have certainty!” or “you are saying without doubt that we must always have doubt!” or some such. I really have a hard time knowing how to address this failure of reading comprehension: I defy anyone, really, to find a single statement in that post that is expressed in a way that declares itself certain, lacking doubt, atemporal, non-contingent or objective. Take your time; I’ll wait. I don’t think you’re going to find anything. I am quite disciplined on this subject; I’ve done this dance before. To the point of distraction, I point out the contingent and subjective nature of my own claims, but I have to, because even having done so, you get this same old insistence that I am being certain about uncertainty. I’m not. Please, if it really is unclear from all of the verbiage that I expended on this issue: there is no position or idea that I expressed within that post that I intended as objective, certain, indubitable, atemporal, or non-contingent.
That I was so careful on that score, and yet people still launched into the boring old self-refutation gambit– and it is boring; despite the fact that so many commenters insist on thinking that they have cracked some kind of code, it is literally ancient, Plato having made a version of it– reveals, I think, a tendency I see more and more on the Internet: there is a large crowd of readers and commenters who read entirely through a kind of reverse shorthand, where they take any post that vaguely resembles a post they’ve read somewhere else, and respond to it as though it were that earlier post. So John Q. Commenter says, “Aha! I remember someone once saying, ‘I am certain there is no such thing as certainty,’ and boy didn’t I give it to that guy in the comments! To the Batmobile!” Well, I’m sorry folks, but you’ve got to work a little harder than that. Saying over and over that I was expressing certainty doesn’t change the fact that I intended no such thing.
[…]
To those who say that I am not disagreeing with Harris, I’m a bit confused: here I am, disagreeing with him. Harris claims that, despite uncertainty and a multiplicity of moral actions, we can identify actions or statements as objectively moral or immoral. I don’t believe in transcendent morality of any kind. Morality, to my lights, is best thought of as an agreement between people, which is therefore never certain, timeless, or transcendent. I think it is to our practical benefit to act as though there is no moral value that transcends limited human agreement. Which means, yes, I am incapable of saying that the Taliban is objectively or certainly of inferior moral value to the Dalai Lama. And if you’d like to haul out the high school debating team tactic, no, I can’t say that Hitler, the Holocaust or Nazism are permanently, objectively and non-contingently evil in some transcendent way.
That doesn’t mean that I don’t consider them evil, or that I can’t fight them, or that my feelings towards Nazism and the obligation to fight it are any less passionate or committed. Not at all. It merely means that I find the genesis of that opposition and that passion to be within the subjective framework of my own life. This is part of the problem again: people insist that saying, for example, that scientific truth is socially constructed represents some great insult to science, but it would only be an insult if you maintained belief in a transcendent truth that socially constructed truth could be compared to. I don’t. From my perspective, use-based visions of truth are actually more respectful of science, because science is fantastically useful.
Freddie’s and Led’s challenge still warrants investigation, however, and today is a particularly fruitful day on which to consider it. Today is Great and Holy Saturday, when our thoughts are drawn to the small band of disciples who along with Mary gathered outside the tomb of Christ, waiting and hoping for the resurrection of the Lord, their presence motivated by nothing more than a promise. What Freddie and Led have nicely pointed out is that mathematics and science are based on a similar kind of promise.
I recall another Saturday, several years ago, when I was in college and trying to decide whether to take the plunge and become a math major. Late that night, I ran into an inebriated grad student who, as it happens, was writing his dissertation on non-foundational set theory. The two of us chatted, and I explained my dilemma. His first question was blunt, in the manner of mathematicians: “Are you smart enough?”
“I think so,” I replied, “it seems like most of the hopefuls get weeded out by the first class that requires them to do abstract proofs, and I have no trouble with that, so I should be fine, right?”
He smiled drunkenly and shook his head. “No, proving things is the easy part.”
He was right, of course: the difficulty of proof pales in comparison to the difficulty of stating what you wish to prove. Mathematicians since well before Hardy have been publishing paeans to proof as a creative and intuitive process, but trying to determine which questions are mathematically interesting is a far more daring act. The aesthetic and analytic faculties must operate in full concert, fueled by the belief that what seems like it should be true actually is true… and provable.
Mathematicians have struggled with these doubts ever since Gödel showed that not all that is true is provable and, more importantly, since Matiyasevich and Chaitin showed that many interesting true statements are unprovable, rather than just Gödel’s artificial corner cases. Setting problems and working as a mathematician, however, requires a further faith — a faith in the overall coherence of mathematics and in our ability to apprehend it.
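For readers who want that claim pinned down, here is a rough, informal rendering of Gödel’s first incompleteness theorem. The hypotheses (consistency, recursive axiomatizability, enough arithmetic) do real work, so treat this as a sketch rather than a precise statement:

\[
T \ \text{consistent, recursively axiomatizable, and containing enough arithmetic}
\;\Longrightarrow\;
\exists\, \sigma \ \text{such that} \ T \nvdash \sigma \ \text{and} \ T \nvdash \lnot\sigma .
\]

The sentence \(\sigma\) constructed in the proof is true in the standard model of arithmetic, which is the sense in which something true escapes proof.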
I think the proper scientific analogue is nicely raised in Max Tegmark’s excellent paper on neo-Platonism. In order to work, the physicist must believe that we do not reside within the “physics doomed” quadrant of the diagram on page 12 of that paper. The point is that physics and mathematics are both epistemologically daring activities. I’ll hasten to add that this in no way implies the truth or validity of the particularly bold prior commitments that the physicist and the mathematician hold, consciously or unconsciously. Freddie and Led have done us a service by reminding us of just how non-foundational these enterprises are. They rest on strong basic beliefs about the nature of the universe and the nature of our minds.
The inevitable response, and one that I expect to see in the comments, is that philosophers of physics and philosophers of mathematics have come up with systems within which these activities make sense even if they are divorced from Truth. Some of these systems even give explanations for the observed coherence, consistency, and success of these fields without making any appeal to correspondence with reality.
This is entirely true, and I won’t contest it. What I will say is that however successful these systems are philosophically, they are laughably out of line with the psychology of actual, practicing mathematicians and scientists. Anecdotally, I have never met a mathematician who, when asked what he does for a living, says: “I shuffle formal symbols in arbitrary patterns that are internally consistent and make sense to me.” Nor have I met a physicist who would reply: “I make tautological statements about internal questions related to the socially constructed version of reality that I’ve received.”
Questions persist, for me. I have always found and continue to find inductive or consequentialist justifications for objectivist truth frameworks kind of intuitively odd. Will Wilson’s response has met with praise, and justly so. I do want to say something, though, regarding the intellectual prowess of giraffes. Will says,
the humble Giraffe is well adapted to its environment, but will never come to understand particle physics or the workings of its own neurophysiology. How are we to know that we are not like Giraffes, only with considerably wider possible-knowledge horizons? A simple response is that we haven’t failed yet. The theories we build in order to explain the universe around us are remarkably, even distressingly successful…. Let us return to the giraffes! There is no evolutionary pressure toward having minds that can figure out U(1) x SU(2) x SU(3) symmetry, or why it is that the spin of an electron has to be what it is (also due to symmetry constraints).
There’s something we need to add here, though: not only does the giraffe not know how to understand electron spin; it does not know that there is such a thing as not knowing how to understand electron spin. It’s not just that the giraffe can’t answer the question, but that its limited consciousness is incapable of realizing that such a question might be posed. What might be the case, but we can’t know, is that there are problems that we are similarly unaware of. If you’ll forgive me for invoking Donald Rumsfeld, there are known unknowns– the reconciliation of relativistic gravity with quantum mechanics; the Riemann hypothesis– but there might also be unknown unknowns, things that we don’t know we don’t know. If this were true, it would undercut what Will is saying; it shouldn’t surprise us that with time we solve the problems we apprehend, but it also shouldn’t surprise us if there are questions we aren’t even aware are questions. (You can add a “yet” to the end of that, if you’re inclined.)
Is this deductively compelling? Of course not. I don’t expect to convince anyone of anything with such a thought experiment, particularly people of a more hard-nosed disposition. Such questions would have to exist to be a compelling argument against Will’s inductive attitude towards human knowledge, and of course, we won’t know them until we know them, and then we might start solving them. I’m not asking anyone to take them on faith and decide anything. I just think the question is interesting. You’d be surprised, I think, at the amount of rigor you can maintain, even on the level of intellectual play, after you have let go of the idea that you have to prove everything to a particular level of deductive satisfaction.
Now, you could accuse me here of having the kind of theology-echoing considerations that I was criticizing before– for where could these questions lie if not in the human mind? (When I echoed Sartre in saying that, if everyone believed in fascism, fascism would be the truth of man, a commenter took me to mean that I thought morality was a matter of majority rule. I meant it in a more simple way than that: when people say that there would still be an anti-fascist morality that exists independent of the fact that everyone in the world supported fascism, I am wondering literally where that morality could be said to reside.) What I would say (and, trust me, this is all conjectural) is that the questions that we don’t know we aren’t asking wouldn’t exist until we discover them, but that the possibility that they could be discovered would be enough to trouble Will’s point. If this is confusing to you, you’re not alone, and I’d love to hear ideas in the comments.
Freddie’s post is filled with pleas for understanding and reading in good faith. Freddie would, I think, respond by saying that for him, adopting the notion that we can understand each other and can self-disclose in not totally arbitrary and (at least) somewhat meaningful ways is a useful way to proceed.
But if that is his response (and again I’m guessing here, he would know better than I), then why wouldn’t a commenter who by Freddie’s lights is a “bad” reader simply respond by saying something to the effect of:
“What you, Freddie, call reading/commenting in bad faith is what I take to be a useful way of proceeding, along with apparently quite a few others, given your response to the comments.”
Just to be clear, I agree with Freddie that many of his responders rather ignorantly misread him (at best) or, at worst, simply fired their preconceived views at him.
I don’t think it fair to call all forms of pluralism relativism. Pluralism can be true pluralism, with humility and some skepticism but nevertheless the ability to make choices and stand for them in the world. I think Freddie represents this position quite articulately. It’s not in the end my position, but I can appreciate those who hold it genuinely, as I believe Freddie does.
Still, I don’t think Freddie has grounds to ask for better reading/commenting from his interlocutors, given the admittedly subjectivist orientation of his position. He certainly wants to have such grounds for his criticism, but I don’t know where that ground is located from within his worldview. I think he’s wanting to have it both ways, and I’m not sure he can legitimately do so and still hold true to his position.
But then again you already should have guessed that given that I said I don’t share his view.
Morality and science operate in very different ways. In science, our judgments are ultimately grounded in data; when it comes to values we have no such recourse. If I believe in the Big Bang model and you believe in the Steady State cosmology, I can point to the successful predictions of the cosmic background radiation, light element nucleosynthesis, evolution of large-scale structure, and so on. Eventually you would either agree or be relegated to crackpot status. But what if I believe that the highest moral good is to be found in the autonomy of the individual, while you believe that the highest good is to maximize the utility of some societal group? What are the data we can point to in order to adjudicate this disagreement? We might use empirical means to measure whether one preference or the other leads to systems that give people more successful lives on some particular scale — but that’s presuming the answer, not deriving it. Who decides what is a successful life? It’s ultimately a personal choice, not an objective truth to be found simply by looking closely at the world. How are we to balance individual rights against the collective good? You can do all the experiments you like and never find an answer to that question.
Harris is doing exactly what Hume warned against, in a move that is at least as old as Plato: he’s noticing that most people are, as a matter of empirical fact, more concerned about the fate of primates than the fate of insects, and taking that as evidence that we ought to be more concerned about them; that it is morally correct to have those feelings. But that’s a non sequitur. After all, not everyone is all that concerned about the happiness and suffering of primates, or even of other human beings; some people take pleasure in torturing them. And even if they didn’t, again, so what? We are simply stating facts about how human beings feel, from which we have no warrant whatsoever to conclude things about how they should feel.
Attempts to derive ought from is are like attempts to reach an odd number by adding together even numbers. If someone claims that they’ve done it, you don’t have to check their math; you know that they’ve made a mistake. Or, to choose a different mathematical analogy, any particular judgment about right and wrong is like Euclid’s parallel postulate in geometry; there is not a unique choice that is compatible with the other axioms, and different choices could in principle give different interesting moral philosophies.
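The arithmetic half of that analogy can be spelled out in a single line; this is only an illustration of the parity point, nothing more:

\[
2m_1 + 2m_2 + \cdots + 2m_k = 2\,(m_1 + m_2 + \cdots + m_k),
\]

so any finite sum of even numbers is itself even and can never land on an odd number. The claim is that stacking purely descriptive premises is barred, by their very form, from yielding a prescriptive conclusion in just the same way.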
A big part of the temptation to insist that moral judgments are objectively true is that we would like to have justification for arguing against what we see as moral outrages when they occur. But there’s no reason why we can’t be judgmental and firm in our personal convictions, even if we are honest that those convictions don’t have the same status as objective laws of nature. In the real world, when we disagree with someone else’s moral judgments, we try to persuade them to see things our way; if that fails, we may (as a society) resort to more dramatic measures like throwing them in jail. But our ability to persuade others that they are being immoral is completely unaffected — and indeed, may even be hindered — by pretending that our version of morality is objectively true. In the end, we will always be appealing to their own moral senses, which may or may not coincide with ours.
The unfortunate part of this is that Harris says a lot of true and interesting things, and threatens to undermine the power of his argument by insisting on the objectivity of moral judgments. There are not objective moral truths (where “objective” means “existing independently of human invention”), but there are real human beings with complex sets of preferences. What we call “morality” is an outgrowth of the interplay of those preferences with the world around us, and in particular with other human beings. The project of moral philosophy is to make sense of our preferences, to try to make them logically consistent, to reconcile them with the preferences of others and the realities of our environments, and to discover how to fulfill them most efficiently. Science can be extremely helpful, even crucial, in that task. We live in a universe governed by natural laws, and it makes all the sense in the world to think that a clear understanding of those laws will be useful in helping us live our lives — for example, when it comes to abortion or gay marriage. When Harris talks about how people can reach different states of happiness, or how societies can become more successful, the relevance of science to these goals is absolutely real and worth stressing.
Which is why it’s a shame to get the whole thing off on the wrong foot by insisting that values are simply a particular version of empirical facts. When people share values, facts can be very helpful to them in advancing their goals. But when they don’t share values, there’s no way to show that one of the parties is “objectively wrong.” And when you start thinking that there is, a whole set of dangerous mistakes begins to threaten. It’s okay to admit that values can’t be derived from facts — science is great, but it’s not the only thing in the world.
I wonder how Carroll would react if I breezily dismissed his physics with a reference to something Robert Oppenheimer once wrote, on the assumption that it was now an unmovable object around which all future human thought must flow. Happily, that’s not how physics works. But neither is it how philosophy works. Frankly, it’s not how anything that works, works.
Carroll appears to be confused about the foundations of human knowledge. For instance, he clearly misunderstands the relationship between scientific truth and scientific consensus. He imagines that scientific consensus signifies the existence of scientific truth (while scientific controversy just means that there is more work to be done). And yet, he takes moral controversy to mean that there is no such thing as moral truth (while moral consensus just means that people are deeply conditioned for certain preferences). This is a double standard that I pointed out in my talk, and it clearly rigs the game against moral truth. The deeper issue, however, is that truth has nothing, in principle, to do with consensus: It is, after all, quite possible for everyone to be wrong, or for one lone person to be right. Consensus is surely a guide to discovering what is going on in the world, but that is all that it is. Its presence or absence in no way constrains what may or may not be true.
Strangely, Carroll also imagines that there is greater consensus about scientific truth than about moral truth. Taking humanity as a whole, I am quite certain that he is mistaken about this. There is no question that there is a greater consensus that cruelty is generally wrong (a common moral intuition) than that the passage of time varies with velocity (special relativity) or that humans and lobsters share an ancestor (evolution). Needless to say, I’m not inclined to make too much of this consensus, but it is worth noting that scientists like Carroll imagine far more moral diversity than actually exists. While certain people believe some very weird things about morality, principles like the Golden Rule are very widely subscribed to. If we wanted to ground the epistemology of science on democratic principles, as Carroll suggests we might, the science of morality would have an impressive head start over the science of physics. [1]
The real problem, however, is that critics like Carroll think that there is no deep intellectual or moral issue here to worry about. Carroll encourages us to just admit that a universal conception of human values is a pipe dream. Thereafter, those of us who want to make life on earth better, or at least not worse, can happily collaborate, knowing all the while that we are seeking to further our merely provincial, culturally constructed notions of moral goodness. Once we have our values in hand, and cease to worry about their relationship to the Truth, science can help us get what we want out of life.
There are many things wrong with this approach. The deepest problem is that it strikes me as patently mistaken about the nature of reality and about what we can reasonably mean by words like “good,” “bad,” “right,” and “wrong.” In fact, I believe that we can know, through reason alone, that consciousness is the only intelligible domain of value. What’s the alternative? Imagine some genius comes forward and says, “I have found a source of value/morality that has absolutely nothing to do with the (actual or potential) experience of conscious beings.” Take a moment to think about what this claim actually means. Here’s the problem: whatever this person has found cannot, by definition, be of interest to anyone (in this life or in any other). Put this thing in a box, and what you have in that box is—again, by definition—the least interesting thing in the universe.
So how much time should we spend worrying about such a transcendent source of value? I think the time I will spend typing this sentence is already far too much. All other notions of value will bear some relationship to the actual or potential experience of conscious beings. So my claim that consciousness is the basis of values does not appear to me to be an arbitrary starting point.
At bottom, the issue is this: there exist real moral questions that no amount of empirical research alone will help us solve. If you think that it’s immoral to eat meat, and I think it’s perfectly okay, neither one of us is making a mistake, in the sense that Fred Hoyle was making a mistake when he believed that conditions in the universe have been essentially unchanging over time. We’re just starting from different premises.
The crucial point is that the difference between sets of incompatible moral assumptions is not analogous to the difference between believing in the Big Bang vs. believing in the Steady State model; it is analogous to believing in science vs. being a radical epistemological skeptic who claims not to trust their sense data. In the cosmological-models case, we trust that we agree on the underlying norms of science and together we form a functioning community; in the epistemological case, we don’t agree on the underlying assumptions, and we can only hope to agree to disagree and work out social structures that let us live together in peace. None of which means that those of us who do share common moral assumptions shouldn’t set about the hard work of articulating those assumptions and figuring out how to maximize their realization, a project of which science is undoubtedly going to be an important part. Which is what we should have been talking about all along.
The second point I wanted to mention was the justification we might have for passing moral judgments over others. Not to be uncharitable, but it seems that the biggest motivation most people have for insisting that morals can be grounded in facts is that they want it to be true — because if it’s not true, how can we say the Taliban are bad people?
That’s easy: the same way I can say radical epistemological skepticism is wrong. Even if there is no metaphysically certain grounding from which I can rationally argue with a hard-core skeptic or a Taliban supporter, nothing stops me from using the fundamental assumptions that I do accept, and acting accordingly. There is a weird sort of backwards-logic that gets deployed at this juncture: “if you don’t believe that morals are objectively true, you can’t condemn the morality of the Taliban.” Why not? Watch me: “the morality of the Taliban is loathsome and should be resisted.” See? I did it!
The only difference is that I can only present logical reasons to support that conclusion to other members of my morality community who proceed from similar assumptions. For people who don’t, I can’t prove that the Taliban is immoral. But so what? What exactly is the advantage of being in possession of a rigorous empirical argument that the Taliban is immoral? Does anyone think they will be persuaded? How we actually act in the world in the face of things we perceive to be immoral seems to depend in absolutely no way on whether I pretend that morality is grounded in facts about Nature. (Of course there exist people who will argue that the Taliban should be left alone because we shouldn’t pass our parochial Western judgment on their way of life — and I disagree with those people, because we clearly do not share underlying moral assumptions.)
Needless to say, it doesn’t matter what the advantage of a hypothetical objective morality would be — even if the world would be a better place if morals were objective, that doesn’t make it true. That’s the most disappointing part of the whole discussion, to see people purportedly devoted to reason try to concoct arguments in favor of a state of affairs because they want it to be true, rather than because it is.
Harris does have some thoughtful things to say in this lecture. He makes a strong case for moral reasoning (though I would say it conflicts with his main thesis that morality is empirical). And he makes the important point that we do have the right to judge other people’s moral practices, such as shame killings. But ultimately his animus for religion drives him to illogical conclusions, such as the notion that what the Taliban lacks is sufficient science about “human flourishing” to make good moral choices. [1] By this standard, how much less moral must the ancient Greeks have been, who knew so much less science than the medieval Arabs. And how much more immoral the nomadic tribes that preceded them throughout Africa and Asia Minor. How immoral that first human couple must have been, in their African Eden!
What this hyper-utilitarianism (which marries Mill with the logical positivists) primarily accomplishes is this. It obviates the need to look reflectively at evil. When evil can be equated with a simple paucity of learning, like a vitamin deficiency, there is no need to look within our own hearts for its seeds. All the world’s darkness can be projected outwards onto people we couldn’t have less in common with. We don’t want to subjugate women; we don’t want to tyrannize innocents; we don’t want to convert the whole globe to our ethos, and exterminate those who resist (wait, scratch that last one). Unfortunately it is far more likely that the opposite is true. Not even all the science of John Faustus can make us good, if we won’t season it with introspection.
But despite often justifiable skepticism about the process, Oscar nominations—one of which, of course, went to Bridges—can’t be bought. Not exactly, anyway. There is a reason why they call the run-up period to the Academy Awards the “Oscar campaign.” It is, to use a familiar analogy, like an election, with an electorate of 5,777 people (the size of McKenzie County, North Dakota), unwilling to be influenced by anything but their own opinions, yet still, perhaps, more swayable than they’d like to admit. There is no war room, per se, but there are early front-runners that fade, grassroots insurgencies, even primaries. Ultimately, most of the nominees emerge from a combination of good planning, good movies, and good luck: Crazy Heart’s distributor, Fox Searchlight, had the smarts first to acquire the film in July, and then, when it sensed an opening in this year’s Best Actor field, to accelerate its release from the spring of 2010 to December. The gambit was shrewd; writer-director Scott Cooper’s small-scale debut, in which Bridges plays a country singer seeking redemption, opened to strong reviews just as some of Bridges’s potential competitors (Nine’s Daniel Day-Lewis and The Lovely Bones’ Mark Wahlberg) were cratering with critics.
That Bridges gives a beautifully lived-in, eminently praiseworthy performance in Crazy Heart wasn’t just important; it was crucial. But it’s also not enough. And that’s where an Oscar campaign comes in. No effort or expense can make any of the Academy’s members vote for an actor, director, or screenplay they don’t like. But what a smart Oscar campaign (like a successful political campaign) can do is to make someone or something part of a larger story, and Crazy Heart had a good one to tell: Bridges, a well-liked frequent nominee who just turned 60, has never won an Oscar. In other words, it’s his turn.
A good Oscar narrative makes voters feel that, by writing a name on a ballot, they’re completing a satisfying plotline. Only a few of these stories are effective, and every campaign season, movies scramble to own them. The best are reused year after year: for example, The Little Movie That Could, the tale of a low-budget indie, a David among studio Goliaths, that often appeals to voters who hate Hollywood’s bigger-is-better aesthetic. Searchlight (which, as an arm of 20th Century Fox, hardly qualifies as an underdog) has worked this for years—first with Little Miss Sunshine, then Juno, then last year’s Slumdog Millionaire. This year, however, that story line was grabbed early by Lionsgate for Lee Daniels’s Precious, which also claimed an appealing narrative for acting contenders, The Cinderella Story, thanks to its first-time star, Gabourey Sidibe.
It’s no accident that all the movies that lead this year’s newly expanded ten-film Best Picture race have seized other, equally useful Oscar-season story lines. There’s always a tussle over The Movie That Speaks to This Moment: Last year, Milk had it from the day Proposition 8 passed in California, and this year, Jason Reitman’s layoff-era tragicomedy Up in the Air edged out both The Hurt Locker (Look! Finally, an Iraq War movie that works!) and Avatar (Look! It’s antiwar and pro-environment!) for that label. Each of those movies also boasts an Oscar narrative: Kathryn Bigelow could be the first woman to win the Best Director Oscar, giving The Hurt Locker The Chance to Make History, while Avatar gets to be The Big Gamble That Paid Off as well as, of course, The Popular Favorite.
But enough with the cahiers du cinéma. Who’s going to win Best Picture? Among Oscar touts, the consensus is that it’ll be one of the two top nomination-garnerers, with “Avatar” the heavy favorite. Brandon Gray, at boxofficemojo.com, writes that “good box office has historically been key to winning Best Picture, which usually goes to the movie with the first or second highest gross among the nominees: that would favor ‘Avatar’ over ‘The Hurt Locker.’ ” Given that the latter’s gross is the second lowest among the ten nominees, amounting to less than one per cent of the former’s, you can say that again.
Even so, there is a distinct possibility of an upset. To understand why requires drilling down into the mechanics of voting systems. It’ll only hurt for a minute. From 1946 until last year, the voting worked the way Americans are most familiar with. Five pictures were nominated. If you were a member of the Academy, you put an “X” next to the name of your favorite. The picture with the most votes won. Nice and simple, though it did mean that a movie could win even if a solid majority of the eligible voters—in theory, as many as seventy-nine per cent of them—didn’t like it. Those legendary PricewaterhouseCoopers accountants don’t release the totals, but this or something like it has to have happened in the past, probably many times.
This year, the Best Picture list was expanded, partly to make sure that at least a couple of blockbusters would be on it. (The biggest grosser of 2008, “The Dark Knight,” was one of the better Batman adventures, but it didn’t make the cut.) To forestall a victory for some cinematic George Wallace or Ross Perot, the Academy switched to a different system. Members—there are around fifty-eight hundred of them—are being asked to rank their choices from one to ten. In the unlikely event that a picture gets an outright majority of first-choice votes, the counting’s over. If not, the last-place finisher is dropped and its voters’ second choices are distributed among the movies still in the running. If there’s still no majority, the second-to-last-place finisher gets eliminated, and its voters’ second (or third) choices are counted. And so on, until one of the nominees goes over fifty per cent.
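For readers who want the mechanics spelled out, here is a minimal sketch of that counting procedure in Python. It assumes each ballot is simply an ordered list of nominee titles; the toy ballots and the crude handling of last-place ties are illustrative only, not the Academy’s (or PricewaterhouseCoopers’s) actual tabulation.

from collections import Counter

def instant_runoff(ballots):
    """Return the winner under the preferential count described above."""
    eliminated = set()
    while True:
        # Each ballot counts for its highest-ranked nominee still in the running.
        tallies = Counter()
        for ballot in ballots:
            for choice in ballot:
                if choice not in eliminated:
                    tallies[choice] += 1
                    break
        total = sum(tallies.values())
        leader, leader_votes = tallies.most_common(1)[0]
        # Stop once someone clears fifty per cent (or only one nominee is left).
        if leader_votes * 2 > total or len(tallies) == 1:
            return leader
        # Otherwise drop the last-place finisher and redistribute its ballots.
        eliminated.add(min(tallies, key=tallies.get))

# Five toy ballots: "Up in the Air" is dropped first, and its supporter's
# second choice pushes "The Hurt Locker" past fifty per cent.
ballots = [
    ["Avatar", "The Hurt Locker", "Up in the Air"],
    ["Avatar", "Up in the Air", "The Hurt Locker"],
    ["The Hurt Locker", "Up in the Air", "Avatar"],
    ["The Hurt Locker", "Avatar", "Up in the Air"],
    ["Up in the Air", "The Hurt Locker", "Avatar"],
]
print(instant_runoff(ballots))  # The Hurt Locker

A real count would handle exhausted ballots and ties among last-place finishers more carefully; this sketch simply drops one of the tied nominees arbitrarily.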
This scheme, known as preference voting or instant-runoff voting, doesn’t necessarily get you the movie (or the candidate) with the most committed supporters, but it does get you a winner that a majority can at least countenance. It favors consensus. Now here’s why it may also favor “The Hurt Locker.” A lot of people like “Avatar,” obviously, but a lot don’t—too cold, too formulaic, too computerized, too derivative. (Remember “Dances with Wolves”? “Jurassic Park”? Everything by Hayao Miyazaki?) “Avatar” is polarizing. So is James Cameron. He may have fattened the bank accounts of a sizable bloc of Academy members—some three thousand people drew “Avatar” paychecks—but that doesn’t mean that they all long to recrown him king of the world. (As he has admitted, his people skills aren’t the best.) These factors could push “Avatar” toward the bottom of many a ranked-choice ballot.
I’m rooting for a Hurt Locker win as much as anyone this side of Jeremy Renner. I hated Avatar from top to bottom, beginning to end. If it wins, the industry will only have ratified Cameron’s cynical conceit that dialogue, spontaneity, individual performances, narrative ingenuity, and pretty much every other cinematic virtue may be sacrificed without cost on the altar of CGI thaumaturgy. But I still find this particular upset—if it can be called that now—hard to envision. (I am on board with the conventional wisdom, though, that Best Director is Bigelow’s to lose.)
Avatar fed a lot of mouths in Hollywood this winter, and it was the prohibitive favorite for a good long while. Cameron, moreover, has been viewed as a game-changing cinematic visionary ever since his billion-dollar Oscar boat Titanic. (For a sense of how Hollywood kowtows to him, take a look at this Vulture piece revealing that the Academy abruptly disinvited Sacha Baron Cohen from presenting at the weekend’s ceremony out of fear that his planned Avatar sketch with Ben Stiller might offend the famously thin-skinned director.) The Hurt Locker, by contrast, was seen by almost no one (its $12.7 million domestic gross was less than one-fiftieth of Avatar’s) and, until its recent run, was barely on the radar as a serious contender to win. (Jason Reitman’s Up in the Air was initially perceived as the strongest challenger to Avatar’s award-season hegemony.)
Still more relevant to this year’s race, though, may be the fact that, like many large, hidebound institutions, the Academy is often a step behind, fighting yesterday’s war today. Had it had the courage to give Brokeback Mountain the nod over Crash in 2005, to cite one example, it might not have felt the need to advertise its enlightenment by crowning Milk’s Sean Penn in 2008 over the feel-good, comeback, sure-to-give-an-awesome-speech, and overwhelmingly deserving Mickey Rourke of The Wrestler. The problem for The Hurt Locker—and any other non-billion-dollar earner this year—is that much as the Academy was criticized after Brokeback for being afraid to reward a “gay” film, following last year’s awards it was chastised for being biased against commercially successful films. Indeed, the Academy took the complaint so much to heart that it tossed aside 65 years of practice and expanded the Best Picture field to ten specifically to ensure that some crowd-pleasers made the cut. Alas, the evidence for the whole the-Academy-hates-blockbusters complaint derived entirely from 2008, when critically acclaimed megahits Wall-E and The Dark Knight were passed over for more modest, high-minded fare. And, as theories based on small sample sizes so often are, the claim was, on its face, completely ludicrous.
It’s true that the consensus-based voting scheme also gives more hope to dark horses like “Inglourious Basterds” and “Up in the Air,” but “Basterds” is probably as polarizing as “Avatar” in its way, and “Up in the Air” has been suffering from a (largely undeserved) backlash for months now. Whereas “Hurt Locker” doesn’t seem to have any real enemies at all — apart from accuracy-minded veterans’ groups, at least, and they don’t get a vote. So I’m picking Kathryn Bigelow’s war movie to win it, and so should you. As for what should win — well, if I had a vote to cast right now, I’d give it to “Inglourious Basterds” instead. It’s a weird choice, I’ll grant you, and I’ll probably change my mind tomorrow. But for now, there it is …
I’m late with these because I’ve been incredibly busy. You can check out an in-depth discussion of the Oscars over at The League between Will, Freddie and me if you’d like. But right now I’m going to give you my head/heart Oscar picks for the major categories.
Best Picture
Head says: The Hurt Locker. It’s basically a toss-up between Avatar and The Hurt Locker — as recently as last week I was saying Avatar would win – but the momentum has all gone toward Kathryn Bigelow’s war picture in the last few weeks. I think it’ll pull off the David vs. Goliath upset.
Heart says: A Serious Man. I think it’s the best picture that the Coens have made since The Big Lebowski. Perfectly pitched black humor; a metaphysical sensibility that works just right; spot-on performances from every single cast member. Like I said: Their best work since The Big Lebowski, easy.
Best Director:
Head says: Kathryn Bigelow will become the first woman to win the best directing Oscar…
Heart says: …and she’ll deserve it. Just a virtuoso performance from the director of Point Break. The slowly mounting tension, the spinning cameras, the feel of dirt and grit and grime. When I talked to her last year, she said that during the set-pieces she’d have several different camera crews — entire crews, not just cameramen — working simultaneously so she could put together shots from the same chaotic setting. Just brilliant work on her end.
Best Actor:
Head says: Jeff Bridges. I thought the work was a little cliched, but the Academy loves the actor and it’s something of a lifetime achievement award.
Heart says: George Clooney. I guess. I wasn’t terribly in love with any of the performances. Jeremy Renner’s pretty good as well. I would have gone with Michael Stuhlbarg in A Serious Man if he had been nominated. A masterfully understated turn.
Best Actress:
Head says: Sandra Bullock. Again, kind of a lifetime achievement award.
Heart says: Carey Mulligan. Loved her in An Education, which was probably a little overrated. Still, she was excellent in it.
Best Supporting Actor and Actress:
Head and Heart both say: Christoph Waltz (Inglourious Basterds) and Mo’Nique (Precious) both will win and deserve to.
An atheist convention! A bunch of people sitting around not being religious! People brought together by their absence of belief in something! Spending money to hear speakers talk to them about how they can better be not-something and not-believe in the not-deity! Several fun-filled days thinking about God because you don’t believe in him and think he’s a jerk! What could it possibly matter to me if my neighbors go to church? What could I possibly feel towards them because of what I don’t feel? How could a genuine atheism compel one towards anger or bitterness? No, what anger exists is anger at the God you say you don’t believe in.
No. Atheism is not a project. It has no purpose. It proceeds towards no end. It has no meaning beyond the simplicity of absence. It has as little negative presence as positive and demands no philosophy. Sam Harris’s life is dominated by religion. It’s what he thinks about; it’s what he writes about; it’s how he pays the bills. He speaks all over the country about religion, he opines on it constantly, denying it is his constant endeavor. His intellectual and philosophical life could hardly be more centered around religion if he were a monk.
Me? I go weeks without thinking about religion or God. And why would I?
With the important qualification that I do spend quite a bit of time pondering the implications of religious belief (to start with, there’s that whole rise of militant Islam business to think about), I have some sympathy for what Freddie is saying, even if I suspect that many of those who have taken the trouble to define themselves as atheists have already spent far more time on this topic than it deserves.
Freddie at L’Hote thinks atheists, despite being right, should sit down and shut up:
[…]
My first response is simply to observe that if you go weeks without thinking about religion or God, I’m curious where you live and who your associates are. Religion interjects itself into my life quite frequently.
Regarding your recent post quoting one atheist claiming bemusement (and, if I’m reading him right, some annoyance) at the apparent contradiction between being an atheist and spending much of one’s time involved in religion, I must say that I find it a little surprising to see this classic accusation of dishonesty coming from an atheist.
The post is startling in how well written it is as compared to how childishly bad his reasoning is. Apparently, once you don’t believe in a deity, any and all earthly concerns about the real, observable effects of religion in the world we all share become irrelevant.
Since Harris does not believe in a god he should not concern himself over the trifling matter of jihadists flying planes into buildings. Since Hitchens is an atheist the murder of teenage girls at the hands of their fundamentalist fathers, brothers and uncles should be of no concern to him. How indifference towards religion should follow from non-belief in religion is not explained, probably because you can’t get there from here.
Later in the post he makes the almost-as-ridiculous claim that, though of course there are people who would like to force their religious views on the rest of us and this must be fought against (gee, I forget, who are the strongest voices against this sort of thing… Sam something, Christopher someone else), the underlying truth of the religious claims on which those policies are formed is irrelevant to the discussion. How someone is supposed to argue against a religiously mandated death penalty for homosexuality without touching the underlying theology and its rationality, he does not say.
Freddie doesn’t care and that’s his right, but if he wants to make the argument that none of the rest of us should care either, he’s going to have to come up with a better argument than that.
Let’s talk tactics, shall we? This emailer with the terrible reading comprehension and I have as a first goal the same thing, which is keeping religious conviction out of politics, science and medicine. The history of the world teaches us that this is best accomplished not through atheism but through religious moderation. This is something many atheists must come to grips with if they are ever going to grow up: religious moderates do a far better job of opposing extremists than atheists do. Look, aside from all of the “American theocracy” hysterics, this country does quite a good job of keeping the secular and the religious separate. There is much work to be done, but this is not Saudi Arabia, it is not Yemen. And why? Not because of atheism, but because of moderate religious people who have worked to divide theology from governance for centuries. When people express incredulity at the idea that people can be practicing religious believers and yet function in a secular society, I wonder what world they live in. Here on Planet Earth, in America, you interact with such people every day. They seem to have no trouble with it whatsoever.
Look to the Muslim world. Indonesia is the largest Muslim country in the world. It has a significant Muslim majority. And yet it also has significant Christian, Hindu and Buddhist minorities that live quite unmolested. Women wear pants, work in public, vote, hold office. Why? Not because some tide of atheism swept through Indonesia, but because of religious moderates embracing Enlightenment values and liberal democracy. I assure you, the large majority of these people are devout. They simply see no conflict between their religious devotion and their participation in civic life. If defeating terrorism or other kinds of religious extremism can come only through the enforcement of atheism– if I am compelled, as this emailer insists, to wish to convert the unfaithful– then the prospects of liberal democracy and Enlightenment values are threatened indeed. Those values defend the religious as well as the areligious.
There is a lot of nonsense in the competing claims of the public face of atheism, but none is more obvious than the mismatch between what it is credulous about and what it is overly skeptical about. Many atheists, presumably like this emailer, have overly skeptical opinions about the ability of most religious believers to balance religious and civic life. Again, you probably know many people who believe, go to church, and yet never think to inject their religion into politics. Balanced against that is a frankly absurd naivete about the power of argument to convince people to abandon God or religion altogether. Which do you think is easier? To convince someone who has religious faith to totally abandon that identity? Or to convince them of the righteousness of dividing it from political life? Elementary human psychology teaches me that the more you attack the fundamental basis for someone’s worldview, the more likely you are to earn violent pushback as a result. If you are a liberal, you don’t try to bring a conservative around on a particular issue by asking him to abandon conservatism altogether. You ask him to reconsider the issue at hand, and you do so in a way that demonstrates respect for that larger overarching belief.
This is not fun. You can’t post a vlog about it on YouTube and get people applauding you for it. You can’t posit that you are one of the few brilliant geniuses in a sea of idiocy by doing it. You can’t come up with all sorts of self-aggrandizing narratives with it. But it is the basic task of liberal democracy and it is the path of adulthood.
The Patriots lost to the undefeated Colts in unbelievable fashion last night. Leading 31-14 in the fourth quarter and 34-21 with 2:30 remaining, the Patriots choked and lost to their hated rivals, 35-34.
So the conference is gone, the playoff bye is probably bye-bye, and the (6-3) Patriots are saddled with a loss that will haunt them for the rest of the season.
And Belichick gets the blame. Too smart for his own good this time. The sin of hubris.
Here’s the situation: With the Patriots leading, 34-28, and 2:08 remaining, Coach Hoodie elected to go for a first down rather than punt when he faced fourth and 2 from his 28-yard line. Guess he was afraid of what Peyton Manning might do.
Tom Brady’s fourth-down pass to Kevin Faulk was complete but inches shy of the first down. So the Colts took over and went 29 yards in four easy plays, winning the game when Manning connected with Reggie Wayne on a laser-like 1-yard pass with an unlucky 13 seconds left on the clock.
Ouch. Bob Kraft’s $9 million federally funded footbridge project just became a bridge over troubled waters.
This game was in the win column. A Stephen Gostkowski field goal with 4:12 left made it 34-21. Unfortunately for New England fans, Belichick elected to play soft defense and Manning quickly had the Colts in the end zone. It was 34-28 with 2:23 left. Then came the tragic set of downs and Belichick’s bold and crushing gamble.
In the postgame confusion, Belichick twice made a reference to the Patriots trying to gain 1 yard.
“I thought we could get that yard,’’ he said.
Asked if he knew the team needed 2 yards, Belichick said that he did. But then he said, “I don’t know how we could not get a yard on that.’’
Brady was simply spectacular in defeat. It was the 2007 Tom. He completed 29 passes for 375 yards and three touchdowns. Ditto for Randy Moss, who caught nine passes for 179 yards and two touchdowns. The Patriots shredded Indy’s depleted secondary, scoring 24 straight points in the first half, then bolting to a 31-14 lead early in the fourth.
I respect Bill Belichick more today than I ever have.
Last night he made a decision in the final minutes that led his team, the New England Patriots, to defeat. It will likely go down as one of the most criticized decisions any coach has ever made. With his team leading by six points and just over two minutes left in the game, he elected to go for it on fourth down on his own side of the field. His offense failed to get the first down, and the Indianapolis Colts promptly drove for a touchdown.
He has been excoriated for the choice he made. Everyone seems to agree it was a terrible blunder.
Here is why I respect Belichick so much. The data suggest that he actually probably did the right thing if his objective was to win the game. Economist David Romer studied years’ worth of data and found that, contrary to conventional wisdom, teams seem to punt way too much. Going for a first down on fourth and short yardage on your own side of the field is likely to increase the chance your team wins (albeit slightly). But Belichick had to know that if it failed, he would be subjected to endless criticism.
Of course, the problem with football — and politics — is that decision-makers are usually judged by the quality of the outcomes rather than the quality of the processes. So, the result in both worlds is often excessive risk-aversion.
And so this blog post might end with absolution for Bill Belichick and a plea for a stronger appreciation for expected-utility analysis. Except life is not that simple.
On that play, it appears that Belichick made the right call. Except that Belichick also did the following:
Called his last two time-outs during the series, thereby removing his ability to challenge a ruling on the field during the crucial play;
Decided, on third down and two, to call a pass play rather than a running play, which would have run more time off the clock and made the fourth down percentages a little easier.
Sooooo… it’s possible to defend Belichick’s call on fourth down as the rational, utility-maximizing decision, but conclude that he committed a series of small blunders that got the Patriots to the point where they had to convert a high-risk, high-reward play.
Question to readers: Looking at the Obama administration’s foreign policy, which move echoes Belichick’s play-calling?
Eventually, an NFL offensive juggernaut might start going for two after each touchdown, but that hasn’t happened yet. Coaches would rather have their players lose the game than the coach lose the game.
One thing to keep in mind, though, when applying two-point conversion rates to fourth-and-two rates is that two-point conversions are usually attempted when the offense is hitting on all eight cylinders, while fourth-and-two attempts are made when the offense is sputtering.
So, all this theorizing is interesting, but you still have to execute on the football field, which the Patriots did not: Brady hit Kevin Faulk, running a pattern where he was coming back toward the line of scrimmage for a three yard gain, but Faulk juggled the ball and didn’t grab it firmly until he was only a yard past the line of scrimmage, turning the ball over to the Colts.
Not surprisingly, Peyton Manning marched them 29 yards for the winning touchdown.
I bow to none in my hatred of Boston sports teams, Boston sports fans, and generally any phrase that involves both “Boston” and “sports.” That said, I’ve always admired Bill Belichick’s willingness to be more aggressive than the average coach in terms of going for it on fourth down. The evidence is pretty overwhelming that most coaches are too conservative about this, and the rest of the NFL ought to take a cue from the fact that it’s the most successful organization in recent NFL history that’s most eager to push the envelope on this.
Last night, however, provided a great example of why most coaches do the wrong thing. New England went for it in a situation when most teams would have punted. And it didn’t work out. If they’d punted, people would be criticizing their defense. But since they went for it, it’s the coach who’s getting criticized. In coaching, like in banking, the safe bet is to make the same mistake as everyone else. But just because a call doesn’t work out 100 percent of the time doesn’t make it the wrong thing. The numbers show pretty clearly that going for it is the smart play.
The second-guessing started with the Belichick post-game press conference and hasn’t stopped since. Maybe it’s the shock of seeing Belichick, widely considered the smartest coach of his generation, have a call backfire. Or maybe it’s the shock of seeing such a rare decision. Whatever. Nobody seems to think he got it right. “Belichick call unrivalled,” writes the Boston Globe’s Dan Shaughnessy–employing gentler language than what, I’m sure, callers into Boston talk shows are using this morning.
But statistics show pretty conclusively that football coaches are far too conservative about fourth down decisions–that they should go for it, rather than punt, far more frequently. Apparently, statistics also show that Belichick made the right call here, notwithstanding what everybody thinks. From the website “Advanced NFL Stats”:
With 2:00 left and the Colts with only one timeout, a successful conversion wins the game for all practical purposes. A 4th and 2 conversion would be successful 60% of the time. Historically, in a situation with 2:00 left and needing a TD to either win or tie, teams get the TD 53% of the time from that field position. The total WP for the 4th down conversion attempt would therefore be:
(0.60 * 1) + (0.40 * (1-0.53)) = 0.79 WP
A punt from the 28 typically nets 38 yards, starting the Colts at their own 34. Teams historically get the TD 30% of the time in that situation. So the punt gives the Pats about a 0.70 WP.
Statistically, the better decision would be to go for it, and by a good amount.
Update: Readers are already writing in to suggest that the pro-Belichick analysis fails to account for the specifics of the situation–it was Peyton Manning at QB, etc. Actually, that’s not true. I just didn’t excerpt those parts. Here’s more from Advanced NFL Stats:
You’d have to expect the Colts had a better than a 30% chance of scoring from their 34, and an accordingly higher chance to score from the Pats’ 28. But any adjustment in their likelihood of scoring from either field position increases the advantage of going for it. You can play with the numbers any way you like, but it’s pretty hard to come up with a realistic combination of numbers that make punting the better option. At best, you could make it a wash.
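For anyone who wants to poke at those figures, here is a minimal sketch of the same arithmetic in Python, using the round numbers quoted above (a 60% conversion rate, and 53% and 30% touchdown rates). The inputs are Advanced NFL Stats’ historical estimates, not settled fact, so treat the script as a calculator rather than a verdict.

# Win-probability arithmetic from the Advanced NFL Stats excerpt above.
# The three inputs are that site's historical estimates; change them to see
# how robust the conclusion is.
p_convert = 0.60          # chance the Patriots convert 4th-and-2
p_td_from_pats_28 = 0.53  # chance the Colts score a TD from the Pats' 28 after a failed attempt
p_td_after_punt = 0.30    # chance the Colts score a TD from their own 34 after a punt

# Going for it: win outright on a conversion, otherwise win only if the Colts don't score.
wp_go_for_it = p_convert * 1.0 + (1 - p_convert) * (1 - p_td_from_pats_28)

# Punting: win whenever the Colts fail to score from their own 34.
wp_punt = 1 - p_td_after_punt

print(f"Go for it: {wp_go_for_it:.2f}")  # 0.79
print(f"Punt:      {wp_punt:.2f}")       # 0.70

Note, too, why the adjustment described in the excerpt cuts the way it does: only forty per cent of the gamble’s value depends on stopping Manning, while all of the punt’s value does, so raising both touchdown probabilities by the same amount hurts the punt more than the gamble.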
Defending Bill Belichick’s indefensible decision to go for it last night on fourth and two seems to be becoming a movement and gathering steam. It’s driven, I think, by sports contrarianism, but never mind about that. The general argument you hear is that they had to try to go for it because Peyton Manning is unstoppable. That would make a lot more sense to me if the Colts hadn’t punted seven times and turned it over twice in that very game. By my count, the Colts had fourteen possessions; they were unsuccessful on nine of them. If you’re going to argue against punting because a team is unstoppable, this is not the game to do it.
Will we ever be able to think of Hannah Arendt in the same way again? Two new and damning critiques, one of Arendt and one of her longtime Nazi-sycophant lover, the philosopher Martin Heidegger, were published within 10 days of each other last month. The pieces cast further doubt on the overinflated, underexamined reputations of both figures and shed new light on their intellectually toxic relationship.
My hope is that these revelations will encourage a further discrediting of the most overused, misused, abused pseudo-intellectual phrase in our language: the banality of evil. The banality of the banality of evil, the fatuousness of it, has long been fathomless, but perhaps now it will be consigned to the realm of the deceitful and disingenuous as well.
[…]
In a long, carefully documented essay, Wasserstein (who’s now at the University of Chicago) cites Arendt’s scandalous use of quotes from anti-Semitic and Nazi “authorities” on Jews in her Totalitarianism book.
Wasserstein concludes that her use of these sources was “more than a methodological error: it was symptomatic of a perverse world-view contaminated by over-exposure to the discourse of collective contempt and stigmatization that formed the object of her study”—that object being anti-Semitism. In other words, he contends, Arendt internalized the values of the anti-Semitic literature she read in her study of anti-Semitism, at least to a certain extent. Wasserstein’s conjecture will reignite the debate over Arendt’s contemptuous remarks on certain Jews who were victims of Hitler in her Eichmann book and in her letters.
Could these revelations help banish the robotic reiteration of the phrase the banality of evil as an explanation for everything bad that human beings do? Arendt may not have intended that the phrase be used this way, but one of its pernicious effects has been to make it seem as though the search for an explanation of the mystery of evil done by “ordinary men” is over. As though naming it somehow explains it and even solves the problem. It’s a phrase that sounds meaningful and lets us off the hook, allows us to avoid facing the difficult question.
It was the banality phrase—and the purported profundity of it in the popular mind—that elevated Arendt above the ranks of her fellow exile intellectuals in America and made her a proto-Sontag figure, a cerebral star of sorts and a revered icon in cultural-studies departments throughout America. It was the phrase that launched a thousand theses.
To my mind, the use of the phrase banality of evil is an almost infallible sign of shallow thinkers attempting to seem intellectually sophisticated. Come on, people: It’s a bankrupt phrase, a subprime phrase, a Dr. Phil-level phrase masquerading as a profound contrarianism. Oooh, so daring! Evil comes not only in the form of mustache-twirling Snidely Whiplash types, but in the form of paper pushers who followed evil orders. And as Arendt originally applied it—to Adolf Eichmann, Hitler’s eager executioner, responsible for the logistics of the Final Solution—the phrase was utterly fraudulent.
Adolf Eichmann was, of course, in no way a banal bureaucrat: He just portrayed himself as one while on trial for his life. Eichmann was a vicious and loathsome Jew-hater and -hunter who, among other things, personally intervened after the war was effectively lost, to insist on and ensure the mass murder of the last intact Jewish community in Europe, the Jews of Hungary. So the phrase was wrong in its origin, as applied to Eichmann, and wrong in almost all subsequent cases when applied generally. Wrong and self-contradictory, linguistically, philosophically, and metaphorically. Either one knows what one is doing is evil or one does not. If one knows and does it anyway, one is evil, not some special subcategory of evil. If one doesn’t know, one is ignorant, and not evil. But genuine ignorance is rare when evil is going on.
Really? All evil comes from people who know what they’re doing is evil? In this account, a person who genuinely believes he is an instrument of divine justice or a savior of the fatherland or a dutiful soldier following legitimate orders cannot be evil. Rosenbaum suggests beliefs of this sort are rarely genuine, but is that entirely clear? A legal system should hold people accountable for crimes despite such beliefs, of course, but it does seem “important to the political and social sciences,” as Hannah Arendt put it, “that the essence of totalitarian government, and perhaps the nature of every bureaucracy, is to make functionaries and mere cogs in the administrative machinery out of men, and thus to dehumanize them.” And that people thus might perpetrate evil unknowingly or out of thoughtlessness or idiocy.
[…]
At the same time, acknowledging that evil may be committed thoughtlessly as well as in full knowledge that “what one is doing is evil” seems better to equip us to guard against further atrocities. Not all evil is banal (In a postscript to her Eichmann book, Arendt emphasized that it was not “a theoretical treatise on the nature of evil,” but a report concerning a particular individual in a particular case), but “the banality of evil” at least captures the reality that people sometimes, if not usually, commit evils without the full knowledge that their actions are evil or the full intention to perpetrate evil acts — that often evil represents a failure of thought rather than its product.
Arendt, of course, did write a theoretical treatise. In his 1999 piece, Rosenbaum wrote of Arendt that “few would dispute her eminence as a philosopher, the importance of her attempt to define, in The Origins of Totalitarianism, just what makes totalitarianism so insidious and destructive.” Last week, however, he suggested we stop taking her thought seriously. What changed Rosenbaum’s mind were “troubling new revelations” that Arendt relied upon “anti-Semitic sources” when she wrote the Totalitarianism book. I’m not sure how the particular works Arendt consulted when composing an argument that Rosenbaum and others once found persuasive should now convince them that the argument was not persuasive after all.
I woke up to discover that more or less everything I wanted to say last night about Ron Rosenbaum’s misbegotten hit job on Hannah Arendt and her conception of the banality of evil has been said this morning at length by Steven Menashi at the American Scene. (Extra fun: in touching on Carlin Romano’s recent hit job on Heidegger, Menashi makes the point which I noted had gone entirely unmade in the long, hysterical combox criticism aimed at Romano: even Strauss, Heidegger’s great foe, insisted we couldn’t wave him away. This is relevant even for those who think Strauss and Heidegger were merely the Spy vs. Spy of Nietzscheans.)
Maybe Rosenbaum is right, but I never understood the phrase that way. I understood it as more of a warning that opportunities to commit evil could appear among the banal choices and duties of everyday life and that we must always be wary. I never thought that she thought she had the answer as to how or why this happened. (For that, we must turn to Stanley Milgram and others.)
And the warning is valuable in and of itself. In the past few years, we have seen literally life-and-death decisions about our captives reduced to games of legal and constitutional three-card monte.
For the sake of discussion let’s grant Rosenbaum his argument that Arendt was too close to Heidegger, that she allowed herself to be unduly influenced by both Heidegger and some of her own anti-Semitic sources, that almost to the end of her life she believed in some of the same Germanic notions that gave rise to Hitlerism. The idea that not all monsters have horns and tails is still worth keeping constantly in mind.
No, what makes me angry is the title of the column, in which Douthat directly analogizes Barack Obama’s Nobel win with Hurricane Katrina. That makes me angry. That makes me livid. Douthat calls the award a “travesty” in his column. That’s funny. To me, a travesty is when an American city is swallowed by the sea and our government and its apparatus of disaster mitigation sit mutely by, in the thrall of a pathetic imbecile and the mad, hideous and immoral ideologues that control him. That is a travesty. The American project sending such a loud and shrill message that we are okay with drowned bodies lying rotting in the streets, provided the people those bodies once were were black and poor in life– that is a travesty, and a tragedy. That is a wholly preventable and totally unprecedented crime against this nation, its people, and their dream of what it could possibly be. And that sort of thing, Mr. Douthat, is not an appropriate analog for a president winning a prize, no matter how little you think of it.
Ah, but I hear the keys of Conor Friedersdorf clattering away now. That wasn’t me, he insists, and it wasn’t Ross! That, after all, is all you ever hear from conservatives these days. It wasn’t I who sent our soldiers into Iraq, it wasn’t I who left children to drown in New Orleans, it wasn’t I who ordered federal prosecutors fired for failing to politicize prosecution, it wasn’t I who sat idly by as the financial sector plunged itself into an abyss…. The only consistent definition of conservative I now feel confident in is that a conservative is someone who is not responsible for anything that the Bush administration or Republican congress has done. No, no one is responsible for the Bush administration and its many crimes. No one is responsible for the congressmen who cheered their way along. No one is responsible for the systematic failure of the Republican party machine, which placed such a pathetic, unqualified and ignorant man in the greatest seat of power the world has ever known. No, don’t blame any actual conservatives for conservatism’s massive failings. Such a thing wouldn’t be fair. The fact that we now have outrage and scandal over Nobel peace prizes and NEA conference calls, when in the recent past we had hundreds of thousands of dead Iraqis and children shivering chest-deep in putrid water– hey, that’s a facet of the fact that no one is responsible for the GOP. No one is responsible for conservatism, and Freddie, stop being unfair.
This is the true consequence of conservatism’s never-ending series of rendings and divisions: because every conservative these days fancies himself a sect of sanity in a failed ideology; because so many conservatives have taken to patting themselves on the back for their distance from the rabid rump of the conservative base, and doing nothing else but that; because American conservatism has become an army of Andrew Sullivans, parties and cliques of people who proudly declare themselves to be of no party or clique, a never-ending stream of self-styled iconoclasts who take the rich pleasures of being individuals and take none of the hard-fought, difficult and tiring dignity of being responsible for something; because of this, conservatism is lost. The problem is not that conservatives fall too quickly in line. The problem is that conservatism is a line of people insisting that they aren’t a part of the line and as such are not responsible for the actions of the line. Everyone laments the Republican party’s various failures, electoral or otherwise; no one is responsible for the Republican party. Everyone delights in the rank, unfocused and violent anger of the Tea Parties; no one will claim them as their own. What you have, ladies and gentlemen, is an ideology in a decaying orbit, an ideology that prides itself on insisting on personal responsibility as so many, thanks to their well-polished, phony individualisms, refuse to take any responsibility for the whole. Conservatism is drowning because so many say (as Conor Friedersdorf insists when I criticize him) “Hey, it’s the OTHER conservatives who do THAT.”
That phrase has meaning when a pregnant woman tells a man, “take responsibility for the child you helped conceive.” It makes sense when a judge tells a negligent property owner, “take responsibility for the rabid Bengal tigers you’ve loosed to guard your unfenced suburban construction site.”
In his latest post, Freddie offers a vision of “taking responsibility” that is different, insofar as it is nonsensical and incoherent.
[…]
In fact, this Conor Friedersdorf “clattering away” on his laptop doesn’t think that no one is responsible for Hurricane Katrina’s unnecessary casualties. He thinks that responsibility is borne in various amounts by a long list of people that starts with Ray Nagin and ends with George W. Bush. What would it mean, exactly, for me to say, “I, Conor Friedersdorf, as a self-described conservative, take partial responsibility for the mismanagement following Hurricane Katrina?” Either it would be meaningless, or it would mean that I recognized some part of my political thinking that, prior to the hurricane, led me to wrongly believe that the federal government shouldn’t respond to natural disasters, or that levees shouldn’t be built to withstand strong storms, or that presidents should err on the side of committing too few resources when a major American city is underwater. Believing none of those things, I am hard pressed to know how I could coherently “take responsibility” for Hurricane Katrina even if I desperately wanted to do it.
[…]
After the Iraq War, the PATRIOT Act, Abu Ghraib, reckless spending, the appointment of incompetents, and every other Bush-era ill, Freddie casts about for the problem on the right and decides that it is people like Ross Douthat and Andrew Sullivan who are to blame, due to their unwillingness to take responsibility for their ideas. It is difficult to imagine a more wrongheaded account.
In fact, the most disastrous policies of the Bush Administration — the Iraq war, the torture, and the irresponsible deficit spending — were all profoundly anti-conservative, and insofar as conservatism, as opposed to jingoism or excess partisan loyalty among Republicans, was to blame, the problem was precisely that the conservative base too easily fell in line behind an incompetent leader because they called themselves conservatives, and he called himself a conservative, and they’re the same word! Just ask the intellectually dishonest talk radio hosts, who acted as enablers for Bush’s most damaging policies by spreading the meme that one must support him in order to be a loyal conservative.
Regardless, Conor’s point above fails for a more basic reason insofar as it is specifically an attempt to defend Douthat against Freddie’s criticism: Douthat himself does not distinguish between the conservative movement and the GOP. Indeed, in his remarks at Princeton University yesterday, he spent several minutes explaining why he views the conservative movement and the GOP as “interchangeable” terms.
Again, it may be that no individual strain of conservatism can be viewed as consistent with the activities of the Bush Administration. But collectively, the amalgamation of all those strains of conservatism into one master ideology not only enabled those activities but perhaps made them inevitable. For that, those interested in the notion of a conservative “movement” need to be prepared to accept responsibility if conservatism is to emerge from the wilderness as not merely an electable movement, but also a competent and coherent one capable of governing.
One area where Freddie has taken a bit of heat is his going after so-called reform conservatives for being unwilling to try to fix the problems with conservatism. For a long while, I thought this heat was deserved and that Freddie was being quite unfair to people who were clearly trying to do exactly that. And while two Ordinary Gentlemen do not a trend make, I read enough liberal blogs to see that their opinions are shared by quite a few on the Left, so while liberals may not have the disdain for the reformers that they have for the hardcore movement types, the reformers are hardly respected by liberals.
Meanwhile, the hardcore movement conservatives truly cannot stand the reformers, whom they view as RINOs at best and traitors at worst. This animosity is even understandable since, to the extent the reformers even try to interact with the base, it is more often than not to criticize it for extremism in rhetoric or style.
This question has perplexed me for months: how is it possible for a group of well-intentioned conservative wonks to be so reviled by both the Left, despite sincerely opposing the worst of the Right’s extremism and attempting to make the Right serious about governing again, and the Right, despite sincerely opposing nearly all of the Left’s agenda? It’s not as if these people are just squishy centrists and moderates – they almost always have a pretty clear set of principles underlying their actions.
Reading Jamelle’s post, though, the answer finally became clear: the conservative wonks simply aren’t doing their jobs. What they are doing is picking apart liberal proposals, picking apart conservative proposals, attacking the low-hanging fruit of conservative extremism, and occasionally making suggestions to liberals on ways of either improving liberal proposals or making those proposals more palatable to conservatives. What they are not doing, and largely are not even trying to do, is to drive the GOP agenda. They are, in effect, content to leave the GOP agenda as little more than “vote no on everything” and tear down whatever the liberals do.
“But we have all these great ideas if liberals would only listen to us” comes the inevitable response. Which is all well and good up until you realize that liberals aren’t very interested in ideas that they can’t pass. Conservative wonks think health care reform would work better if it were individualized and decentralized? Great, say the liberals, so do many of us; now come back to us when you can deliver some Republican votes that will overcome the loss of support from the unions that this will entail.
And so the conservative wonks go home with their tails between their legs, and drop the subject just long enough to write op-eds about why the Dem health care proposals are terrible, awful, no good, very bad health care reform. Perhaps they contact Republican politicians and feed them some talking points for opposing the Dem health care proposals.
In what sense are they “content to leave the GOP agenda” as it is? What are the dissidents supposed to do when they can’t get a hearing on their own side? [N.B., I would be honored to be considered among the dissidents, but I’m interested in culture, not policy, so I don’t play a role like Brooks, Frum, Douthat and others.] If the dissidents didn’t criticize what they see as harmful, self-destructive aspects of contemporary conservatism, outsiders would consider them patsies afraid to tell the truth to their own side. But when they do, Mark criticizes them for only trying to engage the base by putting it down. How can they win? I think Steve Benen is closer to describing the state of play on the Right at the moment, in terms of openness to self-criticism and internal debate. You can’t convince people to change if they are not willing to take what you have to say seriously. It seems to me that the kind of things people like Frum et alia criticize among the conservative base are things that have to do with the ideological ossification that is preventing the GOP from thinking and acting creatively to change with the times, and to figure out how to make conservatism relevant and responsible in a different set of circumstances than that which brought Reagan to power. As long as the conservative base is more interested in the old custom of heretic-hunting than it is in thinking creatively, dissidents will have little or no power to affect the GOP agenda. And there’s not much they can do about it.
Perhaps we’re getting at what puzzles and galls me so much about recent posts at The League of Ordinary Gentlemen about how dissident conservative writers ought to conduct themselves. The notion is that these writers should assess an ideological subset of the American public, discern their sensibilities, and craft all subsequent writing so as not to offend them. What a fool’s errand. There are times when people react badly to hearing the truth plainly stated. It is a journalist’s job to tell them that truth anyway, as forthrightly and accurately as one can put it.
Do you want to corrupt public discourse? Ask those engaged in the fights over ideas to pull their punches whenever what they regard as the truth might upset a segment of the public. Tell writers that if they find wisdom in the political philosophy of conservatism, and desire that its insights be incorporated into the governance of American society, they ought to refrain from writing things they regard as true whenever doing so will cost them credibility among some folks with whom they’d share a political coalition in a more rational world.
Think what you’re asking! It’s as if you were to travel back in time to George Orwell at his desk writing Homage to Catalonia, and to say, “Sir, I know you’ve got a low opinion of certain folks who were fighting Franco, but among those who oppose fascism, you’re needed as a thought leader, so please don’t write too bitingly about the false propaganda spread by folks within Communism. Some of your Comrades will never take your subsequent writing seriously otherwise.”
When writing on politics, only one approach guards against the dozen rationalizations for making truth subservient. It is to write what one believes on whatever topics one deems to be important, whenever you’ve got what you regard as a significant contribution to that conversation. Should an ideology and the political movement most closely associated with it find itself full of writers who instead privilege loyalty, or the sensitivities of the base, or their future influence, what you get is a misbegotten war in Iraq, a systematic regime of officially sanctioned torture, reckless spending, a Congressional majority rife with corruption, incompetents appointed to important positions in the federal government, and the list goes on. You’d think that the “palatable” journalism so many right-leaning outlets served up during the Bush Administration would discredit the notion that such an approach is the right one for conservatism or the country as a whole.
What Conor is suggesting is that a war against the pundits – against Beck and Limbaugh, et al. – is a fight over ideas. I would argue that calling people like Limbaugh out for some stupid thing(s) he’s said is not in fact a battle of ideas. It’s just your classic personality politics. A number of dissidents on the right have fallen into this very trap, engaging their loud, swaggering opponents on their own terms rather than within the framework of ideas. And all this does is alienate the base.
Regardless of whether Conor or David Frum or any other dissident is correct in their assertions, what their actions achieve is alienation and excommunication from their supposed target audiences. Liberals laud the efforts of Charles Johnson who has recently been calling out the conservative shenanigans, but in a lot of ways all that Johnson has achieved is to distance himself from the conservative movement. What good has that done for conservatism?
One door opens – a population of independents and liberals that is very receptive to attacks on their least-favorite television and radio personalities; and one door closes – the conservative base which, however misguidedly, marches behind the Limbaughs and Levins of the world. Instead of fragile allies, they’ve become sworn enemies.
My critique is simply this: engage in a fight over ideas, often and passionately. But engage. Don’t try to unseat the champions of the right. Try to change their hearts and minds, or at least use them to reach their audiences. It’s not as flashy or as fun, but I think it will serve a better purpose.
It’s clear to me that Conor and, to a lesser extent, Rod don’t understand what Jamelle, Freddie, E.D., and I have been driving at in our various critiques of reform-minded conservatism.
[…]
Our point has nothing to do with insisting that Conor or anyone else soft-pedal their critiques of Limbaugh et al., although those attacks may well have the effect of making matters worse. It certainly does not suggest that reform-minded conservatives should refrain from objecting to torture or the conduct of the War on Terror or civil liberties violations by the Bush Administration – quite the contrary, Ron Paul’s growing influence on conservatism shows that it is possible to passionately dissent without forfeiting the ability to move conservatism in your direction. Nor do I think we are suggesting that Conor or any other specific reform-minded conservative is to blame for the current state of the Republican Party.
No, the point is that reform conservatives need to recognize that there is an ideological problem with conservatism as currently constituted: an amalgam of libertarianism, hawkishness, and religious fundamentalism that leaves modern conservatism incapable of governing well or ethically. It is all well and good to criticize the Bush Administration or to take issue with talk radio, but until reform conservatives recognize what caused the Bush Administration’s faults and the hyper-vitriol of talk radio, they will be unable to do anything about it.
The assumption of many reform conservatives seems to be that the Bush Administration and talk radio are just a few bad apples who managed to deceive conservatives into thinking that they were good conservatives and had all the answers. This is wrong, and smacks of a paternalism that assumes workaday conservatives are pliable, easily fooled automatons rather than people who are simply too concerned about putting food on their own plates to ask otherwise unimportant philosophical questions like “what does it mean to be a conservative” or “what would Edmund Burke say.”
The problem instead is that movement conservatism has become an incoherent ideology, in part because of its own successes, but also in part because the issues facing this country simply are not the issues that were facing it in 1978. An ideology that attempts to unite so many disparate sub-ideologies must inevitably become nihilistic and unable to articulate a compelling social or political vision much beyond “we’re not them” after its initial raisons d’être have become obsolete.
Much as I hate to say it, the major obstacle to reforming conservatism is the talk radio/Fox/RedState axis. If you offer up ideas outside of their core, you’re a “RINO.” Or you want to be liked at Washington cocktail parties. Or you want a job at the New York Times. In other words, you’re not a “real” conservative. This is a tough obstacle, because even though they don’t call all the shots in the party, they call the shots of the activists and donors, and even the Congressional votes of the “moderates.” (You’ll note that McCain took a sharp right turn during the 2008 elections and hasn’t strayed from the path since.)
However, I agree that Conor Friedersdorf, David Frum, et al. are making a mistake in tackling Limbaugh, Levin, Beck, etc. head on. Instead, they need to engage them. They need to take their concerns seriously (crazy as they might be) and judo-flip them onto the reform path that they want. Whether that’s czars, Iran, taxes, “socialism” or what have you, the task that reform conservatives need to take upon themselves is to address those concerns in a serious way, and offer up conservative alternatives to the traditional conservative solutions.
Even though I don’t really consider myself a conservative, I do consider this to be a serious issue. Democracy needs healthy debate and discussion of policy in order to succeed. As long as conservatism is mired in the state that it’s in, the Democrats are going to win by default. And that’s not a good thing.
Maybe Mark is right, but I know that I’m not in that camp. I’m also, to be fair, not in the camp of conservatives who think seriously about policy matters, so there’s that. But I agree with him that contemporary conservatism no longer makes a lot of sense as a confederation. I tried in my book to suggest ways that conservatives could fundamentally change our lives (and policies) to be more true to authentically conservative values. Maybe I was, and am, full of shinola, but at least I’m trying to rethink this thing. What I’m not sure Mark et al. understand is how difficult it is at the present time to get a hearing on the right for dissenting views, even those that don’t just bash Beck or Limbaugh. Again, it’s what you get when you have a movement that cares more about hunting heretics who deviate from a rigid ideal than about reinterpreting its principles for changing times and changing conditions. And believe me, on the right today, if you offer anything seen by the Fox/Limbaugh mothership as deviant, you’ve got no chance of being engaged.
While I agree that it’s fairly pointless, as a tactical matter, for dissidents to attack the talk radio giants, this comes, I think, out of a deep frustration that people with little more than slogans and attitude have bigfooted discussion among conservatives, and have helped turn the GOP and the movement into something that’s extremely hostile to change (as distinct from skeptical of it, as all real conservatives should be), and almost fanatically opposed to dissent from within. A fairly conservative friend of mine and I were talking the other day about something Glenn Beck had said, and my friend looked disgusted, saying, “I’m sick of being associated with conservatives.” The impulse to take on the Becks and the Limbaughs comes from a sense that these guys are hurting us bad, and preventing the kind of clear thinking that we need to get back in the political game. I’d love to know how Mark and the League propose that dissident conservatives “engage” the base when the kind of people the base trusts and takes its cues from demonize dissidents as RINOs, closet liberals, squishes, wets, suck-ups, and so forth. I’m asking seriously. I don’t know how to go about this in the current climate.
There is another way to describe this conservative idea. It is the ideology of Ayn Rand. Some, though not all, of the conservatives protesting against redistribution and conferring the highest moral prestige upon material success explicitly identify themselves as acolytes of Rand. (As Santelli later explained, “I know this may not sound very humanitarian, but at the end of the day I’m an Ayn Rand-er.”) Rand is everywhere in this right-wing mood. Her novels are enjoying a huge boost in sales. Popular conservative talk show hosts such as Rush Limbaugh and Glenn Beck have touted her vision as a prophetic analysis of the present crisis. “Many of us who know Rand’s work,” wrote Stephen Moore in the Wall Street Journal last January, “have noticed that with each passing week, and with each successive bailout plan and economic-stimulus scheme out of Washington, our current politicians are committing the very acts of economic lunacy that Atlas Shrugged parodied in 1957.”
Christopher Hayes of The Nation recently recalled one of his first days in high school, when he met a tall, geeky kid named Phil Kerpen, who asked him, “Have you ever read Ayn Rand?” Kerpen is now the director of policy for the conservative lobby Americans for Prosperity and an occasional right-wing talking head on cable television. He represents a now-familiar type. The young, especially young men, thrill to Rand’s black-and-white ethics and her veneration of the alienated outsider, shunned by a world that does not understand his gifts. (It is one of the ironies, and the attractions, of Rand’s capitalists that they are depicted as heroes of alienation.) Her novels tend to strike their readers with the power of revelation, and they are read less like fiction and more like self-help literature, like spiritual guidance. Again and again, readers would write Rand to tell her that their encounter with her work felt like having their eyes opened for the first time in their lives. “For over half a century,” writes Jennifer Burns in her new biography of this strange and rather sinister figure, “Rand has been the ultimate gateway drug to life on the right.”
The likes of Gale Norton, George Gilder, Charles Murray, and many others have cited Rand as an influence. Rand acolytes such as Alan Greenspan and Martin Anderson have held important positions in Republican politics. “What she did–through long discussions and lots of arguments into the night–was to make me think why capitalism is not only efficient and practical, but also moral,” attested Greenspan. In 1987, The New York Times called Rand the “novelist laureate” of the Reagan administration. Reagan’s nominee for commerce secretary, C. William Verity Jr., kept a passage from Atlas Shrugged on his desk, including the line “How well you do your work ... [is] the only measure of human value.”
[…]
In essence, Rand advocated an inverted Marxism. In the Marxist analysis, workers produce all value, and capitalists merely leech off their labor. Rand posited the opposite. In Atlas Shrugged, her hero, John Galt, leads a capitalist strike, in which the brilliant business leaders who drive all progress decide that they will no longer tolerate the parasitic workers exploiting their talent, and so they withdraw from society to create their own capitalistic paradise free of the ungrateful, incompetent masses. Galt articulates Rand’s philosophy:
The man at the top of the intellectual pyramid contributes the most to all those below him, but gets nothing except his material payment, receiving no intellectual bonus from others to add to the value of his time. The man at the bottom who, left to himself, would starve in his hopeless ineptitude, contributes nothing to those above him, but receives the bonus of all of their brains. Such is the nature of the “competition” between the strong and the weak of the intellect. Such is the pattern of “exploitation” for which you have damned the strong.
[…]
Ultimately the Objectivist movement failed for the same reason that communism failed: it tried to make its people live by the dictates of a totalizing ideology that failed to honor the realities of human existence. Rand’s movement devolved into a corrupt and cruel parody of itself. She herself never won sustained personal influence within mainstream conservatism or the Republican Party. Her ideological purity and her unstable personality prevented her from forming lasting coalitions with anybody who disagreed with any element of her catechism.
Moreover, her fierce attacks on religion–she derided Christianity, again in a Nietzschean manner, as a religion celebrating victimhood–made her politically radioactive on the right. The Goldwater campaign in 1964 echoed distinctly Randian themes–“profits,” the candidate proclaimed, “are the surest sign of responsible behavior”–but he ignored Rand’s overtures to serve as his intellectual guru. He was troubled by her atheism. In an essay in National Review ten years after the publication of Atlas Shrugged, M. Stanton Evans summarized the conservative view on Rand. She “has an excellent grasp of the way capitalism is supposed to work, the efficiencies of free enterprise, the central role of private property and the profit motive, the social and political costs of welfare schemes which seek to compel a false benevolence,” he wrote, but unfortunately she rejects “the Christian culture which has given birth to all our freedoms.”
One thing that always strikes me about Rand, however, is how odd the Randian tendency is to assume that the business executive class generally constitutes the most intelligent segment of society. As if an Albert Einstein were just a kind of middleweight hack, while the VP for Marketing at Federal Express is one of the ubermenschen.
As Chait points out, Rand plumped for Willkie in 1940, but she was no Republican. More to the point, Rand did not think income and wealth represent a sign of virtue — of hard work, productivity, or anything else. Being an intelligent person, she thought that who got how much of what depended on the complex interplay of culture and the structure of the political economy. She did think that those who through effort or industry improve others’ lives ought to see the value of their work acknowledged and rewarded in some form or other. But no one would infer from Rand’s novels and nonfiction that the United States looks, or in her day looked, anything like her ideal.
[…]
Rand does not valorize the wealthy. She valorizes the uncompromising integrity of creative visionaries and the productivity of inventors, innovators and entrepreneurs. But there is little to assure the reader that the virtues she extols really pay. Rand’s view of the world was actually pretty bleak, pretty Russian. Her best novel, We the Living, is best precisely because she had yet to philosophically suppress her tragic instincts. One of the least plausible and certainly the saddest aspects of Rand’s thought is what she called the “benevolent universe premise” — a kind of as-if attitudinal stance of positivity meant to ensure “the inability to believe in the power or the triumph of evil.” She goes on:
“No matter what corruption one observes in one’s immediate background, one is unable to accept it as normal, permanent or metaphysically right. One feels: ‘This injustice (or terror or falsehood or frustration or pain or agony) is the exception in life, not the rule.’ One feels certain that somewhere on earth—even if not anywhere in one’s surroundings or within one’s reach—a proper, human way of life is possible to human beings, and justice matters.”
“One feels…” This is Rand’s leap of faith, her animal spirit, her will to believe. She needed her silly, contrived happy endings — and she thought we needed them — to maintain the will to do the right thing, to fight for justice, despite every indication that it’s a bad bet. Rand thought we need to feel that effort and virtue will be rewarded, or else we will, rationally enough, stop supplying effort and virtue. And then we’ll all be well and truly screwed. Make of this what you will, but it is very far from the vulgar Calvinism that sees a person’s level of success as an indicator of their merit.
Now, I’m more than willing to snicker right along with Chait at ridiculously puffed-up computer engineers who threaten to “Go Galt” at the first hint of an impending tax hike while blithely enjoying the wage subsidy of the United States’ super-stingy H-1B visa cap. But he’s really just careless in conflating the views of Ayn Rand’s confused fans with Ayn Rand’s own. I’m delighted there are two important new books that take Rand seriously as a woman, writer, and thinker. It’s too bad that Chait uses their publication as an occasion to once again take a brave stand for the redistributive state.
Jonathan Chait reviews two (!) biographies of Ayn Rand, an astoundingly muddled thinker who was, apparently, also an astoundingly unpleasant human being. She’s worth studying, as any pathological phenomenon is worth studying, and her thinking (if it can be called that) still has influence over part of the Right; her very shallowness has a deep appeal for adolescent males of all ages and both sexes.
What’s most astounding is how completely unoriginal it is. A college friend showed me some Randite document just after I’d finished reading Also Sprach Zarathustra for a course.
Chait might be aware that he isn’t really jousting with Rand per se with all this material–he’s explicitly arguing with the likes of Stuart Varney, Greg Mankiw, unnamed stereotypical arrogant “rich people,” and Irving Kristol. But by spending so much of an essay ostensibly about Rand on these points, he’s misleading his readers about what Rand thought and why.
In addition, the anecdotal points in Chait’s lead, showing certain GOP-leaning public figures seeming to rely on quasi-Randian rhetoric, don’t support the belief that the American right is going through a significant Randian moment. Rand is far, far too radical a small-government libertarian for most of them to tolerate, much less emulate.
I’ll close with this Chait quote, from after he notes that both Ayn Rand and Grover Norquist have childhood memories of their parents taking from them things they thought of as theirs: “The psychological link between a certain form of childhood deprivation and extreme libertarianism awaits serious study.” It certainly does, and will probably continue to await it for a very long time: because it’s utterly irrelevant on any conceivable level when it comes to understanding or judging libertarian thought.
The essential takeaway, besides a thoroughly fair but damning discussion of Rand’s life and legacy, is to point out (as we must keep repeating) that the rich pay remarkably less in taxes than they did just decades ago, and that they are vastly more wealthy than they were then, and yet still we hear complaints of tax tyranny and the oppression of the rich. And, as Chait says, because the reduction in taxes over the last decades and the almost unbelievable wealth creation for the economic elite are so inarguable – so Objectively true, if you will – the argument is framed in the language of tyranny, theft, and moral degradation.
Given the rise of the tea party movement, his basic point – that Rand’s influence has led to an over-emphasis on a morally absolutist view of redistribution – is pretty relevant. My own view is that Rand is best understood as a product of a very specific political and cultural context; if her philosophy and subsequent influence overstates the role of individual merit in determining success, it’s probably because the mid-century consensus was weighted too far in the other direction. In other words, I think we can appreciate Rand as a necessary corrective to an overly deterministic view of individual achievement without subscribing to her crazy philosophy. Incidentally, Brian Doherty’s excellent history of the libertarian movement has a good survey of Rand’s peculiar cultural influence.
There’s been an Atlantic round-up already on this subject by Mara Gay over the original Rosin post. But we’ll do our cutting (ahem) and pasting job as well.
But the procedure is only “controversial” because people have emotional, psychological and religious reactions to it. Scientifically speaking, it’s not remotely controversial. The anti-circumcision sites always refer to the American Academy of Pediatrics’ 1999 policy statement on circumcision, which declined to recommend the procedure. But that statement was issued before the most compelling studies emerged about the role circumcision plays in reducing the risk for transmission of HIV and other STD’s. This is a good overview from medical writer Arthur Allen.
Actually, not one enraged commenter on yesterday’s NYT article about the possibility of the CDC recommending circumcision as an HIV preventative raised that question. But the fierce opposition that still surrounds the HPV vaccination for girls centers around exactly that. If both procedures might make unprotected sex marginally safer, why is the conversation so different?
I’m not actually opposed to the CDC recommending circumcision—especially since the main effect of the recommendation would be that an always-optional procedure would remain optional, but be once again covered by Medicaid. Circumcision appears to reduce the risk of contracting HIV through sex with an infected (female) partner by about 60 percent. The HPV vaccine prevents “some types” of genital warts which “may” cause cervical cancer. Neither’s a slam dunk, but both might make a night of unprotected sex a less risky proposition in the long term. And teens claim to consider the risks of HIV when making the decision about whether to have sex, while HPV remains low on their radar. So it wouldn’t be unreasonable to suggest that being circumcised—along with a nice public health campaign promoting the reduction in risk—might make a teen boy feel even less mortal. But it didn’t come up.
I WAS shocked to read Hanna Rosin’s post noting that the CDC was considering requiring circumcision for all American baby boys. And I was reassured to find that Ms Rosin had mischaracterised the New York Times article she referenced. In fact the CDC is simply considering nudging its recommendation on circumcision to a more positive slant, because conclusive evidence from studies in Africa shows that circumcision reduces men’s chances of getting HIV through heterosexual sex by about half. That’s a pretty huge public-health benefit, considering that America has HIV prevalence rates several times higher than European ones, with a 2% prevalence rate among blacks that is higher than most third-world levels. HIV in America spreads chiefly through injecting-drug use and male-to-male anal sex (where benefits from circumcision have not been shown), but multiple partner heterosexual sex is also an important vector, and circumcision has been shown to inhibit the spread of other sexually transmitted diseases too. Basically, on the medical side, the evidence favours circumcision.
On the cultural side, obviously, the decision to circumcise is a lot touchier, and that’s why I wish Ms Rosin had been more careful with the distinction between “require” and “recommend”. Growing up Jewish in America, where the great majority of boys of all religions have been circumcised for decades, I never considered the issue a big deal; scenes in movies like “Europa, Europa”, where a Jewish boy strains to hide his penis in the bathroom for fear of discovery by Nazis, seemed alien and antiquated. But then I had a son in Europe, where boys are not routinely circumcised, and where in fact simply finding a doctor who will perform the procedure is a royal pain. (This is a big issue for Muslims in Europe, incidentally.) Finding a Jewish mohel who would circumcise a boy with a non-Jewish mother was a non-starter, too. And I pretty quickly realised that for men, for deep-seated psychic and cultural reasons, ensuring that your son’s equipment looks like your own, and does not renounce his membership in a tribe you belong to, can be a very big deal.
“Male genital mutilation!” scream the connoisseurs of uncut, preservationists of the precious prepuce.
Get over it, people. Only porn freaks and gay men — having ample opportunity to comparison shop, as it were — obsess so fanatically over the difference. As I was taught in commercial design classes 30 years ago, form follows function, and familiarity with the fact of foreskinless functionality (i.e., I’ve fathered six kids) indicates that my circumcised state is entirely adequate to the rigors of the task.
But the procedure is only “controversial” because people have emotional, psychological and religious reactions to it. Scientifically speaking, it’s not remotely controversial.
Not only is this nonsense, it’s insulting nonsense. The first thing to say is that, in fact, there are very many rational reasons to oppose routine circumcision. To begin with, there is the simply bizarre notion of recommending preventative surgery to all Americans to prevent a condition that afflicts a tiny minority of Americans. Something along the lines of a third of a percentage point of our population has HIV. (All stats courtesy of the CDC.) I know that the effort to raise AIDS awareness is undertaken in good faith, but the simple fact, obscured by people with good intentions, is that AIDS and HIV are extremely rare in the United States, and the average American has very little reason to fear contracting HIV. That’s just the numbers.
[…]
Rosin is the latest in a long line of pro-circumcision commentators who attempt to paint all of those who are opposed to routine circumcision as a hysterical fringe. That’s an empty rhetorical tactic, of course, but at times an effective one. Yet it seems clear to me that the people who bear the burden of proof are those who want to enforce a permanent and body-altering surgical procedure on an entire gender, and it equally seems to me that they have not even begun to meet that burden. I do believe that there is promise in using circumcision in sub-Saharan Africa, where vastly different logistical realities make circumcision, as part of a comprehensive campaign against AIDS, an intelligent strategy. But to extend that wisdom, in an incredibly tenuous way, to the, yes, tiny risk of HIV and AIDS faced by the Americans who would supposedly benefit does not follow, and I find the case as argued laughably thin.
So why does such a powerful pro-circumcision movement in the United States exist? Because here, unlike in the rest of the world, circumcision is the norm, and people – particularly people who believe themselves to be socially liberal – love to use the language of science and medicine to enforce norms. Religious and cultural preference (“it looks weird if you don’t do it”) pushes Americans to circumcise their children when there is no rational benefit to doing so, and those most interested in enforcing that norm have been grasping around for justification in the realm of medicine. That’s the only reason I can imagine for a movement so incredibly zealous and assured about statistical and logical information that is so obviously insufficient to make the argumentative case they are trying to make.
I’d forgotten how passionate Dish readers, and Andrew, are on the subject of circumcision. Andrew once published a photo of this “gruesome procedure” which has the feel of one of those pro-life placards – after which I probably rescinded his invitation to my son’s bris. To make things worse, my post defending circumcision taps into the current fears about “big government trying to mandate certain types of medical procedures,” as one reader wrote in.
The objections to my post fall into three basic categories:
1. How can we do this to a child without his consent? There are so many things we do to children without their consent – change their school, banish their friends, give them drugs, abandon and neglect them. Removing a foreskin should not even fall in the top 20 ways to ruin your child’s life.
2. “Foreskins are, well, fun,” writes one gay reader. My authority here is obviously limited. That said, all that research on specific areas of male sensitivity (Andrew cites some here) has always struck me as dubious. Erotic pleasure is a rich and complicated thing. Specific percentages of sensitivity can’t possibly sum up the experience.
3. Preventative surgery is a “bizarre notion.” This is somewhat more convincing. But for one thing, “surgery” is a bit of an exaggeration. We certainly cause infants minor pain for the greater public good many times, in the form of vaccines. It depends, I suppose, on whether you consider HIV and STDs a widespread public health crisis, or something affecting only a very few. I could get into the specifics of the research here, but I won’t.
There are obviously strong, visceral emotions here which I confess, I don’t really understand.
Is sensitivity that important to begin with? Well duh, of course it is on a basic level. But what if a slight decrease in sensitivity actually heightens sex overall? In other words: The guy lasts longer. And that’s generally better for everyone involved, no?
When I checked out Wikipedia’s many cited studies on “ejaculatory function,” most are not statistically significant – and those that are balance out. So my hunch seems unfounded. Furthermore, what if decreased sensitivity from circumcision hinders the game to begin with? But studies on “erectile function” are also inconclusive. So are those comparing “satisfaction.”
Studies are a red herring, however, when it comes to the ethical part of the debate. Even if there are no discernible differences between cut and uncut on average, there are still many individuals who are better or worse off from a procedure their parents imposed. As one reader puts it:
It’s my dick. It’s my dick. It’s my dick. It is no one else’s dick but my dick. And I should have the choice to circumcise it when I am old enough to make that decision.
Then again, if you were circumcised as a newborn, how would you ever know the difference? Wouldn’t your range of sensitivity adjust accordingly? (Unless the procedure was botched, of course.)
But with respect to the practice of circumcision, the important point is this: he’s my son. Not yours. Parents have the right to decide on medical treatment for their children, presuming such medical treatment is not actively harmful. And parents have the right to include their children in cultural rites and practices, again presuming no harm is done.
In the case of circumcision, the evidence shows that it prevents the transmission of HIV and other STDs. There is some disputed evidence, on the other hand, that it reduces sexual pleasure; and there are some ludicrous and hysterical people claiming that it damages the bond between mother and child. This certainly sounds plausible; we all know Jewish men don’t enjoy sex, and have trouble bonding with their mothers. Not.
What, then, of female circumcision? Well, I understand, perhaps wrongly, that there are some forms which are not particularly medically invasive, and which do not entail significant medical consequences. I think that such forms of female circumcision are a matter of cultural practice that should be left up to parents to decide. The more invasive forms of female circumcision entail serious negative medical consequences. Obviously that’s not cool. And female circumcision is carried out on girls aged 7 to 12 or even older; at that age, the child gets a vote, too. In any case, this doesn’t have much to do with anything, because we’re talking about a medical recommendation.
If I were an anti-circumcision zealot, I would not recommend circumcision for men living in sub-Saharan Africa. And I would likely not want parents to circumcise their children on religious grounds. But I do think circumcision is a very important tool for preventing the spread of HIV and AIDS in sub-Saharan Africa; I think men in those cultures should be encouraged to be circumcised; and I think we should provide funding and education for them to do so in clean, sterile conditions. I additionally, of course, believe that ultimately parents are empowered to make the decision, whether for religious observance or anything else. What I ask, however, is that a procedure with almost no proven medical benefit whatsoever for Americans not be recommended as a universal procedure for an entire sex on the basis of a reduced risk of a disease that the people the procedure would supposedly protect rarely contract in the first place. And I want those arguing for routine circumcision to be more honest about who, exactly, is being zealous. Shouting “Lose the foreskin!”, as Rosin’s first post did, demonstrates that she is taking exactly the wrong kind of attitude towards the issue, and reveling in a lack of sensitivity or regard for the concerns of people who don’t think routine surgical procedures for negligible medical benefit make sense.
The only reason I can think of that Chris Bodenner and Hanna Rosin are not being honest about the number of people infected in the United States, and about the essentially mythical nature of HIV infection from heterosexual sex in the United States, is some dedication to political correctness. In a very well-intentioned but ultimately harmful way, those pushing for AIDS awareness in the early and mid-90s ended up developing many myths about HIV and AIDS, particularly about the scale of the disease here in America (HIV is most certainly not a pandemic in the Western world) and about who catches the disease. Political correctness aside, the simple fact is that HIV and AIDS, outside of sub-Saharan Africa, afflict two groups of people: gay men and those who use intravenous drugs. That we have conflated respect and love for the people who have the disease, and a dedication to fighting it, with the sympathetic lie that everyone has to fear HIV and AIDS tells you something about our culture.
Freddie makes a related point but goes on to say that “HIV is most certainly not a pandemic in the Western world,” and that the threat to heterosexual non-IV drug users is greatly exaggerated. A third of HIV infections are heterosexual, and in certain regions of the country, such as DC, heterosexual transmission is ahead of male-to-male transmission. African-Americans also have much greater risks. This is not to argue for or against circumcision (I’m agnostic on the issue), but under-emphasizing the risk to certain heterosexuals is a greater sin than exaggerating it.
While the fixed vote in Iran received extensive international attention, the world paid no notice to an honest election in Indonesia — the world’s largest Muslim nation. As recently as the 1970s, Indonesia was a repressive military dictatorship; gradually it has become a democracy with a civil-society basis and freedom of speech, plus strong economic growth. And America did not force this outcome on Indonesia or, for that matter, have anything to do with what happened — Indonesians made their nation a democracy entirely on their own. Why do the same politicians and pundits who have limitless breath to denounce the troubled Muslim nations say nothing about the success story of Muslim Indonesia? Islamist fanatics hate freedom in Indonesia as much as they hate it in the United States and Europe, and have committed awful crimes against Indonesian democracy. But the world only notices Indonesia when a bomb goes off there — how about some notice for social and economic progress?
Freddie at The League highlights that quote. Roque Nuevo in the comments:
I have to write in to agree with Freddie, lest he think I’m here just to bug him. I also admire Indonesia’s political culture and deplore our black history of intervention there. Wasn’t Indonesia’s first president, Sukarno(?), the Quisling leader for the Japanese in WWII, responsible for enslaving his countrymen in their war efforts? Wasn’t he later a champion of third-world nationalism, etc., etc.?
Indonesia is a great exception in the Arab/Muslim world for its pluralism and tolerance. It’s notable that Islam arrived in Indonesia not by the famous Islamic Sword but through the peaceful efforts of Muslim traders. People in Indonesia accepted Islam because they wanted to, not because they were conquered. Could this historical tidbit be a factor in its political culture today?
Just to add a bit of info: the slaughter in East Timor is part of Kissinger’s legacy to our great nation and to Indonesia. Kissinger is still alive and kicking, by the way. His latest assignment reprises his favorite national security role: back-channel contact for the White House. Today he’s Obama’s back-channel to Putin/Russia. Just one more of the many parallels between Obama and Nixon.
Freddie’s reply:
I think the story of Indonesia, and the Year in particular, is that part of the failure of looking at foreign policy as a series of good/bad actors is a failure to understand how fluid internal conflict is, and how many loyalties are a product of intra-national realpolitik. I don’t want to refight the old arguments about 1965. It is worth pointing out, however, that the Sukarno-Suharto power transfer was an intra-military affair, and really a reassertion of the military’s power and authority against the populist Communist movement. (And, of course, it was supported by Western nations disturbed by Sukarno’s penchant for nationalizing resources.) I’m not defending Communism. But the fact of the matter is that an awful lot of Indonesian Communists joined the party because it represented the only real alternative to military rule, and by extension to the status quo. For hundreds of thousands of landless, poverty-stricken people, there wasn’t any real endorsement of Marxism beyond a belief in the capacity for change.
Freddie posted a quotation from Gregg Easterbrook (whose writing I often find refreshingly counter-intuitive) about the success of the recent Indonesian elections. Easterbrook correctly laments the lack of coverage this event has received in the US press. Given that Indonesia, the world’s largest Muslim nation, has had peaceful democratic elections and voted down more hardline Islamist elements, you’d think this would receive praise from all the right-wing pundits always calling for “moderate” Muslims. Well, I don’t know about moderate – how about just decent, good people – but here they are. And crickets chirp on the right. (Apparently that’s going around in those parts lately.)
A lesson to be gained from Indonesia is that responsive government that tries (sincerely) to work on anti-corruption and to extend development to larger swaths of society will very likely A) get voted back into office and B) tamp down any retrograde movements. Who woulda thunk it? The reason Islamism is so strong in the Arab world in particular is the total corruption and authoritarian suppression of humanity that is the political norm there: cf. Syria, Saudi Arabia, Egypt, to name just a few.
Mounting my hobby camel: it’s worth remembering that religion (particularly Islam) does not exist separately from politics. [To think it does is to project the Western secular interpretation of things onto the Islamic world.] That is, the religion is always mixed up in the actual history.
[…]
In particular I’d like to home in on this symbol of Islam as spread by the sword, particularly when it is contrasted, as in this paragraph, with non-violent forms of conversion (traders sharing their faith, people being attracted to it and joining of their own volition). It’s a very powerful rhetorical device and emotional symbol – particularly in current debates about US foreign policy and the state of the world politically – but it needs some unpacking (or perhaps sheathing?).
When discussing this issue it’s really important to make a distinction between the spread of Islam as a religious empire and the spread of the actual religion among the populace of the conquered areas. Of course, yes, in its early days Arab Muslim warriors were highly successful militarily, conquering a huge empire – arguably the largest in world history – in a very short amount of time. By that measure, yes, the religion spread through the military projection of might.
But the lesson to be drawn from such a piece of information is typically way overblown.
Christian missionaries to India, Japan, the New World, Africa rode on commercial vessels that were imperial (crown-approved politico-economic) projects. So I guess you could say Christianity was spread by the gun and the steel blade. Before that Christianity was spread by its promotion as the official religion of the Roman Empire.
Plus, we’re talking about the ancient and the medieval worlds. Everybody was spreading their power through violence.
Moreover, this classic view of Islam as “convert or die by the sword” really doesn’t comport with the history. For those interested I recommend Philip Jenkins’ book The Lost History of Christianity, the subtitle of which is The Thousand Year Golden Age of the Church in the Middle East, Africa, and Asia – and How It Died. A thousand years in the Middle East goes well past the initial arrival of Islam in the 7th century. Of course the numbers for that period are always hard to gauge, but Jenkins (and others) makes a compelling case that, for hundreds of years after Arab Islamic elites conquered and ruled the Middle East politically, the vast majority of the population was still Christian.
On the actual election, Peter Gelling at Global Post:
Indonesia’s second-ever direct presidential election, a major test for its still-evolving democracy, has commonly been described as dull. And that’s a good thing.
With the exception of complaints of bloated and fraudulent voter lists from the opposition, the elections passed peacefully and without incident. Incumbent President Susilo Bambang Yudhoyono, a reform-minded former general, was re-elected in one round with an impressive, though not surprising, 60 percent of the vote, according to a quick count released hours after the polls closed, which is nonetheless considered accurate.
His two challengers — Yusuf Kalla, his current vice president who will have to remain as such until October, and Megawati Sukarnoputri, a former president whom Yudhoyono already defeated once before in 2004, during the country’s first-ever direct election — finished with about 13 and 27 percent of the vote respectively.
Only 10 years ago, the country was in a political and economic tailspin. The Asian Financial Crisis, coupled with the institutionalized corruption made popular by Suharto, the country’s kleptocrat for 30 years, laid waste to any economic gains the general had previously made. Suharto was ousted after massive riots and for years the country struggled to find a leader who could bring stability. Add the rise of Islamic terrorism, and Indonesia looked destined to become another Pakistan.
Yudhoyono is not the most exciting of leaders, but in five years he managed to stabilize Indonesia, which is now a shining example to its neighbors and the region’s most impressive success story.
The election was Indonesia’s second direct presidential election and, as with the first, it was largely devoid of controversy or violence. Indonesian voters once again demonstrated their sophistication, with about 60 per cent (based on early, unofficial ‘quick counts’) voting for a leader they feel is honest and who has brought tangible improvements to their lives. The April parliamentary elections and Wednesday’s presidential election continue a process of evolution of political parties in Indonesia– a process marked by the decline of Soeharto-era parties and the inability of Islamic parties to expand their appeal beyond about a quarter of the electorate.
However, two aspects of the elections were less positive.
First, the elections were poorly administered, damaging the credibility of the electoral process and highlighting a disturbing laxness on the part of Indonesia’s political establishment with regard to safeguarding the quality of democratic processes. Second, there was a near total lack of ‘new blood’: all six candidates for the Presidency and Vice Presidency rose to prominence during the Soeharto era and three of them, including SBY, come from the military. A decade of democratic politics has yet to bring to the fore a new generation of national leaders free from tainted association with Soeharto.
Perhaps it is no exaggeration to say most Indonesians felt relieved Wednesday to hear news of the Constitutional Court verdict — binding and final — rejecting the lawsuits of the two losers in the July 8 presidential election.
All the petty squabbling over the results of the election ended Wednesday when the court announced its verdict, upholding the General Elections Commission’s (KPU) earlier decision to name Susilo Bambang Yudhoyono the winner of the July 8 election.
“While it can be proven in court that there were cases of election violations, there is not enough evidence to support the allegations that those violations were massive and systematic — a prerequisite to declare the election invalid,” Constitutional Court chief Moh. Mahfud M.D. said as he read the verdict.
The verdict nullified earlier claims made by the legal team of presidential candidate Megawati Soekarnoputri: that the July 8 election was full of violations, and that 28.6 million votes for Yudhoyono were not valid.
“The 28.6 million votes came from voters registered more than once, underage voters and even dead voters,” a member of Megawati’s legal team had said. “We believe the KPU awarded these invalid votes to Yudhoyono.”
The verdict also confirmed that despite all its weaknesses, the KPU had organized the July 8 election in a lawful and transparent manner. At least the KPU’s official results were on par with tallies provided by five separate survey groups offering quick count calculations.
Even in Asian investment circles, it can come as a surprise to learn that the world’s second-best performing stock market this year has been Indonesia’s.
Bloomberg did its bit on Thursday to publicise the fact, reporting that Indonesia’s stock index may return to the record reached last year in the next 12 months, led by automotive, banks and property stocks as falling interest rates boost growth.
That prognosis, offered by Batavia Prosperindo Aset Manajemen, one of Indonesia’s best performing funds over the past five years, follows a boom-bust-boom cycle for the Jakarta Composite Index, which hit a record high of 2,830.26 in January last year before plunging 61 per cent to its October low. Since then, the index has surged 68 per cent this year.