Tag Archives: Razib Khan

And The Silver Medal Goes To China… Or Does It?

Heather Horn at The Atlantic with the round-up

Ryan Avent at DiA at The Economist:

CHINA has, at long last, surpassed Japan in terms of nominal GDP, making the Chinese economy the world’s second largest. Second quarter output in China came in at $1.337 trillion, to Japan’s $1.288 trillion (Japan’s output was larger in the first quarter; for comparison, America’s second quarter nominal output was $3.522 trillion). The shift is sure to be widely discussed and widely misinterpreted. There are a few key things to mention.

First, while Chinese growth has been truly impressive in recent decades, the rapid overtaking of the Japanese economy also reflects years of disappointing growth there. This story is as much about Japan’s travails (and the risk to other rich economies facing a descent into Japanese-style stagnation) as it is China’s boom.

Second, China remains a very poor country in per capita terms. It uses over four times as many citizens as America to produce less than half America’s output. That’s a bit misleading—urban productivity in China doesn’t lag America by quite as much but is offset by the limited growth contribution of China’s hundreds of millions of rural poor. Still, the total output figures encourage observers to vastly overstate the developmental level of the Chinese economy.

Joshua Keating at Foreign Policy:

The world economy reached a major milestone Monday when China officially became the world’s second-largest economy, displacing Japan, which had held the title for more than four decades. The recognition of China’s new status came after the Japanese government reported that, after a quarter of slow economic growth, the country’s quarterly gross domestic product (GDP) was estimated to be around $1.28 trillion, slightly below China’s $1.33 trillion. Do all countries use the same method for estimating GDP?

They’re supposed to. The System of National Accounts (SNA), a set of guidelines developed jointly by the United Nations, the European Commission, the International Monetary Fund (IMF), the Organization for Economic Co-operation and Development, and the World Bank, specifies the methods by which countries measure the size of their economies.

There are two main methods for estimating GDP. One involves looking at production. This includes the value of the goods produced by all the firms in the country, the added value of government work projects, and — particularly in developing countries — the value of goods produced for personal consumption, like the crops grown by subsistence farmers. Not all wealth counts toward GDP. For instance, if you build a new house, that’s considered value added to the economy.  If a pre-existing house increases in value, the owner may be better off, but the country’s GDP is unaffected. Of course, companies often have a vested interest in exaggerating their profits, so reliable figures can sometimes be tough to calculate.

The other method of calculating GDP involves measuring total consumption of products by a country’s population. Since it relies mostly on household surveys, this method also has flaws. People tend to underreport the amount they spend on alcohol and cigarettes, for instance. But the two measures should, ideally, come up with close to the same number, and when the results from the two approaches are compiled, they should give you a pretty good idea of the size of a country’s economy.

[…]

But for most countries, there’s no international legal authority to ensure that statistical offices are following the SNA guidelines, and international economists largely have to rely on self-reported numbers. While no one’s disputing China’s new status, the country has often been suspected of cooking its books. Although China is not a member of the OECD, it does cooperate with the organization in producing statistics according to the SNA guidelines.

Those guidelines are updated every few years. The most recent edition, issued in 2008 and so far implemented only by Australia, was revised so that a firm’s investments in research and development are counted as added value. This means that as the new standard is implemented worldwide over the next four years or so, many countries will see their GDP numbers increase by as much as 1 percent. That’s one way to stimulate growth.
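Keating’s two approaches reduce to simple sums, and a minimal sketch makes the mechanics concrete. Every figure below is invented for illustration; none of them comes from the SNA or from the articles quoted here.

```python
# Toy illustration of the two estimation methods Keating describes.
# All figures are invented for the example.

# Production approach: sum value added across producers.
value_added = {
    "firms": 900e9,                # goods and services produced, net of inputs
    "government_projects": 250e9,  # added value of government work projects
    "subsistence_farming": 50e9,   # goods produced for personal consumption
}
gdp_production = sum(value_added.values())

# Expenditure approach: total final spending, estimated largely
# from household surveys (hence the underreporting problem).
spending = {
    "household_consumption": 810e9,
    "investment": 200e9,
    "government": 250e9,
    "net_exports": -50e9,
}
gdp_expenditure = sum(spending.values())

# In principle the two estimates should nearly agree; in practice,
# misreporting on either side opens a statistical discrepancy.
print(f"production approach:  ${gdp_production / 1e12:.3f} trillion")
print(f"expenditure approach: ${gdp_expenditure / 1e12:.3f} trillion")
print(f"discrepancy:          ${abs(gdp_production - gdp_expenditure) / 1e9:.0f} billion")
```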

Joe Weisenthal at Business Insider:

Let’s just put some of today’s headlines about Japan’s GDP being surpassed by Chinese GDP in perspective.

In the quarter, Japan had economic output of $1.28 trillion, or $10,085 per capita, based on a population of 127 million.

China?

It had economic output of $1.337 trillion for the quarter, but a population of about 1.3 billion, so per-capita output of… $1,000, about one-tenth as much.

Let us know when China passes Albania.
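Weisenthal’s arithmetic is easy to verify. A quick sketch using the figures as he quotes them (the population numbers are his rough approximations):

```python
# Per-capita quarterly output from the figures quoted above.
japan_output, japan_pop = 1.28e12, 127e6   # $1.28 trillion, 127 million people
china_output, china_pop = 1.337e12, 1.3e9  # $1.337 trillion, ~1.3 billion people

japan_per_capita = japan_output / japan_pop  # about $10,079 for the quarter
china_per_capita = china_output / china_pop  # about $1,028 for the quarter

print(f"Japan: ${japan_per_capita:,.0f} per person")
print(f"China: ${china_per_capita:,.0f} per person")
print(f"ratio: {japan_per_capita / china_per_capita:.1f}x")  # roughly 10x
```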

Derek Scissors at Heritage:

It’s true that simple GDP does matter. The increasing size of China’s economy means the entire world is now affected by its voracious demand for oil, iron ore, and other commodities, as well as its low-cost supply of consumer electronics, clothing, and other goods.

But for successful economic development, what matters far more is the wealth of individuals and families. Japanese economic weakness shows not in its still impressive 3rd place in world GDP but in its roughly 40th place on measures of personal income. Japan’s economy was once thought better managed and better performing than America’s, yet the average citizen of Japan is now poorer than the average citizen of Mississippi. American citizens are noticeably richer than citizens of most other developed countries, such as those in the EU. But Japan, in particular, is moving backward.

In contrast to Japan’s 20 years of weakness, there has been stunning growth in Chinese GDP per capita for 30 years. Yet China is still a developing economy. Chinese GDP per capita, even adjusted for purchasing power, is about 15 percent of the U.S. level. Further, GDP per capita actually exaggerates China’s performance.

The PRC’s incomplete data revisions undermine comparisons but, from the middle of 2000 to the middle of 2010, GDP per capita increased by more than 9500 yuan or, at present exchange rates, another $2800 in annual income. However, urban disposable income increased less than 6800 yuan, or about $2000 in annual income. And rural income increased less than 2000 yuan, or $600 in annual income.

Razib Khan at Discover

Robert Reich at Wall Street Pit:

Think of China as a giant production machine that’s growing 10 percent a year (this year, somewhat less). The machine sucks in more and more raw materials and components from rest of world – it’s now the world’s #1 buyer of iron ore and copper, and close to the #1 importer of crude oil – and spews out a growing mountain of stuff, along with huge environmental problems.

But because the Chinese consume a smaller and smaller proportion of this stuff, it has to be exported to consumers elsewhere (Europe, North America, Japan) to keep the Chinese working. Much of the money China earns by selling it around the world is reinvested in factories, roads, trains, and power plants that enlarge China’s capacity to produce far more. Another big portion is lent to or invested in the rest of the world (helping to finance America’s budget deficit at very low cost).

But this can’t go on. China’s workers won’t allow it. Workers in other nations who are losing their jobs won’t allow it, either.

The answer is not simply more labor agitation in China or an upward revaluation of China’s currency relative to the dollar. The problem is bigger. All over the world, we’re witnessing a growing gap between production and consumption, while the environment continues to degrade. The Chinese machine is fast heading for a breakdown only because it’s growing fastest.


Filed under China

Another Week, Another Ross Douthat Column

Ross Douthat at NYT:

There’s an America where it doesn’t matter what language you speak, what god you worship, or how deep your New World roots run. An America where allegiance to the Constitution trumps ethnic differences, language barriers and religious divides. An America where the newest arrival to our shores is no less American than the ever-so-great granddaughter of the Pilgrims.

But there’s another America as well, one that understands itself as a distinctive culture, rather than just a set of political propositions. This America speaks English, not Spanish or Chinese or Arabic. It looks back to a particular religious heritage: Protestantism originally, and then a Judeo-Christian consensus that accommodated Jews and Catholics as well. It draws its social norms from the mores of the Anglo-Saxon diaspora — and it expects new arrivals to assimilate themselves to these norms, and quickly.

These two understandings of America, one constitutional and one cultural, have been in tension throughout our history. And they’re in tension again this summer, in the controversy over the Islamic mosque and cultural center scheduled to go up two blocks from ground zero.

The first America, not surprisingly, views the project as the consummate expression of our nation’s high ideals. “This is America,” President Obama intoned last week, “and our commitment to religious freedom must be unshakeable.” The construction of the mosque, Mayor Michael Bloomberg told New Yorkers, is as important a test of the principle of religious freedom “as we may see in our lifetimes.”

The second America begs to differ. It sees the project as an affront to the memory of 9/11, and a sign of disrespect for the values of a country where Islam has only recently become part of the public consciousness. And beneath these concerns lurks the darker suspicion that Islam in any form may be incompatible with the American way of life.

This is typical of how these debates usually play out. The first America tends to make the finer-sounding speeches, and the second America often strikes cruder, more xenophobic notes. The first America welcomed the poor, the tired, the huddled masses; the second America demanded that they change their names and drop their native languages, and often threw up hurdles to stop them coming altogether. The first America celebrated religious liberty; the second America persecuted Mormons and discriminated against Catholics.

But both understandings of this country have real wisdom to offer, and both have been necessary to the American experiment’s success. During the great waves of 19th-century immigration, the insistence that new arrivals adapt to Anglo-Saxon culture — and the threat of discrimination if they didn’t — was crucial to their swift assimilation. The post-1920s immigration restrictions were draconian in many ways, but they created time for persistent ethnic divisions to melt into a general unhyphenated Americanism.

The same was true in religion. The steady pressure to conform to American norms, exerted through fair means and foul, eventually persuaded the Mormons to abandon polygamy, smoothing their assimilation into the American mainstream. Nativist concerns about Catholicism’s illiberal tendencies inspired American Catholics to prod their church toward a recognition of the virtues of democracy, making it possible for generations of immigrants to feel unambiguously Catholic and American.

So it is today with Islam. The first America is correct to insist on Muslims’ absolute right to build and worship where they wish. But the second America is right to press for something more from Muslim Americans — particularly from figures like Feisal Abdul Rauf, the imam behind the mosque — than simple protestations of good faith.

Too often, American Muslim institutions have turned out to be entangled with ideas and groups that most Americans rightly consider beyond the pale. Too often, American Muslim leaders strike ambiguous notes when asked to disassociate themselves completely from illiberal causes.

Jennifer Rubin at Commentary:

Granted, the “conservative spot” on the Gray Lady’s op-ed pages comes with plenty of caveats and handcuffs. So if a conservative columnist is going to last more than a year, he will have to suppress his harshest impulses toward the left and a great deal of his critical faculties. The result is likely to be condescending columns like today’s by Ross Douthat.

He posits two Americas: “The first America tends to make the finer-sounding speeches, and the second America often strikes cruder, more xenophobic notes.” The first cares about the Constitution, and the second is composed of a bunch of racist rubes, it seems. “The first America celebrated religious liberty; the second America persecuted Mormons and discriminated against Catholics.” Yes, you can guess which are the opponents of the Ground Zero mosque. (I was wondering if he was going to write, “The first America helped little old ladies across the street; the second America drowned puppies.”)

I assume that this is what one has to do to keep one’s piece of turf next to such intellectual luminaries as Maureen Dowd, but it’s really the worst straw-man sort of argument since, well, the last time Obama spoke. But he’s not done: “The first America is correct to insist on Muslims’ absolute right to build and worship where they wish. But the second America is right to press for something more from Muslim Americans — particularly from figures like Feisal Abdul Rauf, the imam behind the mosque — than simple protestations of good faith.” OK, on behalf of the rubes in Second America, enough!

Second America — that’s 68% of us — recognizes (and we’ve said it over and over again) that there may be little we can do legally (other than exercise eminent domain) to halt the Ground Zero mosque, but that doesn’t suspend our powers of judgment and moral persuasion. Those who oppose the mosque are not bigots or constitutional ruffians. They merely believe that our president shouldn’t be cheerleading the desecration of “hallowed ground” (“first America’s” term, articulated by Obama) or averting our eyes from the funding sources of the imam’s planned fortress.

E.D. Kain at Balloon Juice:

Leaving aside the obvious fact that Muslims have actually been migrating here for many years and sprouting up second and third and seventh generations in the United States, this use of a specific instance – the Cordoba Center – to segue into a larger framework in which American Muslims writ large are not doing enough to assimilate is, to put it bluntly, nonsense. (And are no American Muslims a part of Second America? Then they must all be part of First America…unless we’re working on creating a Third America. That’s possible, too.)

He goes on:

Too often, American Muslim institutions have turned out to be entangled with ideas and groups that most Americans rightly consider beyond the pale. Too often, American Muslim leaders strike ambiguous notes when asked to disassociate themselves completely from illiberal causes.

I wonder what exactly qualifies as ‘too often’? What percentage of Muslim institutions fits this criterion? Furthermore, what bearing does this have on the question of the Ground Zero Mosque?

For Muslim Americans to integrate fully into our national life, they’ll need leaders who don’t describe America as “an accessory to the crime” of 9/11 (as Rauf did shortly after the 2001 attacks), or duck questions about whether groups like Hamas count as terrorist organizations (as Rauf did in a radio interview in June). And they’ll need leaders whose antennas are sensitive enough to recognize that the quest for inter-religious dialogue is ill served by throwing up a high-profile mosque two blocks from the site of a mass murder committed in the name of Islam.

They’ll need leaders, in other words, who understand that while the ideals of the first America protect the e pluribus, it’s the demands the second America makes of new arrivals that help create the unum.

Leaders like this guy, perhaps? I mean, if we’re going to just lump everyone of a particular faith together and cherry-pick the ‘leaders’ who we feel best represent them, why not pick the loudest of the bunch?

And if we can identify the group’s leaders, then we can pigeonhole the entire population’s motives. We can attribute the words of the few to the motives of the many. We can rile up “second America” against the fearful Other. And we can do it all quite nicely by calling into question the sincerity of the group’s desire to properly integrate into mainstream culture. It’s their fault, after all, that they haven’t made it all the way. Why would any real American want to build a mosque so near ground zero?

Jamelle Bouie at Tapped:

But this is bad history; the nativists of 19th-century America weren’t much interested in having “new arrivals adapt to Anglo-Saxon culture.” Rather, the nativists of mid-19th-century America wanted to keep immigrants off of American shores. In its 1856 platform, the American Party — otherwise known as the “Know-Nothing Party” — pushed for the mass expulsion of poor immigrants, and declared that “Americans must rule America, and to this end native-born citizens should be selected for all State, Federal, and municipal offices of government employment, in preference to all others.” Likewise, nativism in the late 19th century was preoccupied with keeping foreigners out of the United States. Here is a passage from the constitution of the Immigration Restriction League, formed in 1894 by a handful of Harvard graduates:

The objects of this League shall be to advocate and work for further judicious restriction or stricter regulation of immigration, to issue documents and circulars, solicit facts and information on that subject, hold public meetings, and to arouse public opinion to the necessity of a further exclusion of elements undesirable for citizenship or injurious to our national character.

This seems completely obvious, but nativists and xenophobes have never been interested in seeing immigrants join our nation and culture as Americans. Our modern-day nativists — as represented by the previously mentioned Tea Party activists — see “undesirable” immigrants as pests to be dealt with, not potential Americans:

“Instead of finding bugs in our beds, we’re finding home invaders,” said Tony Venuti, a Tucson radio host who attached a huge sign to the fence that told immigrants to head to Los Angeles, where they will be more welcome, and even offered directions for getting there.

Contra Douthat, nativists and xenophobes have never been integral to assimilating immigrants. That distinction goes to the assimilationists of American life who understood — and understand — that “American-ness” can be learned and adopted. Different assimilationists had different approaches to bringing immigrants into American life, but they were united by a common view of America as an open society.

Jonathan Bernstein:

Jamelle Bouie has a great post up this morning about assimilation and immigration, riffing off of Ross Douthat’s column.  Douthat’s claim is that the America of high-minded ideals is at odds with cultural protectionism, and while the latter is bigoted and small-minded, it also winds up having the virtue of forcing newer immigrants and minorities in general to conform to American cultural norms (including those high-minded ideals).  I think Bouie is a bit harsher than necessary to Douthat, who isn’t exactly warm towards those who he says use discrimination and persecution to get their way.  But I also think Bouie is correct: Douthat’s claim that it’s the nativists who have indirectly encouraged assimilation through intimidation may not be entirely wrong, but it’s a somewhat strained reading of history — the nativists didn’t want assimilation, they wanted (and often got) exclusion.  And Bouie is right that Douthat’s history ignores that those in Douthat’s “first” America (the one with the high-minded ideals) have almost always supported and worked to achieve assimilation.

But I think both of them are missing the main actors here: the immigrants themselves, who in almost all cases have been pretty desperate to assimilate as quickly as possible.  That was true of the great immigration waves in the late 19th and early 20th centuries, and it’s true of the great immigration wave now.  Of course, each group has had various cultural bits and pieces they keep with them (bits and pieces which generally are gobbled up by the larger American culture, so that everyone eats tacos and bagels), and each group has minorities within their minority who resist assimilation, keeping the old language and practices alive (although often radically altered, sometimes without anyone realizing it) even as most of the community drifts — runs — towards America.

Matt Welch at Reason:

Such John Edwards-style reductionism inevitably sends off alarm bells, but this paragraph in particular smelled funny to me:

[B]oth understandings of this country have real wisdom to offer, and both have been necessary to the American experiment’s success. During the great waves of 19th-century immigration, the insistence that new arrivals adapt to Anglo-Saxon culture — and the threat of discrimination if they didn’t — was crucial to their swift assimilation. The post-1920s immigration restrictions were draconian in many ways, but they created time for persistent ethnic divisions to melt into a general unhyphenated Americanism.

Is this true? To find out I asked an old college newspaper buddy of mine, the immigration historian Christina Ziegler-McPherson, who is author of a recent book called Americanization in the States: Immigrant Social Welfare Policy, Citizenship, and National Identity in the United States, 1908-1929. She e-mailed me back 2,500 words; thought I’d pass along a few of them:

Douthat is full of crap in several ways:

1. […] [F]or much of the 19th century, except in the big cities like New York, immigrants and natives had little contact and less competition with one another, because the country was growing and was so physically big. […]

This is not to discount the nativism (i.e. the Know Nothing party) of the mid-1850s but that was a city phenomenon and was driven mostly by anti-Catholicism inspired by famine Irish immigration. Some people didn’t like “clannish” Germans but as long as they weren’t Catholic, no one complained as much. Nativism in the mid-19th century was basically an anti-Irish phenomenon. AND, in some ways, it wasn’t anti-immigrant, just anti-Catholic, and sought to slow down the integration of immigrants into the polity (i.e., by requiring a much longer period of residency before naturalization, and this was as much an elite anti-machine politics idea as anti-Irish or anti-immigrant).

Also, there was no real “national” culture until after the Civil War (and this developed gradually with industrialism and the spread of a mass media and eventually mass consumption) so there could be no “insistence” on immigrants assimilating. Who the heck is he talking about? […]

2. Nativism, and some aspects of the Americanization movement of the WWI period (especially the more coercive stuff), have always had the effect of making immigrants cling more tightly to their cultures, their languages, their traditions. This is both basic psychology and historical fact, and it can be documented for many groups.

Any attack on religion (which frankly, is what anti-Muslim talk is; it’s not anti-ethnic, because there’s no ethnic group called “Muslim”) encourages more orthodoxy, not less, and is totally counter-productive, because of the 1st Amendment. The American Catholic Church became the authoritarian institution that it was in the 19th and early 20th centuries in large part because of Anglo-American Protestants insisting that Protestantism and Americanism were synonymous and attacking Irish Catholics. […]

[T]he harder you push for “assimilation”…the more you get orthodoxy, extremism, alienation.

3. Post-WWI restrictions were separate from the Americanization movement and were not designed to encourage assimilation (although a few people did realize that assimilation might happen if immigrants were cut off from rejuvenating contact with their home cultures). The 1924 and 1929 restrictions were explicitly racist (and I mean that in the 19th century biological sense, as in, we don’t want our blood being contaminated by alien blood which is different and is incompatible with ours.)…Eugenics heavily influenced the 1924 and 1929 acts and eugenicists were the statisticians who determined the specific quotas for each group. […]

The problem of course with Douthat, besides that he has no idea about what he’s talking about, is he’s so vague. When in the 19th century? Which groups? Where? What created these “persistent ethnic divisions”? Are these institutional, cultural, created by policy? Who the heck can tell?

Alex Knapp:

First of all, you’ll note that Little Italys and Chinatowns still exist all over the country. There are neighborhoods on the East Coast where you’re lost if you don’t speak Italian, and neighborhoods on the West Coast where you’re lost if you don’t speak Chinese. There are people living in these neighborhoods who are still hostile to outsiders, and lots of different ethnic neighborhoods share this characteristic. And it’s important to realize that these ethnic enclaves, with their insularity and hostility to integration, not only failed to “swiftly assimilate,” they failed to swiftly assimilate because of discrimination. Because of the law and because of cultural prejudice, Italians, Chinese, Irish, Slavs, Jews and other immigrants were very often not hired by their neighbors. As a consequence, Italians hired Italians, Chinese hired Chinese, Irish hired Irish, etc. Immigrant neighborhoods were often either ignored by the police or shaken down by them for protection money. In either case, in a desperate desire for order, immigrants turned to organized crime for protection from criminals or the police. While the Mafiosi were brutal, greedy and ruthless, they also kept order on the streets and took care of widows, etc. (You can actually see a similar pattern in Palestine, where Hamas was voted into power not only as a reaction against Israel and the PLO, but also because while Arafat’s government was growing rich and corrupt on foreign aid payments, Hamas was building schools and medical clinics for the destitute.)

Indeed, the combination of the rise of organized crime and the hostility from “second America” more likely delayed the integration of immigrant communities. That integration didn’t really start to happen until various immigrant populations simply became numerous enough to vote their preferred candidates into office, as in the experience of the Irish in Boston.

Another example of Douthat’s willful glossing over of history comes in his discussion of the Mormon experience:

The same was true in religion. The steady pressure to conform to American norms, exerted through fair means and foul, eventually persuaded the Mormons to abandon polygamy, smoothing their assimilation into the American mainstream.

This is a great example of how to write something that’s factually true but rhetorically false. Given his tone, you’d think that Mormon families were getting some glares and “tsk tsks” at PTA meetings. The reality, of course, is that Mormons were violently persecuted, first by their neighbors in Illinois and Missouri, and then by the U.S. Army after they moved to Utah. The Mormons weren’t “persuaded” to abandon polygamy; they were forced to after the United States Congress disincorporated the Church and seized all Mormon assets. Mormon leaders fought the Act in the courts, but the Supreme Court ultimately upheld Congress’ Act. It was only then that the Mormons capitulated to the government. And it was a long time before Mormons got over that and became more assimilated into everyday American life. Even then, there was considerable hostility in some quarters of the Republican Party toward Mitt Romney because of his religion.

I definitely agree that, as a culture, Americans should encourage the integration of immigrant populations into everyday life. But that integration isn’t built on fear and peer pressure. It’s built on tolerance, a shared ideal of freedom, and the embrace of new cultures into the rich tapestry of American life. Integration comes from delicious foods at Indian buffets and the required learning about American government before an immigrant takes his oath of citizenship. It certainly doesn’t come from protesting mosques or putting up No Irish Need Apply signs on the door of your business.

UPDATE: Conor Friedersdorf at Andrew Sullivan’s place

Douthat responds to Friedersdorf

Razib Khan at Secular Right


Filed under History, Immigration, Mainstream, New Media, Religion

“Don’t Trust One-Offs”

Jim Manzi in City Journal:

[…]

Another way of putting the problem is that we have no reliable way to measure counterfactuals—that is, to know what would have happened had we not executed some policy—because so many other factors influence the outcome. This seemingly narrow problem is central to our continuing inability to transform social sciences into actual sciences. Unlike physics or biology, the social sciences have not demonstrated the capacity to produce a substantial body of useful, nonobvious, and reliable predictive rules about what they study—that is, human social behavior, including the impact of proposed government programs.

The missing ingredient is controlled experimentation, which is what allows science positively to settle certain kinds of debates. How do we know that our physical theories concerning the wing are true? In the end, not because of equations on blackboards or compelling speeches by famous physicists but because airplanes stay up. Social scientists may make claims as fascinating and counterintuitive as the proposition that a heavy piece of machinery can fly, but these claims are frequently untested by experiment, which means that debates like the one in 2009 will never be settled. For decades to come, we will continue to be lectured by what are, in effect, Keynesian and non-Keynesian economists.

Over many decades, social science has groped toward the goal of applying the experimental method to evaluate its theories for social improvement. Recent developments have made this much more practical, and the experimental revolution is finally reaching social science. The most fundamental lesson that emerges from such experimentation to date is that our scientific ignorance of the human condition remains profound. Despite confidently asserted empirical analysis, persuasive rhetoric, and claims to expertise, very few social-program interventions can be shown in controlled experiments to create real improvement in outcomes of interest.

[…]

After reviewing experiments not just in criminology but also in welfare-program design, education, and other fields, I propose that three lessons emerge consistently from them.

First, few programs can be shown to work in properly randomized and replicated trials. Despite complex and impressive-sounding empirical arguments by advocates and analysts, we should be very skeptical of claims for the effectiveness of new, counterintuitive programs and policies, and we should be reluctant to trump the trial-and-error process of social evolution in matters of economics or social policy.

Second, within this universe of programs that are far more likely to fail than succeed, programs that try to change people are even more likely to fail than those that try to change incentives. A litany of program ideas designed to push welfare recipients into the workforce failed when tested in those randomized experiments of the welfare-reform era; only adding mandatory work requirements succeeded in moving people from welfare to work in a humane fashion. And mandatory work-requirement programs that emphasize just getting a job are far more effective than those that emphasize skills-building. Similarly, the list of failed attempts to change people to make them less likely to commit crimes is almost endless—prisoner counseling, transitional aid to prisoners, intensive probation, juvenile boot camps—but the only program concept that tentatively demonstrated reductions in crime rates in replicated RFTs was nuisance abatement, which changes the environment in which criminals operate. (This isn’t to say that direct behavior-improvement programs can never work; one well-known program that sends nurses to visit new or expectant mothers seems to have succeeded in improving various social outcomes in replicated independent RFTs.)

And third, there is no magic. Those rare programs that do work usually lead to improvements that are quite modest, compared with the size of the problems they are meant to address or the dreams of advocates.

Razib Khan at Discover Magazine:

A friend once observed that you can’t have engineering without science, making the whole concept of “social engineering” somewhat farcical. Jim Manzi has an article in City Journal which reviews the checkered history of scientific methods as applied to humanity, What Social Science Does—and Doesn’t—Know: Our scientific ignorance of the human condition remains profound.

The criticisms of a scientific program as applied to humanity are deep and two-pronged. As Manzi notes, the “causal density” of human phenomena makes teasing causation from correlation very difficult. Additionally, the large scale and humanistic nature of social phenomena make it ethically and practically impossible to apply the methods of scientific experimentation to them. This is why social scientists look for “natural experiments” or extrapolate from “WEIRD” subject pools. But as Manzi notes, many of the correlations themselves are highly context-sensitive and not amenable to replication.

Arnold Kling:

If David Brooks is going to give out his annual awards for most important essays, I would nominate this one.

One of the lessons that is implicit in the essay (and that I think that Manzi ought to make explicit) is, “Don’t trust one-offs.” That is, do not draw strong conclusions based on a single experiment, no matter how well constructed. Instead, wait until many experiments have been conducted, in a variety of settings and using a variety of techniques. An example of a one-off that generated a lot of recent excitement is the $320,000 kindergarten teacher study.

Mark Kleiman:

I’m sorry, but this is incoherent. What is this magical “trial-and-error process” that does what scientific inquiry can’t do? On what basis are we to determine whether a given trial led to successful or unsuccessful results? Uncontrolled before-and-after analysis, with its vulnerability to regression toward the mean? And where is the mystical “social evolution” that somehow leads fit policies to survive while killing off the unfit?

Without any social-scientific basis at all (unless you count Gary Becker’s speculations) we managed to expand incarceration by 500 percent between 1975 and the present. Is that fact – the resultant of a complicated interplay of political, bureaucratic, and professional forces – to be accepted as evidence that mass incarceration is a good policy, and the “counter-intuitive” finding that, past a given point, expanding incarceration tends, on balance, to increase crime be ignored because it’s merely social science? Should the widespread belief, implemented in policy, that only formal treatment cures substance abuse cause us to ignore the evidence to the contrary provided by both naturalistic studies and the finding of the HOPE randomized controlled trial that consistent sanctions can reliably extinguish drug-using behavior even among chronic criminally-active substance abusers?

For some reason he doesn’t specify, Manzi regards negative trial results as dispositive evidence that social innovators are silly people who don’t understand “causal density.” So he accepts – as well he should – the “counter-intuitive” result that juvenile boot camps were a bad idea. But why are those negative results so much more impressive than the finding that raising offenders’ reading scores tends to reduce their future criminality?

Surely Manzi is right to call for methodological humility and catholicism; social knowledge does not begin and end with regressions and controlled trials. But the notion that prejudices embedded in policies reflect some sort of evolutionary result, and therefore deserve our respect when they conflict with the results of careful study, really can’t be taken seriously.
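Kleiman’s aside about uncontrolled before-and-after analysis is worth making concrete. The simulation below is a synthetic illustration of regression toward the mean, not an analysis from either author: target a do-nothing “program” at the units that measured worst, remeasure, and the program appears to work anyway.

```python
# Synthetic demonstration: select the worst-measured units, change
# nothing, and an uncontrolled before/after comparison shows "improvement."
import random

random.seed(0)
N = 10_000
true_level = [random.gauss(100, 10) for _ in range(N)]  # each unit's true crime level

def observe(levels):
    # Observed score = true level + year-to-year measurement noise.
    return [x + random.gauss(0, 15) for x in levels]

before = observe(true_level)

# "Enroll" the worst-scoring 10% of units, as measured before.
cutoff = sorted(before)[int(0.9 * N)]
treated = [i for i in range(N) if before[i] >= cutoff]

after = observe(true_level)  # no intervention actually happened

avg = lambda idx, xs: sum(xs[i] for i in idx) / len(idx)
print(f"treated units, before: {avg(treated, before):.1f}")  # ~132, inflated by noise
print(f"treated units, after:  {avg(treated, after):.1f}")   # ~110, reverting to the mean
```

A randomized control group remeasured over the same period would show the same “improvement,” which is exactly why the controlled trials Manzi describes are needed.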

Manzi responds at The American Scene:

This leads Kleiman to ask:

What is this magical “trial-and-error process” that does what scientific inquiry can’t do? On what basis are we to determine whether a given trial led to successful or unsuccessful results? Uncontrolled before-and-after analysis, with its vulnerability to regression toward the mean? And where is the mystical “social evolution” that somehow leads fit policies to survive while killing off the unfit?

I devoted a lot of time to this related group of questions in the forthcoming book. The shortest answer is that social evolution does not allow us to draw rational conclusions with scientific provenance about the effectiveness of various interventions, for methodological reasons including those that Kleiman cites. Social evolution merely renders (metaphorical) judgments about packages of policy decisions as embedded in actual institutions. This process is glacial, statistical and crude, and we live in the midst of an evolutionary stream that we don’t comprehend. But recognition of ignorance is superior to the unfounded assertion of scientific knowledge.

Kleiman then goes on to ask this:

Without any social-scientific basis at all (unless you count Gary Becker’s speculations) we managed to expand incarceration by 500 percent between 1975 and the present. Is that fact – the resultant of a complicated interplay of political, bureaucratic, and professional forces – to be accepted as evidence that mass incarceration is a good policy, and the “counter-intuitive” finding that, past a given point, expanding incarceration tends, on balance, to increase crime be ignored because it’s merely social science?

My answer is yes, it should be counted as evidence, but it is not close to dispositive. We cannot glibly conclude that we now live in the best of all possible worlds. I devoted several chapters to trying to lay out some principles for evaluating when, why and how we should consider, initiate and retrospectively evaluate reforms to our social institutions.

Kleiman’s last question is:

Should the widespread belief, implemented in policy, that only formal treatment cures substance abuse cause us to ignore the evidence to the contrary provided by both naturalistic studies and the finding of the HOPE RCT that consistent sanctions can reliably extinguish drug-using behavior even among chronic criminally-active substance abusers?

My answer to this is no, and a large fraction of the article (and the book) is devoted to making the case that exactly such randomized trials really are the gold standard for the kind of knowledge that is required to make reliable, non-obvious predictions that rationally outweigh settled practice and even common sense. The major caveat to the evaluation of this specific program (about which Kleiman is deeply expert) is whether or not the experiment has been replicated, as I also make the argument that replication is essential to drawing valid conclusions from such experiments – the principle that Arnold Kling called in a review of the article, “Don’t trust one-offs.”

Steven Pearlstein at WaPo

Steve Sailer:

That all sounds plausible, but I’ve been a social science stats geek since 1972, when the high school debate topic was education, so I’m aware that Manzi’s implications are misleading.

First, while experiments are great, correlation studies of naturally occurring data can be extremely useful. Second, a huge number of experiments have been done in the social sciences.

Third, the social sciences have come up with a vast amount of knowledge that is useful, reliable, and nonobvious, at least to our elites.

For example, a few years ago, Mayor Bloomberg and NYC schools supremo Joel Klein decided to fix the ramshackle admissions process to the gifted schools by imposing a standardized test on all applicants. Blogger Half Sigma immediately predicted that the percentage of Asians and whites admitted would rise at the expense of blacks and Hispanics, which would cause a sizable unexpected political problem for Bloomberg and Klein. All that has come to pass.

This inevitable outcome should have been obvious to Bloomberg and Klein from a century of social science data accumulation, but it clearly was not obvious to them.

No, the biggest problem with social science research is not methodological; it’s that we just don’t like the findings. The elites of America don’t like what the social sciences have uncovered about, say, crime, education, discrimination, immigration, and so forth.

Andrew Sullivan:

But there is a concept in this crucial conservative distinction between theoretical and practical wisdom that has been missing so far: individual judgment. A social change can never be proven in advance to be the right answer to a pressing problem. We can try to understand previous examples; we can examine large randomized trials; but in the end, we have to make a judgment about the timeliness and effectiveness of certain changes. It is the ability to sense when such a moment is ripe that we used to call statesmanship. It is that quality that no wonkery can ever replace.

It is why we elect people and not algorithms.

Will Wilkinson:

In my thinking about the contrasts between Rawlsian and Hayekian liberalism, I’ve begun to think of the former as the “liberalism of respect” and the latter as the “liberalism of discovery.” The liberalism of discovery recognizes the pervasiveness of our ignorance and the necessity of liberty for the emergence of useful knowledge. I would argue that the ideal of a social order embodying respect for persons as free and equal — the ideal of the liberalism of respect — comes to seem appealing only after a society has attained a certain level of economic development and general education, and these are largely consequences of a prior history of the relatively free play of the mechanisms of discovery celebrated by liberals like Hayek and Jim. But liberals of respect have tended to overlook the conditions under which people come to find their favored ideal worth aspiring to, and so have tended to fail to acknowledge in their theories of justice the role of the institutions of discovery in creating and maintaining a society of mutual respect and fair reciprocity.

Via Sullivan, Kleiman responds to Manzi:

I suppose I’ll have to read Manzi’s book to find out how existing practices constitute “(metaphorical) judgments about packages of policy decisions;” I’m inclined to regard them as mostly mere resultants-of-forces, with little claim to deference. (Thinking that existing arrangements somehow embody tacit knowledge is a different matter from thinking that big changes are likely to have unexpected consequences, mostly bad, though both are arguments for caution about grand projects.)

I’m also less unimpressed than Manzi is with how much non-obvious stuff about humans living together the social sciences have already taught us. That supply and demand will, without regulation, come into equilibrium at some price was a dazzling and radical social-scientific claim when Adam Smith and his friends suggested it. So too for Ricardo’s analysis of comparative advantage, which, while it doesn’t fully support the free-trade religion that has grown up around it, at least creates a reasonable presumption that trade is welfare-increasing.

The superiority of reward to punishment in changing behavior; the importance of cognitive-dissonance and mean-regression effects in (mis)shaping individual and social judgments; the intractable problem of public-goods contributions; the importance of social capital; the problems created by asymmetric information and the signaling processes it supports; the crucial importance of focal points; the distinction between positive-feedback and negative-feedback processes; the distinction between zero-sum and variable-sum games; the pervasiveness of imperfect rationality in the treatment of risk and of time-value, and the consequent possibility that people will, indeed, damage themselves voluntarily: none of these was obvious when proposed, and all of them are now, I claim, sufficiently well-established to allow us to make policy choices based on them, with some confidence about likely results. (So, for that matter, is the Keynesian analysis of insufficient demand and what to do about it.)

But, if I read Manzi’s response correctly, my original comment allowed a merely verbal disagreement to exaggerate the extent of the underlying substantive disagreement. If indeed Manzi can offer some systematic analysis of how to look at existing institutions, figure out which ones might profitably be changed, try out a range of plausible changes, gather careful evidence about the results of those changes, and modify further in light of those results, then Manzi proposes what I would call a “scientific” approach to making public policy.

Manzi responds to Kleiman:

I think that he is reading my response correctly. While I don’t think that “all I meant” was that “you shouldn’t read some random paper in an economics or social-psych journal” and propose X, I certainly believe that. Most important, I acknowledge enthusiastically his “sauce for the goose is sauce for the gander” point that the recognition of our ignorance should apply to things that I theorize are good ideas as much as it does to anything else. The law of unintended consequences does not apply only to Democratic proposals.

In fact, I have argued for supporting charter schools instead of school vouchers for exactly this reason. Even if one has the theory (as I do) that we ought to have a much more deregulated market for education, I more strongly hold the view that it is extremely difficult to predict the impacts of such drastic change, and that we should go one step at a time (even if on an experimental basis we are also testing more radical reforms at very small scale). I go into this in detail for the cases of school choice and social security privatization in the book.

Megan McArdle:

I have been reading with great interest the back-and-forth between Mark Kleiman and Jim Manzi on how much more humble we ought to be about new policy changes.  I know and like both men personally, as well as having a healthy respect for two formidable intellects, so I’ve greatly enjoyed the exchange.

Naturally, this has put me in mind of just how hard it is to predict policy outcomes–how easy it is to settle on some intuitively plausible outcome, without considering some harder-to-imagine countervailing force.

Consider the supply-siders. The thing is intuitively appealing: when we get more money from working, we ought to be willing to work more. And it is a mathematical truism that revenue must maximize at some point. Why couldn’t we be on the right-hand side of the Laffer Curve?

It was entirely possible that we were; unfortunately, it wasn’t true. And one of the reasons the supply-siders failed was that they were captivated by that one appealing intuition. In economics, it’s known as the “substitution effect”: as your wages go up, leisure becomes more expensive relative to work, so you tend to do less of the former and more of the latter.

Unfortunately, the supply-siders missed another important effect, known as the “income effect”.  Which is to say that as you get richer, you demand more of some goods, and less of others.  And one of the goods you demand more of as you get richer–a class of goods known as “superior goods”–is leisure.

Of course, some people are so driven that they will simply work until they drop in the traces.  But most people like leisure.  So say you raise the average wage by 10%.  Suddenly people are bringing home 10% more income every hour.  Now, maybe this makes them all excited so they decide to work more.  On the other hand, maybe they decide they were happy at their old income, and now they can enjoy their old income while working 9% fewer hours.  Cutting taxes could actually reduce total output.

(We will not go into the question of how much most people can control their hours–on the one hand, most people can’t, very well, but on the other hand, those who can tend to be the high-earning types who pay most of your taxes.)

Which happens depends on which effect is stronger.  In practice, apparently neither was strong enough to thoroughly dominate, at least not when combined with employers who still demanded 40 hour weeks.  You do probably get a modest boost to GDP from tax cuts.  But you also get falling tax revenue.

Naturally, even-handedness demands that I here expose the wrong-headedness of some liberal scheme. And as it happens, I have one all ready in the oven here: the chimera of reducing emergency room use. The argument that health care reform could somehow at least partially pay for itself by keeping people from using the emergency room was always dubious. As I and others argued, there’s not actually that much evidence that people use the emergency room because they are uninsured — rather than because they have to work during normal business hours, are poor planners, or are afraid that immigration may somehow find them at a free clinic.

Moreover, we argued, non-emergent visits to the emergency room mostly use the spare capacity of trauma doctors; the average cost may be hundreds of dollars, but the marginal cost of slotting ear infections in when you don’t happen to have a sucking chest wound is probably pretty minimal.

But even I was not skeptical enough to predict what actually happened in Massachusetts, which is that emergency room usage went up after they implemented health care reform.
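McArdle’s substitution-versus-income tug-of-war can be put in toy-model form. The sketch below is illustrative only; the wages, hours, and elasticities are made-up assumptions, not figures from her post.

```python
# Toy labor-supply model: hours rise with the net wage through a
# substitution elasticity and fall through an income elasticity.
def hours_worked(net_wage, base_wage=20.0, base_hours=40.0,
                 substitution=0.3, income=0.25):
    # Constant-elasticity sketch; the two effects nearly offset here.
    return base_hours * (net_wage / base_wage) ** (substitution - income)

def weekly_revenue(tax_rate, gross_wage=25.0):
    # Revenue = rate * gross wage * hours worked at the after-tax wage.
    net = gross_wage * (1 - tax_rate)
    return tax_rate * gross_wage * hours_worked(net)

for rate in (0.30, 0.25, 0.20):
    h = hours_worked(25.0 * (1 - rate))
    print(f"tax {rate:.0%}: hours {h:.1f}, revenue ${weekly_revenue(rate):.0f}")
# With nearly offsetting elasticities, cutting the rate barely changes
# hours worked, so revenue falls roughly in proportion to the cut --
# a modest output boost plus falling tax revenue, as McArdle says.
```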


Filed under Go Meta

Popping Out Children Like Brooms In The Sorcerer’s Apprentice Part Of “Fantasia”

Bryan Caplan at Wall Street Journal:

Amid the Father’s Day festivities, many of us are privately asking a Scroogely question: “Having kids—what’s in it for me?” An economic perspective on happiness, nature and nurture provides an answer: Parents’ sacrifice is much smaller than it looks, and much larger than it has to be.

Most of us believe that kids used to be a valuable economic asset. They worked the farm, and supported you in retirement. In the modern world, the story goes, the economic benefits of having kids seem to have faded away. While parents today make massive personal and financial sacrifices, children barely reciprocate. When they’re young, kids monopolize the remote and complain about the food, but do little to help around the house; when you’re old, kids forget to return your calls and ignore your advice, but take it for granted that you’ll continue to pay your own bills.

Many conclude that if you value your happiness and spending money, the only way to win the modern parenting game is not to play. Low fertility looks like a sign that we’ve finally grasped the winning strategy. In almost all developed nations, the total fertility rate—the number of children the average woman can expect to have in her lifetime—is well below the replacement rate of 2.1 children. (The U.S. is a bit of an outlier, with a rate just around replacement.) Empirical happiness research seems to validate this pessimism about parenting: All else equal, people with kids are indeed less happy than people without.

[…]

A closer look at the General Social Survey also reveals that child No. 1 does almost all the damage. Otherwise identical people with one child instead of none are 5.6 percentage points less likely to be very happy. Beyond that, additional children are almost a happiness free lunch. Each child after the first reduces your probability of being very happy by a mere .6 percentage points.

Happiness researchers also neglect a plausible competing measure of kids’ impact on parents’ lives: customer satisfaction. If you want to know whether consumers are getting a good deal, it’s worth asking, “If you had to do it over again, would you make the same decision?” The only high-quality study of parents’ satisfaction dates back to a nation-wide survey of about 1,400 parents by the Research Analysis Corp. in 1976, but its results were stark: When asked, “If you had it to do over again, would you or would you not have children?” 91% of parents said yes, and only 7% expressed buyer’s remorse.

You might think that everyone rationalizes whatever decision they happened to make, but a 2003 Gallup poll found that wasn’t true. When asked, “If you had to do it over again, how many children would you have, or would you not have any at all?” 24% of childless adults over the age of 40 wanted to be child-free the second time around, and only 5% more were undecided. While you could protest that childlessness isn’t always a choice, it’s also true that many pregnancies are unplanned. Bad luck should depress the customer satisfaction of both groups, but parenthood wins hands down.

The main problem with parenting pessimists, though, is that they assume there’s no acceptable way to make parenting less work and more fun. Parents may feel like their pressure, encouragement, money and time are all that stands between their kids and failure. But decades’ worth of twin and adoption research says the opposite: Parents have a lot more room to safely maneuver than they realize, because the long-run effects of parenting on children’s outcomes are much smaller than they look.

Think about everything parents want for their children. The traits most parents hope for show family resemblance: If you’re healthy, smart, happy, educated, rich, righteous or appreciative, the same tends to be true for your parents, siblings and children. Of course, it’s difficult to tell nature from nurture. To disentangle the two, researchers known as behavioral geneticists have focused on two kinds of families: those with twins, and those that adopt. If identical twins show a stronger resemblance than fraternal twins, the reason is probably nature. If adoptees show any resemblance to the families that raised them, the reason is probably nurture.

Parents try to instill healthy habits that last a lifetime. But the two best behavioral genetic studies of life expectancy—one of 6,000 Danish twins born between 1870 and 1900, the other of 9,000 Swedish twins born between 1886 and 1925—found zero effect of upbringing. Twin studies of height, weight and even teeth reach similar conclusions. This doesn’t mean that diet, exercise and tooth-brushing don’t matter—just that parental pressure to eat right, exercise and brush your teeth after meals fails to win children’s hearts and minds.

Parents also strive to turn their children into smart and happy adults, but behavioral geneticists find little or no evidence that their effort pays off. In research including hundreds of twins who were raised apart, identical twins turn out to be much more alike in intelligence and happiness than fraternal twins, but twins raised together are barely more alike than twins raised apart. In fact, pioneering research by University of Minnesota psychologist David Lykken found that twins raised apart were more alike in happiness than twins raised together. Maybe it’s just a fluke, but it suggests that growing up together inspires people to differentiate themselves; if he’s the happy one, I’ll be the malcontent.
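The twin logic Caplan leans on can be made concrete with a standard behavioral-genetics back-of-the-envelope, Falconer’s formula. The correlations below are invented for illustration and do not come from his article or the studies he cites.

```python
# Falconer's formula: decompose trait variance from twin correlations.
r_mz = 0.75  # resemblance (correlation) of identical twins on some trait
r_dz = 0.45  # resemblance of fraternal twins on the same trait

heritability = 2 * (r_mz - r_dz)   # "nature": genetic share of variance
shared_env = r_mz - heritability   # "nurture": shared family environment
nonshared = 1 - r_mz               # everything else, incl. measurement noise

print(f"heritability (nature):        {heritability:.2f}")  # 0.60
print(f"shared environment (nurture): {shared_env:.2f}")    # 0.15
print(f"non-shared / noise:           {nonshared:.2f}")     # 0.25
# When adoptees barely resemble the families that raised them, the
# shared-environment term is near zero -- the pattern Caplan reports.
```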

David Mills at First Things:

“Many conclude that if you value your happiness and spending money, the only way to win the modern parenting game is not to play. Low fertility looks like a sign that we’ve finally grasped the winning strategy,” writes Bryan Caplan in The Breeder’s Cup, published in The Wall Street Journal‘s weekend edition. Readers will remember the widely promoted study of a few years ago declaring that having children made parents less happy or, depending on the writer, outright unhappy.

In yet another example of the mainline press picking up on what our own David Goldman had been saying for years in his Spengler columns (search “demography” and “population”) and in Demographics and Depression, Caplan argues that the studies we have show that this equation of limited families with the good life is wrong. After challenging the study I just mentioned, he writes:

Happiness researchers also neglect a plausible competing measure of kids’ impact on parents’ lives: customer satisfaction. If you want to know whether consumers are getting a good deal, it’s worth asking, “If you had to do it over again, would you make the same decision?”

The only high-quality study of parents’ satisfaction dates back to a nation-wide survey of about 1,400 parents by the Research Analysis Corp. in 1976, but its results were stark: When asked, “If you had it to do over again, would you or would you not have children?” 91% of parents said yes, and only 7% expressed buyer’s remorse.

You might think that everyone rationalizes whatever decision they happened to make, but a 2003 Gallup poll found that wasn’t true. When asked, “If you had to do it over again, how many children would you have, or would you not have any at all?” 24% of childless adults over the age of 40 wanted to be child-free the second time around, and only 5% more were undecided.

While you could protest that childlessness isn’t always a choice, it’s also true that many pregnancies are unplanned. Bad luck should depress the customer satisfaction of both groups, but parenthood wins hands down.

He goes on to argue that parents could make themselves happier, but his reason is a little uncomfortable: “the long-run effects of parenting on children’s outcomes are much smaller than they look.”

Matt Zeitlin:

For Caplan, you start with what economists think, then see what voters think, and then chalk up the difference as evidence of irrationality. He, in accordance with this general faith in what economists think, proposes all sorts of reforms that would take decision-making power out of the hands of the public and into the hands of economists, like giving the Council of Economic Advisors “the power to invalidate legislation as ‘uneconomical,’” and giving college graduates an extra vote.

The problem for Caplan is that economists generally agree that voters are rational, and that insofar as voters are misinformed, their errors tend to cancel out. So Caplan has to make a further argument for why we should trust economists on policy issues but ignore their collective judgment on whether voters are rational.

He seems to have pulled this trick again in his arguments for why, despite the rather robust finding in happiness research that having kids decreases reported happiness, people should have lots of kids. And, to take the argument even further, that they should have kids for selfish reasons; they should do so for themselves. Now, he makes some arguments for why this research need not lead to non-fertile outcomes and why the stuff that produces the negative happiness effects of having kids isn’t all that useful or important, but we are still left with another case where Caplan is making a significant, contestable point that is at odds with what economists think about the issue.

I’m not saying that this is a bad thing in and of itself, but it sure puts Caplan in a weird position where he agrees with economists on everything except the stuff he devotes his time to researching and writing about.

Will Wilkinson at Megan McArdle’s place:

Bryan really struggles with the fact that children tend to have a negative effect on self-reported happiness. (Most economists are dismissive of survey evidence, but, to his credit, Bryan isn’t.) He tries to minimize the damage this finding does to his argument by pointing out that the negative effect is small for the first kid, and even smaller for additional kids. But it remains that if one is trying to maximize happiness, no kids appears to be the best bet and fewer is better than more.

Of course, self-reported happiness is just one dubiously reliable piece of evidence about the effect of kids on well-being. The trouble with Bryan’s strategy in the WSJ essay is that he resorts to even less reliable survey evidence to support his position. He cites polls that show that people tend not to report regrets about having had kids, but that a large majority of those who have not had kids say they would choose to have them if they “had it to do over again.” Now, Darwinian logic suggests that the belief that one would be better off without children will not tend to be widespread. That is, as Harvard psychologist Daniel Gilbert argues, we should expect conviction in the satisfactions of parenthood to be strong and all but universal whether or not those convictions reflect the truth. So one would want to check them against, say, the self-reported life satisfaction of those with and without children. Or, if one is inclined to think like an economist, one might say “talk is cheap” and check these beliefs against what people actually do.

In that case, what one finds is that increases in average levels of education, levels of disposable income, gender equality, and access to birth control — that is, increases in the ability of people (and especially women) to deliberately control the conditions of their own lives — generally lead people to choose a smaller rather than larger number of children. As far as I can tell, Bryan’s response is that it “lacks perspective” to take at face value this truly striking tendency of choice under conditions of increasing personal control. If Bryan really thinks rising education, wealth, and gender equality have somehow made us worse at evaluating the costs and benefits of children, he probably ought to turn in his economist card.

None of this is to say that there aren’t excellent reasons to have families larger than the relatively small rich-country norm. It’s just that these tend not to be the kinds of reasons economists consider “selfish.”

Razib Khan at Discover:

Being an economist, he focuses on rational individual behavior, but I want to point to another issue: group norms. In the left-liberal progressive post-graduate educated circles which I come into contact with in the USA, childlessness is not uncommon and bears no stigma (on the contrary, I often hear of implicit and explicit pressure from advisors on graduate students to forgo children for the sake of maximizing lifetime labor input into research). On the other hand, the norm of a two-child family is also very strong, and going above replacement brings upon you a fair amount of attention. The rationale here is often environmental: more children = more of a carbon footprint. But my friend Gregory Cochran, who is well above replacement and whose social milieu is more conservative, has stated that he perceives that having more than two children is also seen as deviant in Middle American society. In other words, the reasoning may differ, but the intuition is the same (in Italy the reasoning mostly involves the cost of raising children from the perspective of parents, both in cash and time).

The numbers in the General Social Survey tell the tale. In 1972, 42% of adults had more than 2 children; in 2008, 32% did. More relevantly, in 1972, 47% of adults between the ages of 25 and 45 had more than 2 children; in 2008, only 27% of that age group did.
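
Shares like these are easy to reproduce, since the GSS microdata are public. A minimal pandas sketch, assuming an extract saved as a CSV with the standard GSS variables YEAR, AGE, and CHILDS (the file name is a placeholder, and CHILDS is taken at face value):

import pandas as pd

# Placeholder extract; YEAR, AGE, and CHILDS are standard GSS variable names.
gss = pd.read_csv("gss_extract.csv")

for year in (1972, 2008):
    adults = gss[gss["YEAR"] == year]
    prime = adults[adults["AGE"].between(25, 45)]
    print(year,
          "all adults with >2 children: {:.0%}".format((adults["CHILDS"] > 2).mean()),
          "ages 25-45 with >2 children: {:.0%}".format((prime["CHILDS"] > 2).mean()))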

Of course the numbers mix up a lot of different subcultures. One anecdote I’d like to relate is a conversation I had with a secular left-of-center university educated couple. They expressed the aspiration toward 4 children. I asked them out of curiosity about the population control issue, and they looked at me like I was joking. It needs to be mentioned that they weren’t American; rather, they were from a Northern European country which seems on the exterior to resemble the United States very much. But it reminds us of the importance of group norms in shaping life choices and expectations, the implicit framework for our explicit choices.

All that goes to my point that Bryan Caplan’s project will be most effective among demographics geared toward prioritizing individual choice, analysis and utility maximization, as opposed to relying upon the wisdom of group norms: economists, quantitative social science and finance types, libertarians, etc.

Andrew Leonard at Salon:

But that leads us to the truly deranged part of the argument: Caplan believes that we shouldn’t be working so hard to be good parents, because, hey, the quality of our parenting doesn’t really make any difference to how our kids turn out. He cites a few behavioral genetics studies, mostly on sets of twins, that purport to show very little difference in outcomes when children with the same genetic makeup are raised by different parents.

It’s the ultimate get-out-of-jail-free parenting card!

Many find behavioral genetics depressing, but it’s great news for parents and potential parents. If you think that your kids’ future rests in your hands, you’ll probably make many painful “investments” — and feel guilty that you didn’t do more. Once you realize that your kids’ future largely rests in their own hands, you can give yourself a guilt-free break.

If you enjoy reading with your children, wonderful. But if you skip the nightly book, you’re not stunting their intelligence, ruining their chances for college or dooming them to a dead-end job. The same goes for the other dilemmas that weigh on parents’ consciences. Watching television, playing sports, eating vegetables, living in the right neighborhood: Your choices have little effect on your kids’ development, so it’s OK to relax. In fact, relaxing is better for the whole family. Riding your kids “for their own good” rarely pays off, and it may hurt how your children feel about you.

So we should have more kids, and spend less time and effort parenting them, and just kick back and enjoy the fruits of our non-labors, presumably generated when our offspring stroke our egos by visiting us in our nursing homes and telling us how cool we were for setting no curfews and letting them play videogames until they keeled over in front of their computers from lack of proper hydration.

I guess I do see a certain libertarian world view integrity here. If you judge modes of political organization from the foundational precept that good government is impossible, then why not also assume that good parenting is, if not impossible, merely useless? If you’re going to dump John Maynard Keynes then why not throw out Dr. Spock as well?

Who knew that lazy permissiveness would become a calling card of libertarian parenting ideology? I’ll concede that there are tendencies towards over-parenting in American culture that verge on the extreme, and could quite possibly be counter-productive. The frantic competition to get your baby into the best pre-school in Manhattan — a struggle that seems to start before the child is even born — may not be the most efficient use of resources. Caplan is certainly right on one point: we should relax more — relaxed parents, I would submit, are better parents. But to leap from that starting point to the contention that our choices have little effect on our children’s development seems, in my own anecdotal understanding of the world, to go too far. Even worse, it smacks of an abdication of responsibility, a surrender to the worst kind of easy rationalization. Good parenting is hard, but even if the differences we are making are only perceivable at the margins, that shouldn’t absolve us from the necessity and pleasure of making any effort at all. It’s not a winning or losing strategy: It’s a way to be in the world.

Tony Woodlief at Megan McArdle’s place:

To be sure, there are too many parents who, despite their children, remain narcissistic nimrods. But the nature of parenting is to beat that out of you. There’s just no time to spend on ourselves, at least not like we would if we didn’t have babies to wash and toys to clean up, usually in the middle of the night, after impaling our feet on them.

People are inherently self-centered, and especially in a peaceful, prosperous society, this easily leads to self-indulgence that in turn can make us weak and ignoble. There’s something to be said for ordeals — like parenting, or marriage, or tending the weak and broken — which push us into an other-orientation. When we have to care for someone, we get better at, well, caring for people. It actually takes practice, after all. I’m still trying to get it right.

I suppose an economist could make this all fit. What I’m really saying, the economist might contend, is that one element of my self-interest, in addition to enjoying a leisurely meal, and plenty of sleep, and the ability to go away on vacations without worrying about who will watch the youngsters, is not becoming (remaining?) a jerk. Kids certainly don’t guarantee that won’t happen, but they help mitigate the risk. And if we conceptualize that self-interest, in turn, as happiness, we’re right back where we started.

But I wonder if the questions would change. Instead of asking parents and non-parents whether they are happy right now, we might ask whether they are becoming more like the people they want to be. And then we might see children not as factors that may or may not be contributing to our happiness, but as opportunities to practice what most of us — perhaps me most of all — need to do more often, which is to put someone else before ourselves.

James Poulos at Ricochet:

The unique thing about children is that, at one and the same time, we both share our identity with them and don’t. In some ways, there’s no one more deeply ‘identical’ than you and your child. But in other ways, of course — marvelously awesome and frustrating ways — there’s no one more deeply different, precisely because your kid’s differences with you are so intimately connected to your own differences with him or her. That’s the amazing foundation of an astonishing kind of relationship. There’s nothing like it. Not even friendship compares.

In our broader relations with at first undifferentiated ‘others’, it makes us happy to develop friendships. There’s something inherent, I think, in the connection between friendship and happiness. A happy society is one where lots and lots of people are friends with each other — where there are ‘thick webs of social trust,’ as an academic might say. And yes, a happy family is one where relations are of a kind we’d describe in popular shorthand as ‘friendly’…but that’s not quite it. That’s not the full story, is it?

Happiness might not be beside the point of life. But the stubborn persistence of family leads me to believe that oftentimes we humans want, maybe desperately, maybe in spite of ourselves, something more than happiness. If we ignore this in our political life, we’re going to wind up with a system of laws and a power structure that cuts against the grain of that powerful human longing. And the costs of that might be very high indeed.

James Joyner:

Moreover, I’d argue that the definitions of “happiness” at work here are dubious.

My 17-month-old woke up a few minutes ago and interrupted my writing. She does that kind of thing a lot. Indeed, pretty much every morning. And when she does, I have to stop what I’m doing, usually at an inopportune time. And that makes me unhappy!

Is this momentary inconvenience outweighed by the joy she brings me? Of course.

But having kids means constant diversion from doing what you want to be doing at any given moment. And having multiple children, I’m reliably told, tends to increase that phenomenon geometrically. Indeed, parents the world over agree: Kids are a giant pain in the ass!

Those of us who are reasonably intelligent and had children by conscious decision knew all this going in. Indeed, one of the amusing things about impending first-time fatherhood is the number of people who dispense the advice “It’ll change your life!” But that doesn’t make the sacrifices and trade-offs less real.

While I’m a social scientist by training, I’m not a sociologist, much less steeped in the literature in question here. But I don’t know that it’s possible to develop measures to quantify the thousands of instances of “unhappiness” that come from the annoyances of parenthood and the less frequent but far more potent joys. And I certainly don’t think it’s possible to do it in a way that satisfies an economist’s notion of “happiness.”

UPDATE: Jennifer Senior in New York Magazine

Ezra Klein


BeFri 4 Never?

Hilary Stout at NYT:

Most children naturally seek close friends. In a survey of nearly 3,000 Americans ages 8 to 24 conducted last year by Harris Interactive, 94 percent said they had at least one close friend. But the classic best-friend bond — the two special pals who share secrets and exploits, who gravitate to each other on the playground and who head out the door together every day after school — signals potential trouble for school officials intent on discouraging anything that hints of exclusivity, in part because of concerns about cliques and bullying.

“I think it is kids’ preference to pair up and have that one best friend. As adults — teachers and counselors — we try to encourage them not to do that,” said Christine Laycob, director of counseling at Mary Institute and St. Louis Country Day School in St. Louis. “We try to talk to kids and work with them to get them to have big groups of friends and not be so possessive about friends.”

“Parents sometimes say Johnny needs that one special friend,” she continued. “We say he doesn’t need a best friend.”

That attitude is a blunt manifestation of a mind-set that has led adults to become ever more involved in children’s social lives in recent years. The days when children roamed the neighborhood and played with whomever they wanted to until the streetlights came on disappeared long ago, replaced by the scheduled play date. While in the past a social slight in backyard games rarely came to teachers’ attention the next day, today an upsetting text message from one middle school student to another is often forwarded to school administrators, who frequently feel compelled to intervene in the relationship. (Ms. Laycob was speaking in an interview after spending much of the previous day dealing with a “really awful” text message one girl had sent another.) Indeed, much of the effort to encourage children to be friends with everyone is meant to head off bullying and other extreme consequences of social exclusion.

Elizabeth Scalia at The Anchoress:

Unreal. Read the article. The schools and “experts” are intrusive and unnatural. And sad.

This isn’t about what’s good for the children; it is about being better able to control adults by stripping from them any training in intimacy and interpersonal trust. Don’t let two people get together and separate themselves from the pack, or they might do something subversive, like…think differently.

This move against “best friends” is ultimately about preventing individuals from nurturing and expanding their individuality. It is about training our future adults to be unable to exist outside of the pack, the collective. The schools want you to think this is about potential bullying and the sadness of some children feeling “excluded.” But that is not what this is about.

As a kid I was the target of “the pack;” I know more than I care to about schoolyard bullies, and I can tell you that the best antidote to them was having a good friend. One good friend who shares your interests and ideas and sense of humor can erase the negative effects of the conform-or-die “pack” with which one cannot identify, “the pack” that cannot comprehend why one would not wish to join them and will not tolerate resistance.

Marc Thiessen at The American Enterprise Institute:

The absurdity of this approach is beyond measure. For one thing, it is completely at odds with real life. When kids grow up, they’re not going to be “friends with everyone.” In the real world there are people who will like you, and people who will dislike you; people who are kind, and people who are cruel; people you can trust, and people you can’t trust; people who will be there for you in good times and bad, and people who will abandon you when the going gets tough.

Childhood is when kids learn to recognize those different types of people, experience joys and disappointments of different kinds of friendships, and learn the social skills they will need to develop mature relationships later in life. As one psychologist quoted in the article puts it, “No one can teach you what a great friend is, what a fair-weather friend is, what a treacherous and betraying friend is except to have a great friend, a fair-weather friend or a treacherous and betraying friend.”

Denying kids the opportunity to have such experiences stunts their development. It also teaches kids to develop superficial relationships with lots of people, without learning how to develop deep bonds of meaning and consequence with anyone. Think about it: Who among us would tell their deepest, darkest secrets to “everyone”? Denying kids a “best friend” makes it harder to get through childhood—and makes it harder to be a successful adult one day as well.

Obviously, schools want to discourage cliques, ensure that no children are ostracized or bullied, and help those along who have trouble bonding with their peers. But the solution to such problems is not to discourage kids who do bond with their peers from doing so—or consciously separate them when they do.

This is but the latest misguided effort to protect children from the realities of life that only harms them in the long run. First came the trend to stop keeping score in childhood sports and give everyone a “participation trophy”—discouraging excellence and achievement, and shielding kids from the reality of winning and losing. Now comes a new fad of separating best friends—denying kids the magic of those first special friendships.

Jonah Goldberg at The Corner:

The stories are so familiar there’s no need to go into specifics. The experts of the helping professions want to tell you what to eat, what to drink, how to drive, how to talk, how to think. Sometimes they have a point, and as the father of a young child, I’m perfectly willing to concede that cliques and whatnot can be unhealthy or mean. But this really goes to 11.

Lisa Solod Warren at Huffington Post:

I was bullied in middle school, and I wrote a seminal article on school bullying for Brain, Child magazine a few years ago (well before the topic became so hot), and I say: Balderdash. Bullying is a problem; it can even be a tragedy. But the fact that a couple of kids bond as best friends is not the cause of bullying, and stopping best friendships is not going to be the “cure.”

I have always counted myself fortunate to have a best friend as well as a couple of other women in my life with whom I am extremely close. I met my oldest best friend, Patti, when I was eight years old. Now, 46 years later, separated by hundreds of miles, we can still pick up the phone and start a conversation right in the middle. She knows my past and I know hers: all the dirty bits, the secrets, the moments we might not want to remember. She came to my father’s funeral a few months ago and I know that whatever I asked, whenever I asked it, she would be there. She knows the same of me.

She’s been there for me through a whole host of life changes. And those life changes began soon after we met in third grade. Had anyone discouraged me from clinging to her, or her to me, there would indeed have been hell to pay. And to what end? Is there any kind of scientific evidence that proves that being friends with an entire group of people without having one special person on whom one can absolutely rely is preferable? I wonder, actually, why on earth anyone would study this sort of thing in the first place. Bullying is about power. Power and insecurity. It’s something I found is often “taught” or handed down from generation to generation. Stopping kids from having one great friend whom they can trust to have their back is not going to prevent bullying. If anything, when a child doesn’t have someone he or she can trust, someone outside the family, bullying can seem even more onerous and scary than it already is. I never told my parents I was bullied. But Patti knew. And she defended me.

Razib Khan at Secular Right:

The article is in The New York Times. It’s a paper which usually tries really hard to pretend toward objective distance, but I get the sense that even the author of the piece was a bit confused by the weirdness which had infected the educational establishment.

Rod Dreher:

What crackpots. The idea that the way to decrease bullying is to deny children the opportunity to make a special friend or friends is cruel and crazy. It’s like saying that the way to stop school gun violence is to prevent anything that even looks like a gun from being brought to school — like, say, little toy soldiers pinned to a hat. No teacher or school would object to that. Oh, wait…


So Easy A Caveman Can Do It

Science Magazine:

The morphological features typical of Neandertals first appear in the European fossil record about 400,000 years ago (1–3). Progressively more distinctive Neandertal forms subsequently evolved until Neandertals disappeared from the fossil record about 30,000 years ago (4). During the later part of their history, Neandertals lived in Europe and Western Asia as far east as southern Siberia (5) and as far south as the Middle East. During that time, Neandertals presumably came into contact with anatomically modern humans in the Middle East from at least 80,000 years ago (6, 7) and subsequently in Europe and Asia.

Neandertals are the sister group of all present-day humans. Thus, comparisons of the human genome to the genomes of Neandertals and apes allow features that set fully anatomically modern humans apart from other hominin forms to be identified. In particular, a Neandertal genome sequence provides a catalog of changes that have become fixed or have risen to high frequency in modern humans during the last few hundred thousand years and should be informative for identifying genes affected by positive selection since humans diverged from Neandertals.

Substantial controversy surrounds the question of whether Neandertals interbred with anatomically modern humans. Morphological features of present-day humans and early anatomically modern human fossils have been interpreted as evidence both for (8, 9) and against (10, 11) genetic exchange between Neandertals and the presumed ancestors of present-day Europeans. Similarly, analysis of DNA sequence data from present-day humans has been interpreted as evidence both for (12, 13) and against (14) a genetic contribution by Neandertals to present-day humans. The only part of the genome that has been examined from multiple Neandertals, the mitochondrial DNA (mtDNA) genome, consistently falls outside the variation found in present-day humans and thus provides no evidence for interbreeding (15–19). However, this observation does not preclude some amount of interbreeding (14, 19) or the possibility that Neandertals contributed other parts of their genomes to present-day humans (16). In contrast, the nuclear genome is composed of tens of thousands of recombining, and hence independently evolving, DNA segments that provide an opportunity to obtain a clearer picture of the relationship between Neandertals and present-day humans.

A challenge in detecting signals of gene flow between Neandertals and modern human ancestors is that the two groups share common ancestors within the last 500,000 years, which is no deeper than the nuclear DNA sequence variation within present-day humans. Thus, even if no gene flow occurred, in many segments of the genome, Neandertals are expected to be more closely related to some present-day humans than they are to each other (20). However, if Neandertals are, on average across many independent regions of the genome, more closely related to present-day humans in certain parts of the world than in others, this would strongly suggest that Neandertals exchanged parts of their genome with the ancestors of these groups.
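
The paper formalizes exactly this test as the D statistic (the “ABBA-BABA” test): count sites where one present-day genome, but not the other, shares a derived allele with the Neandertal, and compare the two counts. A toy Python sketch of the computation, with invented counts rather than real data:

def d_statistic(abba: int, baba: int) -> float:
    """ABBA: sites where genome 2 (but not genome 1) matches the
    Neandertal's derived allele; BABA: the mirror-image pattern.
    With no gene flow and no population structure, incomplete lineage
    sorting produces both patterns equally often, so D sits near zero."""
    return (abba - baba) / (abba + baba)

# Invented counts, for illustration only; a positive D suggests the
# Neandertal is closer to genome 2 (e.g., a non-African individual).
print(d_statistic(abba=1000, baba=900))  # ~ 0.05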

Several features of DNA extracted from Late Pleistocene remains make its study challenging. The DNA is invariably degraded to a small average size of less than 200 base pairs (bp) (21, 22), it is chemically modified (21, 23–26), and extracts almost always contain only small amounts of endogenous DNA but large amounts of DNA from microbial organisms that colonized the specimens after death. Over the past 20 years, methods for ancient DNA retrieval have been developed (21, 22), largely based on the polymerase chain reaction (PCR) (27). In the case of the nuclear genome of Neandertals, four short gene sequences have been determined by PCR: fragments of the MC1R gene involved in skin pigmentation (28), a segment of the FOXP2 gene involved in speech and language (29), parts of the ABO blood group locus (30), and a taste receptor gene (31). However, although PCR of ancient DNA can be multiplexed (32), it does not allow the retrieval of a large proportion of the genome of an organism.

The development of high-throughput DNA sequencing technologies (33, 34) allows large-scale, genome-wide sequencing of random pieces of DNA extracted from ancient specimens (35–37) and has recently made it feasible to sequence genomes from late Pleistocene species (38). However, because a large proportion of the DNA present in most fossils is of microbial origin, comparison to genome sequences of closely related organisms is necessary to identify the DNA molecules that derive from the organism under study (39). In the case of Neandertals, the finished human genome sequence and the chimpanzee genome offer the opportunity to identify Neandertal DNA sequences (39, 40).

A special challenge in analyzing DNA sequences from the Neandertal nuclear genome is that most DNA fragments in a Neandertal are expected to be identical to present-day humans (41). Thus, contamination of the experiments with DNA from present-day humans may be mistaken for endogenous DNA. We first applied high-throughput sequencing to Neandertal specimens from Vindija Cave in Croatia (40, 42), a site from which cave bear remains yielded some of the first nuclear DNA sequences from the late Pleistocene in 1999 (43). Close to one million bp of nuclear DNA sequences from one bone were directly determined by high-throughput sequencing on the 454 platform (40), whereas DNA fragments from another extract from the same bone were cloned in a plasmid vector and used to sequence ~65,000 bp (42). These experiments, while demonstrating the feasibility of generating a Neandertal genome sequence, were preliminary in that they involved the transfer of DNA extracts prepared in a clean-room environment to conventional laboratories for processing and sequencing, creating an opportunity for contamination by present-day human DNA. Further analysis of the larger of these data sets (40) showed that it was contaminated with modern human DNA (44) to an extent of 11 to 40% (41). We employed a number of technical improvements, including the attachment of tagged sequence adaptors in the clean-room environment (23), to minimize the risk of contamination and determine about 4 billion bp from the Neandertal genome.
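
The contamination estimates mentioned above rest on a simple idea: at “diagnostic” positions where Neandertal mtDNA differs from virtually all present-day humans, a read carrying the modern allele is probably a contaminant. A back-of-the-envelope Python sketch of that logic, with hypothetical read counts (the published estimates use more careful statistical models):

def contamination_rate(neandertal_reads: int, modern_reads: int) -> float:
    """Share of reads carrying the present-day human allele at sites
    where the Neandertal sequence differs from virtually all humans."""
    return modern_reads / (neandertal_reads + modern_reads)

# Hypothetical counts, for illustration only:
print("{:.1%}".format(contamination_rate(neandertal_reads=980, modern_reads=20)))  # 2.0%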

Eliza Strickland at Discover:

Researchers from Germany’s Max Planck Institute for Evolutionary Anthropology first sequenced the entire Neanderthal genome from powdered bone fragments found in Europe and dating from 40,000 years ago–a marvelous accomplishment in itself. Then, they compared the Neanderthal genome to that of five modern humans, including Africans, Europeans, and Asians. The researchers found that between 1 percent and 4 percent of the DNA in modern Europeans and Asians was inherited from Neanderthals, which suggests that the interbreeding took place after the first groups of humans left Africa.

Anthropologists have long speculated that early humans may have mated with Neanderthals, but the latest study provides the strongest evidence so far, suggesting that such encounters took place around 60,000 years ago in the Fertile Crescent region of the Middle East [The Guardian].

The study, published in Science and made available to the public for free, opens up new areas for research. Geneticists will now probe the function of the Neanderthal genes that humans have hung on to, and can also look for human genes that may have given us a competitive edge over Neanderthals.

Erik Trinkaus, an anthropologist at Washington University in St. Louis, who has long argued that Neanderthals contributed to the human genome, welcomed the study, commenting that now researchers “can get on to other things than who was having sex with who in the Pleistocene.”

Science Blog:

Neanderthals lived in much of Europe and western Asia before dying out 30,000 years ago. They coexisted with humans in Europe for thousands of years, and fossil evidence led some scientists to speculate that interbreeding may have occurred there. But the Neanderthal DNA signal shows up not only in the genomes of Europeans, but also in people from East Asia and Papua New Guinea, where Neanderthals never lived.

“The scenario is not what most people had envisioned,” Green said. “We found the genetic signal of Neanderthals in all the non-African genomes, meaning that the admixture occurred early on, probably in the Middle East, and is shared with all descendants of the early humans who migrated out of Africa.”

The study did not address the functional significance of the finding that between 1 and 4 percent of the genomes of non-Africans is derived from Neanderthals. But Green said there is no evidence that anything genetically important came over from Neanderthals. “The signal is sparsely distributed across the genome, just a ‘bread crumbs’ clue of what happened in the past,” he said. “If there was something that conferred a fitness advantage, we probably would have found it already by comparing human genomes.”

The draft sequence of the Neanderthal genome is composed of more than 3 billion nucleotides–the “letters” of the genetic code (A, C, T, and G) that are strung together in DNA. The sequence was derived from DNA extracted from three Neanderthal bones found in the Vindija Cave in Croatia; smaller amounts of sequence data were also obtained from three bones from other sites. Two of the Vindija bones could be dated by carbon-dating of collagen and were found to be about 38,000 and 44,000 years old.

Deriving a genome sequence–representing the genetic code on all of an organism’s chromosomes–from such ancient DNA is a remarkable technological feat. The Neanderthal bones were not well preserved, and more than 95 percent of the DNA extracted from them came from bacteria and other organisms that had colonized the bone. The DNA itself was degraded into small fragments and had been chemically modified in many places.

Carl Zimmer at Discover:

Ideas about our own kinship to Neanderthals have swung dramatically over the years. For many decades after their initial discovery, paleoanthropologists only found Neanderthal bones in Europe. Many researchers decided, like Schaaffhausen, that Neanderthals were the ancestors of living Europeans. But they were also part of a much larger lineage of humans that spanned the Old World. Their peculiar features, like the heavy brow, were just a local variation. Over the past million years, the linked populations of humans in Africa, Europe, and Asia all evolved together into modern humans.

In the 1980s, a different view emerged. All living humans could trace their ancestry to a small population in Africa perhaps 150,000 years ago. They spread out across all of Africa, and then moved into Europe and Asia about 50,000 years ago. If they encountered other hominins in their way, such as the Neanderthals, they did not interbreed. Eventually, only our own species, the African-originating Homo sapiens, was left.

The evidence scientists marshalled for this “Out of Africa” view of human evolution took the form of both fossils and genes. The stocky, heavy-browed Neandertals did not evolve smoothly into slender, flat-faced Europeans, scientists argued. Instead, modern-looking Europeans just popped up about 40,000 years ago. What’s more, they argued, those modern-looking Europeans resembled older humans from Africa.

At the time, geneticists were learning how to sequence genes and compare different versions of the same genes among individuals. Some of the first genes that scientists sequenced were in the mitochondria, little blobs in our cells that generate energy. Mitochondria also carry DNA, and they have the added attraction of being passed down only from mothers to their children. The mitochondrial DNA of Europeans was much closer to that of Asians than either was to Africans. What’s more, the diversity of mitochondrial DNA among Africans was huge compared to the rest of the world. These sorts of results suggested that living humans shared a common ancestor in Africa. And the number of mutations in each branch of the human tree suggested that that common ancestor lived about 150,000 years ago, not a million years ago.

Over the past 30 years, scientists have battled over which of these views–multi-regionalism versus Out of Africa–is right. And along the way, they’ve also developed more complex variations that fall in between the two extremes. Some have suggested, for example, that modern humans emerged out of Africa in a series of waves. Some have suggested that modern humans and other hominins interbred, leaving us with a mix of genetic material.

Reconstructing this history is important for many reasons, not the least of which is that scientists can use it to plot out the rise of the human mind. If Neanderthals could make their own jewelry 50,000 years ago, for example, they might well have had brains capable of recognizing themselves as both individuals and as members of a group. Humans are the only living animals with that package of cognitive skills. Perhaps that package had already evolved in the common ancestor of humans and Neanderthals. Or perhaps it evolved independently in both lineages.

Razib Khan at Discover

John Hawks:

If you had to sum up in a few words, what does this mean for paleoanthropology?

These scientists have given an immense gift to humanity.

I’ve been comparing it to the pictures of Earth that came back from Apollo 8. The Neandertal genome gives us a picture of ourselves, from the outside looking in. We can see, and now learn about, the essential genetic changes that make us human — the things that made our emergence as a global species possible.

And in doing so, they’ve taken a forgotten group of people — whom even most anthropologists had given up on — and they’ve restored them to their rightful place in our heritage.

Beyond that, they’ve taken all of their data and deposited it in a public database, so that the rest of us can inspect them, replicate results, and learn new things from them. High school kids can download this stuff and do science fair projects on Neandertal genomics.

This is what anthropology ought to be.

What did they sequence?

The Max Planck group obtained most of their genomic sequence from three specimens from Vindija — Vi33.16, Vi33.25, and Vi33.26. These are all postcranial fragments with minimal anatomical information. Green and colleagues were able to establish that the three bones represent different women, and that Vi33.16 and Vi33.26 may represent maternal relatives.

From these skeletons they got 5.3 billion bases of sequence. All this from an amount of bone powder about equal in mass to an aspirin pill.

Amazing. I mean, I know the folks at Max Planck are reading this. It’s inspiring to see what they’ve been able to do. These are three pieces of barely diagnostic hominin bone, and they’ve obtained literally hundreds of times more information than we have ever gotten from the fossil record of Neandertals.

I’ll describe the analyses of genetic similarity with humans in more detail below. As a brief summary, of those positions where the human genome differs from chimpanzees, Neandertals have the chimpanzee version around 12.7 percent of the time — meaning that across the genome, a Neandertal and a human will share a genetic ancestor an average of around 800,000 years ago. This is a couple hundred thousand years higher than the same number if we compare two humans to each other. The higher age of genetic common ancestors reflects partial isolation between the Neandertal population and the African populations that gave rise to most of our current genetic variation.
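
That 800,000-year figure is a one-line calculation. The paper calibrates against an average human-chimpanzee genomic divergence of roughly 6.5 million years, and the share of human-chimp difference sites where the Neandertal carries the chimpanzee allele scales that depth down:

\[
t_{\mathrm{human\text{-}Neandertal}} \;\approx\; 0.127 \times 6.5\ \mathrm{Myr} \;\approx\; 0.83\ \mathrm{Myr}
\]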

The team were able to identify 111 candidate duplications, almost all of which have some evidence of copy number variation in humans or other primates. They tentatively show that Neandertals have a bit more copy number variation than present-day humans, and identify a few loci with substantially higher copy numbers in one group or the other.

Jules Crittenden:

I recall in my own anthro days in college that was more of an open question … species or subpopulation … than it seems to have been more recently. Hawks puts us biologically in the same column. They are Homo sapiens, though he allows that some paleontologists will disagree, based on morphological distinctions. But that, he notes, would make all non-Africans interspecies hybrids.

I kind of like the interspecies hybrid idea. It’s got an edgy sound to it. But Hawks also suggests that the 1 to 4 percent is only the currently discernible proportion. It could be higher, and sub-Saharan Africans could have some as yet undetected because it shows no variation from what we all have, part of the baseline, so to speak. I especially like his observation that genetically speaking, a minimum 1 percent Neanderthal genes in 5 billion people is the equivalent of 50 million Neanderthals “yawping from the rooftops,” which he suggests is not a bad genetic success rate for the Neanderthals. They weren’t evolutionary failures after all. Propagation of genetic material being the ultimate point of all our endeavors. Absent as yet is any indication of what discernible traits we might have inherited from them.
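
The “50 million Neandertals” equivalence is likewise just arithmetic on the lower bound:

\[
0.01 \times 5 \times 10^{9} \;=\; 5 \times 10^{7}
\]

A 1 percent share carried by roughly 5 billion people adds up to about 50 million Neandertal genome-equivalents.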

A lot of mind-bending aspects to this news. I’m good with it. I accepted being descended from slithering primordial bog sludge a long time ago. I mean one of those moments when you consider that, yeah, we have some successful molluscs and pretty nasty lizards to thank that we’re here, enjoying the good stuff. They lived and breathed, or whatever, just like us. More recent descent from thickset hairy low brows, that’s neither such a big surprise nor anything to sneer at. A delight that the mysterious, exotic strain known only through science and maybe vague mythic memory turns out, as Hawks says, not to have been a complete dead end. They live in us.

In fact, I was deliriously happy driving home tonight, thinking about all that grand and terrible prehistory. It’s not like anything’s changed with this news. It’s like a few years ago, when I was able to identify a location I had wondered about. The forgotten village in Kent where my father’s people … our Y chromosome … dwelt for centuries, down to the pub where they lived and poured beer, and the names of about 10 generations of them. It was like figuring out there are Saxons, Vikings, Celts and Picts up the line, and the ones who built Stonehenge, learning a little about who and what those direct barbarian forebears were. None of this past is that far in the past, after all. It all happened yesterday. And when you start to zero in on them, it’s a homecoming feeling: “There you are. I knew it.”

Ronald Bailey at Reason:

I will mention that my 23andMe genotype scan indicates my maternal haplogroup is U5a2a, which arose some 40,000 years ago; its bearers were among the first Homo sapiens colonizers of ice age Europe.

If you’re interested, go here for my column on what rights Neanderthals might claim should we ever succeed in using cloning technologies to bring them back.


New Atheists: The New Coke Of Intellectual Combatants?

David Bentley Hart in First Things:

I think I am very close to concluding that this whole “New Atheism” movement is only a passing fad—not the cultural watershed its purveyors imagine it to be, but simply one of those occasional and inexplicable marketing vogues that inevitably go the way of pet rocks, disco, prime-time soaps, and The Bridges of Madison County. This is not because I necessarily think the current “marketplace of ideas” particularly good at sorting out wise arguments from foolish. But the latest trend in à la mode godlessness, it seems to me, has by now proved itself to be so intellectually and morally trivial that it has to be classified as just a form of light entertainment, and popular culture always tires of its diversions sooner or later and moves on to other, equally ephemeral toys.

[…]

The principal source of my melancholy, however, is my firm conviction that today’s most obstreperous infidels lack the courage, moral intelligence, and thoughtfulness of their forefathers in faithlessness. What I find chiefly offensive about them is not that they are skeptics or atheists; rather, it is that they are not skeptics at all and have purchased their atheism cheaply, with the sort of boorish arrogance that might make a man believe himself a great strategist because his tanks overwhelmed a town of unarmed peasants, or a great lover because he can afford the price of admission to a brothel. So long as one can choose one’s conquests in advance, taking always the paths of least resistance, one can always imagine oneself a Napoleon or a Casanova (and even better: the one without a Waterloo, the other without the clap).

But how long can any soul delight in victories of that sort? And how long should we waste our time with the sheer banality of the New Atheists—with, that is, their childishly Manichean view of history, their lack of any tragic sense, their indifference to the cultural contingency of moral “truths,” their wanton incuriosity, their vague babblings about “religion” in the abstract, and their absurd optimism regarding the future they long for?

I am not—honestly, I am not—simply being dismissive here. The utter inconsequentiality of contemporary atheism is a social and spiritual catastrophe. Something splendid and irreplaceable has taken leave of our culture—some great moral and intellectual capacity that once inspired the more heroic expressions of belief and unbelief alike. Skepticism and atheism are, at least in their highest manifestations, noble, precious, and even necessary traditions, and even the most fervent of believers should acknowledge that both are often inspired by a profound moral alarm at evil and suffering, at the corruption of religious institutions, at psychological terrorism, at injustices either prompted or abetted by religious doctrines, at arid dogmatisms and inane fideisms, and at worldly power wielded in the name of otherworldly goods. In the best kinds of unbelief, there is something of the moral grandeur of the prophets—a deep and admirable abhorrence of those vicious idolatries that enslave minds and justify our worst cruelties.

But a true skeptic is also someone who understands that an attitude of critical suspicion is quite different from the glib abandonment of one vision of absolute truth for another—say, fundamentalist Christianity for fundamentalist materialism or something vaguely and inaccurately called “humanism.” Hume, for instance, never traded one dogmatism for another, or one facile certitude for another. He understood how radical were the implications of the skepticism he recommended, and how they struck at the foundations not only of unthinking faith, but of proud rationality as well.

A truly profound atheist is someone who has taken the trouble to understand, in its most sophisticated forms, the belief he or she rejects, and to understand the consequences of that rejection. Among the New Atheists, there is no one of whom this can be said, and the movement as a whole has yet to produce a single book or essay that is anything more than an insipidly doctrinaire and appallingly ignorant diatribe.

If that seems a harsh judgment, I can only say that I have arrived at it honestly. In the course of writing a book published just this last year, I dutifully acquainted myself not only with all the recent New Atheist bestsellers, but also with a whole constellation of other texts in the same line, and I did so, I believe, without prejudice. No matter how patiently I read, though, and no matter how Herculean the efforts I made at sympathy, I simply could not find many intellectually serious arguments in their pages, and I came finally to believe that their authors were not much concerned to make any.

What I did take away from the experience was a fairly good sense of the real scope and ambition of the New Atheist project. I came to realize that the whole enterprise, when purged of its hugely preponderant alloy of sanctimonious bombast, is reducible to only a handful of arguments, most of which consist in simple category mistakes or the kind of historical oversimplifications that are either demonstrably false or irrelevantly true. And arguments of that sort are easily dismissed, if one is hardy enough to go on pointing out the obvious with sufficient indefatigability.

The only points at which the New Atheists seem to invite any serious intellectual engagement are those at which they try to demonstrate that all the traditional metaphysical arguments for the reality of God fail. At least, this should be their most powerful line of critique, and no doubt would be if any of them could demonstrate a respectable understanding of those traditional metaphysical arguments, as well as an ability to refute them. Curiously enough, however, not even the trained philosophers among them seem able to do this. And this is, as far as I can tell, as much a result of indolence as of philosophical ineptitude. The insouciance with which, for instance, Daniel Dennett tends to approach such matters is so torpid as to verge on the reptilian. He scarcely bothers even to get the traditional “theistic” arguments right, and the few ripostes he ventures are often the ones most easily discredited.

As a rule, the New Atheists’ concept of God is simply that of some very immense and powerful being among other beings, who serves as the first cause of all other things only in the sense that he is prior to and larger than all other causes. That is, the New Atheists are concerned with the sort of God believed in by seventeenth- and eighteenth-century Deists. Dawkins, for instance, even cites with approval the old village atheist’s cavil that omniscience and omnipotence are incompatible because a God who infallibly foresaw the future would be impotent to change it—as though Christians, Jews, Muslims, Hindus, Sikhs, and so forth understood God simply as some temporal being of interminable duration who knows things as we do, as external objects of cognition, mediated to him under the conditions of space and time.

Thus, the New Atheists’ favorite argument turns out to be just a version of the old argument from infinite regress: If you try to explain the existence of the universe by asserting God created it, you have solved nothing because then you are obliged to say where God came from, and so on ad infinitum, one turtle after another, all the way down. This is a line of attack with a long pedigree, admittedly. John Stuart Mill learned it at his father’s knee. Bertrand Russell thought it more than sufficient to put paid to the whole God issue once and for all. Dennett thinks it as unanswerable today as when Hume first advanced it—although, as a professed admirer of Hume, he might have noticed that Hume quite explicitly treats it as a formidable objection only to the God of Deism, not to the God of “traditional metaphysics.” In truth, though, there could hardly be a weaker argument. To use a feeble analogy, it is rather like asserting that it is inadequate to say that light is the cause of illumination because one is then obliged to say what it is that illuminates the light, and so on ad infinitum.

Ross Douthat:

Given the durability and predictability of the arguments involved, and the amount of ink spilled on them over the years (and centuries, and millennia), it’s hard to come up with something interesting to say on the question of Christianity versus the “new” atheists. But the Orthodox theologian David Bentley Hart has now managed the trick twice: Once in his slim book “Atheist Delusions: The Christian Revolution and Its Fashionable Enemies,” which came out last year, and now in a fine essay for the latest First Things. Here’s his concluding reflection — but do read the whole thing:

If I were to choose from among the New Atheists a single figure who to my mind epitomizes the spiritual chasm that separates Nietzsche’s unbelief from theirs, I think it would be the philosopher and essayist A.C. Grayling … Couched at one juncture among [his] various arguments (all of which are pretty poor), there is something resembling a cogent point. Among the defenses of Christianity an apologist might adduce, says Grayling, would be a purely aesthetic cultural argument: But for Christianity, there would be no Renaissance art—no Annunciations or Madonnas—and would we not all be much the poorer if that were so? But, in fact, no, counters Grayling; we might rather profit from a far greater number of canvasses devoted to the lovely mythical themes of classical antiquity, and only a macabre sensibility could fail to see that “an Aphrodite emerging from the Paphian foam is an infinitely more life-enhancing image than a Deposition from the Cross.” Here Grayling almost achieves a Nietzschean moment of moral clarity.

Ignoring that leaden and almost perfectly ductile phrase “life-enhancing,” I, too—red of blood and rude of health—would have to say I generally prefer the sight of nubile beauty to that of a murdered man’s shattered corpse. The question of whether Grayling might be accused of a certain deficiency of tragic sense can be deferred here. But perhaps he would have done well, in choosing this comparison, to have reflected on the sheer strangeness, and the significance, of the historical and cultural changes that made it possible in the first place for the death of a common man at the hands of a duly appointed legal authority to become the captivating center of an entire civilization’s moral and aesthetic contemplations—and for the deaths of all common men and women perhaps to be invested thereby with a gravity that the ancient order would never have accorded them.

Here, displayed with an altogether elegant incomprehensibility in Grayling’s casual juxtaposition of the sea-born goddess and the crucified God (who is a crucified man), one catches a glimpse of the enigma of the Christian event, which Nietzsche understood and Grayling does not: the lightning bolt that broke from the cloudless sky of pagan antiquity, the long revolution that overturned the hierarchies of heaven and earth alike. One does not have to believe any of it, of course—the Christian story, its moral claims, its metaphysical systems, and so forth. But anyone who chooses to lament that event should also be willing, first, to see this image of the God-man, broken at the foot of the cross, for what it is, in the full mystery of its historical contingency, spiritual pathos, and moral novelty: that tender agony of the soul that finds the glory of God in the most abject and defeated of human forms. Only if one has succeeded in doing this can it be of any significance if one still, then, elects to turn away.

Rod Dreher:

You really should read the whole thing, especially Hart’s conclusion. Essentially he respects Nietzsche’s atheism a very great deal, though obviously he opposes it, because Hart sees that Nietzsche understands precisely what repudiating Christianity means.

Kevin Drum:

So: do the New Atheists recycle old arguments? Of course they do. But that’s not because they’re illiterate, it’s because those arguments have never been convincingly answered. All the recondite language in the world doesn’t change that, either, because the paradoxes are inherent in the ideas themselves. In the end, the English language probably just isn’t up to the task of answering them, no matter how hard you try to twist it. To say that God is best understood as an absolute plenitude of actuality doesn’t really advance the ball so much as it merely tries to hide it.

Later in the essay, perhaps recognizing that he’s exhausted the semantic possibilities here, Hart redirects his focus to the cultural impact of Christianity, suggesting that the New Atheists haven’t truly grappled with what a world without religion would be like. And perhaps they haven’t. But interior passions and social mores work both ways. Did Isaac Newton feel a deeper aesthetic connection with the infinite when he was inventing calculus or when he was absorbed in Christian mysticism? Who can say? Not me, surely, and not Hart either. Likewise, the question of whether Christianity has, on balance, been a force for moral good is only slightly more tractable. Does keeping the servants from stealing the silver really outweigh the depredations of the Crusades and the Inquisition?

But no matter how beguiling those questions are, surely the metaphysical one always comes first. To say merely that Christianity is comforting or practical — assuming you believe that — is hardly enough. You need to show that it’s true. And if you want to assert that something is true, the onus is on you to demonstrate it, not on the New Atheists to demonstrate conclusively that it isn’t. After all, in the end the only difference between Hart and Dawkins is that Hart believes in 1% of the world’s religions and Dawkins believes in 0% of them. It’s Dawkins’ job only to question that remaining 1%. It’s Hart’s job to answer him.

Andrew Sullivan:

Look: human nature being what it is, most religious people will be a dreadful example of the best version of faith you can find. Drum permits what Hitch’s book was: a grand guignol of anti-clerical fish-barrel-shooting. It’s easy; it’s way fun; mockery of inarticulate believers has made my friend, Bill Maher, lotsa money. But it’s largely missing the real intellectual task by fighting a straw man, rather than a real and living and intelligent faith. Part of that is the fault of believers. We’ve done a lousy job of delineating a living faith for modernity.

UPDATE: Damon Linker at TNR

Kevin Drum

UPDATE #2: Sullivan responds to Drum

Drum responds to Sullivan

Sullivan responds to Drum

UPDATE #3: Kevin Drum

Joe Carter at First Things

Rod Dreher

UPDATE #4: Razib Khan at Secular Right on Carter

She Works Hard For The Money, So Hard For The Money

Gabriel Sherman at New York Magazine:

Palin knew there were ways to solve her money problems, and then some. Planning quickly got under way for a book. And just weeks after the campaign ended, reality-show producer Mark Burnett called Palin personally and pitched her on starring in her own show. Then, in May 2009, she signed a $7 million book deal with HarperCollins. Two former Palin-campaign aides—Jason Recher and Doug McMarlin—were hired to plan a book tour with all the trappings of a national political campaign. But there was a hitch: With Alaska’s strict ethics rules, Palin worried that her day job would get in the way. In March, she petitioned the Alaska attorney general’s office, which responded with a lengthy list of conditions. “There was no way she could go on a book tour while being governor” is how one member of her Alaska staff put it.

On Friday morning, July 3, Palin called her cameraman to her house in Wasilla and asked him to be on hand to record a prepared speech. Around noon, in front of a throng of national reporters, she announced that she was stepping down as governor. To many, it seemed a mysterious move, defying the logic of a potential presidential candidate, and possibly reflecting some hidden scandal—but in fact the choice may have been as easy as balancing a checkbook.

Less than a year later, Sarah Palin is a singular national industry. She didn’t invent her new role out of whole cloth. Other politicians have cashed out, used the revolving door, doing well in business after doing good in public service. Entertainment figures like Arnold Schwarzenegger, Jesse Ventura, and even Ronald Reagan have worked the opposite angle, leveraging their celebrity to make their way in politics. And family dramas have been a staple of politics from the Kennedys—or the Tudors—on down. But no one else has rolled politics and entertainment into the same scintillating, infuriating, spectacularly lucrative package the way Palin has or marketed herself over multiple platforms with the sophistication and sheer ambitiousness that Palin has shown, all while maintaining a viable presence as a prospective presidential candidate in 2012.

The numbers are staggering. Over the past year, Palin has amassed a $12 million fortune and shows no sign of slowing down. Her memoir has so far sold more than 2.2 million copies, and Palin is planning a second book with HarperCollins. This January, she signed a three-year contributor deal with Fox News worth $1 million a year, according to people familiar with the deal. In March, Palin and Burnett sold her cable show to TLC for a reported $1 million per episode, of which Palin is said to take in about $250,000 for each of the eight installments.

David Kurtz at Talking Points Memo:

But what’s more intriguing than that raw number is the underlying dynamic here: the mutual business relationship between Palin and the East Coast elites whom she rails against with populist invective and who scorn her as dumber than a moose. Money can soften any edge.

David Weigel:

Gabriel Sherman’s sprawling New York magazine cover story on “Palin, Inc.” is actually a fast and breezy read. It being an article about Sarah Palin, there’s no policy to slow it down. We get a brief explanation of how bitter Palin was, serving as governor of Alaska while journalist Kaylene Johnson got rich (“I can’t believe that woman is making so much money off my name,” said Palin), especially after Palin realized that her gubernatorial duties would complicate her national book tour. So she quit, and we’re off.

Read it all, but take note of these points.

– According to Sherman, Palin writes her own Facebook posts. That shouldn’t be news, but Palin hired a ghostwriter to finish “Going Rogue” — and some of her early posts, festooned with footnotes, don’t sound like her. According to Sherman, said ghostwriter considered suing over an article by Max Blumenthal that made hay of her collaboration with conservative reporter Robert Stacy McCain.

– Discovery Communications bought Palin’s TV show as the “centerpiece of a strategy that TLC executives see as positioning the network as the anti-Bravo, whose shows like Top Chef, the Real Housewives franchise, and America’s Next Top Model are programmed to a liberal urban audience.” Bodes poorly for boycotters.

Robert Stacy McCain:

A friend wonders why I said nothing about this part of Sherman’s story:

The only real blip concerned her ghostwriter, Lynn Vincent, a writer for the evangelical World magazine, whom Palin chose from a short list of candidates presented to her by HarperCollins. After news of Vincent’s selection leaked, critics seized on a January 2009 pro-life piece she had written for World titled “Black Genocide” — as well as her association with the co-writer of her 2006 book Donkey Cons, former Washington Times writer Robert Stacy McCain (no relation), who had a history of racially charged statements and associations — to claim that Vincent was racist. Vincent, who had collaborated on a New York Times best seller about racial reconciliation, told me that she was deeply hurt by the racism allegation and considered suing the Daily Beast for a piece by writer Max Blumenthal headlined “Palin’s Noxious Ghostwriter.” But when the media shifted its focus to Palin’s next adventure, Vincent dropped the lawsuit idea.

The problem with suing for libel (and as a journalist, I thank God for this) is that under the Sullivan precedent, it’s almost impossible for a “public figure” to win a libel suit. Like politicians and entertainers, an author is more or less automatically a public figure, thus requiring proof of actual malice. And as opposed to, say, a false accusation of criminal behavior, the charge of “racism” is damnably hard to disprove, which is why it is slung around so frequently in political discourse.

So there was no percentage in Lynn suing the Daily Beast, besides which going to court over what was clearly a third-hand guilt-by-association smear wouldn’t help Palin — and helping Palin was what Lynn was hired to do, after all.

And shame on those people who keep spreading malicious rumors that Max Blumenthal was arrested in a raid on a so-called “ladyboy” brothel in Phuket!

Josh Green at The Atlantic:

The article is chock full of Palin porn: her speaking fee ($100,000 a pop, plus diva treatment); her preferred mode of travel (Lear jet); her next headache (Levi Johnston is “writing” a book about her); and, my favorite detail, her three-level, 6000-square-foot, no doubt tastefully decorated new home that was already under construction when Gabe paid a visit. Among other things, the article makes clear that the desire for money, not an imminent scandal, led Palin to quit her governorship.

This all has significant political implications that tend to be downplayed or ignored when discussing Sarah Palin. Toward the end of the piece, Gabe goes right to the heart of the matter:

Why Palin would want to trade the presidency [of right-wing America]–and the salary–for a candidacy that faces possibly insurmountable political hurdles is a question to ponder.

Why indeed? Palin’s prospects in the Republican Party are a good deal dimmer than her star wattage suggests. She’s tallied middling performances in early straw polls and shows no inclination to embark on the grassroots work required of a presidential candidate. More to the point, this article makes clear that, were there any doubt, her preoccupying concern is “building her brand”–less in a political sense than a financial one. Palin may yet make a bid for the White House. But all evidence suggests that when the time comes to choose between earning money and running for president, Palin will choose money.

And she’s hardly alone. The other surprise figure to emerge from the 2008 race, with almost as bright a political future as Palin, was Mike Huckabee. But he, too, is earning serious coin on the book, TV, and lecture circuit, and signaling that he won’t run again. The candidate running the hardest for the White House, Mitt Romney, is also the only one who has secured a fortune. There seems to be developing an inverse correlation between the difficulty of running for president and the easy life that awaits those who fall just short. It’s never been harder to grab the brass ring; and it’s never been easier to quit trying.

Andrew Sullivan on Green:

The political parties are weaker than they once were. The elites cannot control grass-roots Internet-driven phenomena. Look at Obama. He seems a natural president now, but Washington dismissed his chances – as they are now dismissing Palin’s – right up to the Iowa caucuses. And because Palin is such a terrifying – truly terrifying – prospect for the US and the world, I think such complacency, rooted in cynicism about Palin’s mercenary nature, is far too reckless.

Look: what we have seen this past year is the collapse of the RNC as it once was and the emergence of a highly lucrative media-ideological-industrial complex. This complex has no interest in traditional journalistic vetting, skepticism, scrutiny of those in power, or asking the tough questions. It has no interest in governing a country. It has an interest in promoting personalities and ideologies and false images of a past America that both flatter and engage its audience. For most in this business, this is about money. Roger Ailes, who runs a news business, has been frank about what his fundamental criterion is for broadcasting: ratings not truth. Obviously all media has an eye on the bottom line – but in most news organizations, there is also an ethical editorial concern to get things right. I see no such inclination in Fox News or the hugely popular talkshow demagogues (Limbaugh, Levin, Beck et al.), which now effectively control the GOP. And when huge media organizations have no interest in any facts that cannot be deployed for a specific message, they are a political party in themselves.

Add Palin to the mix and you have a whole new machine in American politics – one with the capacity, as much as Obama’s, to upend the established order. Beltway types roll their eyes. But she’s not Obama, they say. She doesn’t know anything, polarizes too many people, has lied constantly and still may have dozens of skeletons in her unvetted closets.

To which the answer must be: where the fuck have you been this past year?

It doesn’t matter whether she’s uneducated, unprincipled, unaware and unscrupulous. The more she’s proven incapable of the presidency, the more her supporters believe she is destined for it. It’s a brilliant little gig she’s devised. She may be ignorant, but she is not stupid. She has the smarts of all accomplished pathological liars and phonies. And this time, she will not even bother to go on any television outlets other than Fox News. She will be the first presidential nominee never to have had a press conference. She will give statements by Facebook. She will speak directly to the cocoon that is, at least, twenty percent of Americans. The press, already a rank failure in exposing her fraudulence, will be so starstruck by the chance to make money that we will never have a Couric-style interview again. It will be Oprah all the time. Because Palin lives in an imaginary world, the entire media world will be required to echo it or be shut out.

Green responds to Sullivan:

Well, I think Andrew is profoundly wrong and borderline nuts on this subject–and if he’s right, and Palin launches a bid for the White House, his nightmare of a Palin presidency is unlikely to be realized. It’s not impossible. Just unlikely. The point of my original post, riffing off this New York magazine piece on Palin’s newfound wealth, was that Palin seems more interested in money than politics. The conventional wisdom in Washington–which Andrew has backward–is that Palin will probably run, though this is less a matter of conviction than a vague sense that she craves the spotlight and won’t pass it up. My mildly contrarian suggestion was that avarice might lead her instead to become a Glenn Beck-like political-entertainment figure, which would furnish her with a platform, a lifestyle, and a way of avoiding the hard work of running for president (a lot tougher than serving a half term as governor).

My point was limited to Palin’s own motivations and desires. But Andrew’s rant doesn’t address that–I don’t think his worldview allows for the possibility that she might not run. He concerns himself instead with lots of black-helicopter sounding stuff about cynical elites and the “media-ideological-industrial complex” and basically stops just short of accusing Palin of fluoridating the water. But after all that, what Andrew has described is not a force powerful enough to elect a president. He’s described (pretty accurately, I might add) elite Washington’s view of the Fox News viewership and then imbued it with a lot more importance than it merits. “Add Palin to the mix,” he writes, “and you have a whole new machine in American politics–one with the capacity, as much as Obama’s, to upend the established order.”

No, you don’t. As Andrew himself points out, the established order of the GOP has already been upended–you wouldn’t have a goofball like Michael Steele as your party chairman if the grownups were still in charge!

DiA at The Economist:

Mr Green is right; she is building a brand. But just so she can be a television hostess? How long would that brand shine if she rebuffed those who will (with very real passion) beg her to run? Yes, she’s uniquely successful at infuriating or terrifying liberals—but that’s because they think that she might still just become president. How does that 2013 contract look when she’s refused to enter the fight? This is hunch-blogging at its most speculative, I confess, but I think she’s in. I don’t see someone who’s preparing for a book-writing and lecture-circuit career. So over to you: what do you see in the estimable Sarah Palin?

Razib Khan at Secular Right:

The profile reduces my probability that Palin will make a serious run (as opposed to a pro forma one) for the highest office in 2012. It also leaves me impressed by how quickly and efficiently she’s leveraged her celebrity and gone from moderately upper middle class in income (and in serious debt due to legal bills after the 2008 campaign) to wealthy. Some Republicans are apparently worried about her becoming the “face of the party,” something that crops up now and then in the media, but it doesn’t seem like they really have to worry that much unless the party has no real substance and is rooted only in style and the need to get elected. As for Sarah Palin, whatever you think of her politics or personality, she’s offering a concrete product distributed through the private sector. The article mentions that her book was a major reason that HarperCollins generated a profit last year! Whatever criticisms one might lodge, she’s not getting rich by being a rent-seeker, as so many of our public and private sector elites have become. In fact the article points to a whole industry of liberal critique which has emerged around her, so she’s not even capturing all the wealth that she’s responsible for (spillover effects).

Mittens And The Brain

First, Karl Rove’s new book:

Daniel Foster at The Corner:

Karl “The Architect” Rove came by the NR offices this afternoon to talk about his new book Courage and Consequence. The conversation spanned from Social Security reform and Medicare Part D to Iraq and the Surge — all topics that Mr. Rove, with his nimble command of even the finest-grained political and policy details, helped frame in light of current political battles.

On the domestic politics surrounding the invasion of Iraq, Rove said he made a “critical mistake” in late 2003 by not squarely confronting what he saw as a calculated and coordinated effort by national Democrats to suggest that President Bush had willfully lied in making his case for war.

“I think they polled it and focus-grouped it,” Rove said, noting that, within days of one another, a half-dozen prominent Congressional Democrats had made public comments suggesting the president lied. But Rove said the campaign was intellectually inconsistent.

“You had Ted Kennedy, for one, voting against the authorization of force and then two days later going to Georgetown and saying Saddam Hussein had weapons of mass destruction,” Rove said.

“If Bush was lying, so were the 60-plus Democrats who said on the floor of Congress that Saddam had WMD,” he observed.

Rove acknowledged that “we weren’t winning the war for a long time,” but said President Bush was “ahead of his commanders” by 2006, both in realizing that he needed to change course, and in expressing interest in the counterinsurgency strategy of General Petraeus.

On the decision to push the troop surge, “Bush said there are two ways for the military to break, either by over-use or by losing a war, and he said it was more dangerous to lose a war.”

Asked if the administration should have replaced Secretary of Defense Donald Rumsfeld sooner, Rove said they began to “quietly find out our other options,” but that it would have been a mistake to “pull Rumsfeld in the highly politicized environment” leading up to the 2006 midterm elections, a move that would have created messy confirmation hearings.

Rove also talked extensively about the Bush administration’s domestic-policy agenda, especially Social Security Reform and Medicare Part D.

Paul Begala at The Daily Beast:

Rove is witty and smart. He likes hunting and loves Texas. If it weren’t for lying us into a war and leading us into a depression, I might even be pals with Rove. And so I opened his book without the level of hostility most of my fellow Democrats might have.

At first, he exceeded my expectations for candor as he wrote about his personal life. Your heart aches for him when you read about the breakup of his parents’ marriage, the disorientation he must have felt when an aunt and uncle casually told him he was adopted and thus the man he thought was his father was no biological relation. His account of his first wife leaving him is unflinching and admirably non-judgmental: “She then looked at me and blurted, ‘I don’t love you. I have never loved you. I never will love you.'” Ouch.

He brings the same unblinking style to the topic of his mother’s suicide: “Like her mother before her in 1974, my mother had dealt with life’s punishing blows by attempting suicide. But unlike my grandmother, Mom succeeded. I was stunned when I got the news but at some deep level I had always known she was capable of this. My mother struggled, even in placid waters, to keep a grip on life.”

Not everyone can confront their family’s failings with such frankness. But when the topic switches from the personal to the political, Rove admits no weakness or mistakes. It turns out (spoiler alert!) that the George W. Bush of Mr. Rove’s tale is strong and brave and wise and kind. He is a man—well, that’s unfair, a god, really, or at least a demigod—possessed of valor and vigor, poise and pluck, humor and humility. His description of his first meeting with the future president sounds like something out of Tiger Beat: “George W. Bush walked through the front door, exuding more charm and charisma than is allowed by law. He had on his Air National Guard jacket, jeans, and boots.” This passage works best if, while you’re reading it, you listen to Donny Osmond sing “Puppy Love.”

One wonders if the admiration was reciprocated. Doubtful. President Bush repaid Rove’s Cavalier King Charles Spaniel-like loyalty by bestowing a nickname on him. No, not “Bush’s Brain” as the press called him—nor something cool like “M-Kat”, Bush’s name for the ever-fashionable media man, Mark McKinnon.

Turd Blossom.

Matt Latimer at The Daily Beast:

I sat next to him while he shouted on the phone with some poor soul in Idaho over the then-unfolding Larry Craig scandal. As we landed in Nevada, he pointed out, somewhat wistfully, where he grew up. When the president and First Lady gave him a surprise farewell party, complete with red velvet cake, he surprised everyone with his visible emotion. Then, when Bush came into the airplane’s conference room to question the necessity of an upcoming political event, Rove flatly refused to hear him out. “Never give an inch,” he muttered as the president walked off.

That mantra, of course, was the secret of his remarkable success and the root of his ultimate undoing. An effective advocate when things were going his way—such as rallying support for the invasion of Iraq—he proved needlessly divisive when things went wrong. He, and Bush, suggested that conservatives who opposed his immigration proposals were xenophobes, racists, fools, or cowards, earning lasting enmity in the process. He supported big-government conservatism that alienated many in the base, some of whom joined the tea party movement. He failed to articulate a conservative vision in favor of short-term tactics and maneuvers. “They were determined to run a base mobilization, narrow margin victory,” former Speaker Newt Gingrich recently charged, “largely because they were SO uncomfortable with ideas.” The result was one election in which we lost the popular vote, another when Republicans barely defeated liberal John Kerry, and two disastrous elections in 2006 and 2008. President Bush left office with a 22 percent approval rating and the GOP, as Jed Babbin, the editor of the conservative newspaper Human Events once put it, was left “a smoking hole in the ground.” In short, Rove’s approach left the GOP about as popular as the dress Sarah Jessica Parker wore to the Oscars.

And yet Rove still doesn’t seem to have figured it out. He advised Senator Kay Bailey Hutchison to wage last week’s losing campaign against the sitting Republican governor of Texas—wounding both officials and the Texas GOP in the process—to score points in his ongoing feud with Governor Perry. The worst-kept secret in Washington is that his associates are behind many of the anonymous Republican attacks on the current chairman of the Republican National Committee, attacks which by complete coincidence of course always seem to make Rove and his allies come out in a better light. And though he is a useful, sometimes brilliant commentator on Fox News, one hopes that he and his compatriots are not trying to run the network as they ran the White House, by urging bookers to keep disfavored people off the airwaves. One suspects Roger Ailes would not put up with that.

One day soon perhaps Rove, with his love of history, will learn the lesson of the former president he says he reveres. Ronald Reagan kept a sign on his desk that said, “There is no limit to what a man can do or where he can go if he doesn’t mind who gets the credit.” Reagan, at least, didn’t believe in his own greatness as much as he believed in the greatness of the ideas that he stood for.

Ed Morrissey:

Karl Rove’s long-awaited memoir of his White House career, Courage and Consequence, hits the bookshelves on Tuesday. Rove has quite a rollout planned for it. He’ll have a Ustream launch at noon ET, which I’ll embed earlier in the morning. After that, Rove will join me on The Ed Morrissey Show to discuss the book, following Andrew Malcolm’s appearance, which begins at 3 pm ET.

It’s already generating some of the histrionics and nastiness we saw from the media during the Bush administration. Dana Milbank today lets his wit run, or rather crawl:

As a White House reporter during the Bush presidency, I often worried that I wasn’t getting the whole story. Now, Karl Rove has finally given it to me.

His new book, “Courage and Consequence,” promises to “pull back the curtain on my journey to the White House and my years there.” What he divulges nearly made me choke on a pretzel.

That business about President George W. Bush misleading the nation about Iraq? Didn’t happen. “Did Bush lie us into war? Absolutely not,” Rove writes.

Condoning torture? Wrong! “The president never authorized torture. He did just the opposite.”

Foot-dragging on global warming? Au contraire. “He was aggressive and smart on this front.”

I’ve written dozens if not hundreds of blog posts refuting these claims, but we’ll save that for Rove on Tuesday. (Getting bad intel is not the same as lying; Democrats made the same WMD claims from 1998 forward; waterboarding as performed by the CIA is arguably not torture, and Congress didn’t object to it as such at the time; and Bush reduced carbon emissions in the US more than Europe did.) Meanwhile, Hot Air readers can get a jump on sales by placing orders now!

John Hinderaker at Powerline:

I’ve just started the book today, but it’s a fascinating and substantial work. It is well written and copiously annotated; not a casually tossed-off memoir, but a book intended as a serious historical document. The chapters on Rove’s youth are touching, and his discussions of campaign strategy are candid and illuminating. I’m looking forward to asking Rove some questions I’ve wondered about for a long time, like: whose idea was it to retract the “16 words,” a decision that began the downfall of the Bush administration? Tune in on Saturday to learn the answer. In the meantime, anyone who wants to understand politics in our time should read Rove’s book.

David Weigel at The Washington Independent:

Rove’s pride and tunnel vision about his campaign tactics aren’t anything new in the Washington memoir genre. Much of Sarah Palin’s “Going Rogue” featured the same sort of finger-pointing about her brief bid for the vice presidency. If anything, Rove takes more obvious relish in attacking the people who made his campaigns difficult — it’s mostly “the kooky left-wing blogosphere” that thinks he ran a dirty campaign against John McCain in 2000, or that only an “imbecile” could have believed the 2004 exit polls that showed a Kerry-Edwards win, and so on.

But unlike Palin — unlike most people with his portfolio — Rove was in the cockpit for much of a consequential presidency that launched two wars and dramatically expanded the size of the federal government. He writes about this the same way he writes about minor tiffs and campaign tricks. He spends a page trying to debunk the idea that Bush ever told Americans to “go shopping” after the September 11 attacks. Technically, he’s right. The closest Bush ever came to using those two precise words — the moment that most people remember as the “go shopping” moment — was his September 27, 2001 remarks at Chicago’s O’Hare Airport, when he urged Americans to “get down to Disney World in Florida” and “take your families and enjoy life, the way we want it to be enjoyed.” But Rove insists that the “closest he ever came” was a different speech in which Bush praised Americans for “going about their daily lives, working and shopping and playing, worshiping at churches and synagogues and mosques, going to movies and to baseball.” Even there, Rove skips past the argument made by critics — that Bush, in a unique position to demand more of Americans, gave an “all-clear” sign and moved on. In writing about Hurricane Katrina, one of his only regrets is “flying over the region in Air Force One on Wednesday, rather than landing.” In one of his few concessions, Rove admits that he’s “one of the people responsible for this mistake.”

“Courage and Consequence” is filled with such arguments. Pre-release excerpts about Rove’s take on the Iraq War — that his biggest regret was that he should have worked harder to spin the fallout over the lack of WMD in Iraq — foreshadowed the way Rove would tackle most of the controversies of his tenure. At several points, he simply misstates facts. He impugns the character of former U.S. Attorney David Iglesias, who was removed from his position in New Mexico after not pursuing politicized prosecutions, by claiming that Iglesias was incompetent and gunning for electoral office. Paragraphs later, he claims that the only qualm that Democrats have with former U.S. Attorney Tim Griffin — who resigned after negative attention on his own politicized appointment — is that they feared it would help Griffin’s career. Left unmentioned is the real Democratic argument, that Griffin helped the Bush-Cheney campaign challenge the registrations of voters in largely African-American, Democratic-leaning areas. But to Rove, the most important Republican political strategist of his generation, Democratic worries about election integrity are basically one big joke. In an unsurprising chapter about the 2000 presidential election recount — revelations are limited to the angry looks and sighs that various players gave to Rove — he refers to the Bush team in Florida as “freedom fighters whose homeland had been occupied as they grappled with a blitzkrieg of lawsuits filed by Gore’s attorneys and street protests led by Jesse Jackson.”

Very little of this should surprise observers of Rove in power or out of power, as a quotable White House aide and then as a Fox News pundit who has reliably attacked the Democrats. Rove’s disinterest in policy or consequences of policy isn’t surprising, either. (“I didn’t pretend to be Carl von Clausewitz or Henry Kissinger, but I knew the Iraq War wasn’t going well,” Rove writes of his thinking in December 2006.) The historical value of the book itself is minimal. It functions, instead, as a test of whether Rove’s combination of pique and pride will be helpful as Bush administration veterans argue that they spent eight years changing America for the better, over the cries of critics, only to watch their work be ruined by Barack Obama and his pack of elitist liberals.

Noah Kristula-Green at FrumForum:

Earlier today, Karl Rove participated in an online chat session to answer questions about his new book. Viewers were able to tweet questions for Rove to respond to. The chat was fascinating to watch for two reasons. First, it actually gave an impression of what Karl Rove might be like as a real person, and second, it validated how online media can be more constructive and interesting than a cable TV interview in an echo chamber.

The setting was not glamorous, but that may have helped the authenticity of the event. The lighting was terrible and Rove was not wearing stage make-up.

When Rove was asked what it was like to work on Fox News, he replied that “For every seven minutes that I’m on television, I have to do an hour of prep work.” Yet here he was, for an entire hour, answering questions with hardly any prep work at all. Rove had no way to know what sort of questions he would get from the thousands of followers on Twitter.

Rove seemed fairly relaxed, and took questions on a wide range of topics, including some that were not very serious. One questioner asked Rove what reality show he would most want to be on. Rove admitted that, while he was not very aware of the reality TV scene, “I would like to visit one of those ‘real wives of Orange County’ sets, to see if they are real people.” He also noted that the Sci-Fi channel was his favorite source of entertainment, but he didn’t say which shows he watched.

Although some questions were trivial, the strength of the format was that the questions were not part of a predefined topic. This allowed Rove to answer questions that may normally not get asked in the Fox News echo-chamber. When asked straight up “What has Obama done right?” Rove did not miss a beat before praising Obama’s military decisions regarding Iraq and Afghanistan, as well as the reauthorization of the Patriot Act and the strengthening of No Child Left Behind. Rove stated: “We ought to look for things he does right, and support him.”

It’s highly unlikely that Rove would have ever been asked this question on a cable news show. Even if he had, it’s not hard to imagine a left-leaning site (such as the Huffington Post or Media Matters) grabbing the clip, embedding it, and then placing it under the headline (naturally, in all-caps): “WATCH: ROVE PRAISES OBAMA!” This would have left out how Rove then went on to attack Obama’s healthcare plan. When Rove was just chatting with followers on Twitter, there was less attention on him, and he was probably freed up to give more honest answers.

More Morrissey

Kathryn Jean Lopez’s interview with Rove

Spencer Ackerman:

Check this insane idea Rove pursued in advance of the post-2006-election firing of Donald Rumsfeld:

That summer, I looked into whether FedEx CEO Fred Smith, Bush’s original choice for the post in 1999, was now available. He wasn’t.

There but for the grace of God! They went to a FedEx CEO before Robert Gates. I suppose on the other hand he would’ve been better than Rumsfeld… Funny bit: Rove says that getting rid of Rumsfeld — which, of course, the Bush administration ultimately did — would’ve “damaged the military’s faith in Bush as commander in chief.” Actually, you know what really did damage the military’s faith in Bush as commander in chief? Retaining Donald Rumsfeld in the face of failure after failure after failure.

Marc Ambinder:

Mark Halperin and ABC’s The Note helped to build the Rove mythology. We called him “SMIP” — the Smartest Man In Politics. And he was: a walking rolodex and encyclopedia, expostulating about political history and able to drill down deep inside Congressional districts. At one White House meeting with him, he asked why the Poland Spring water bottle he had handed me (yes, I carried Karl Rove’s water, hah hah) was so special. No idea. He proceeded to give me a political history of the company. He courted reporters, knowing whom to respond to and whom to ignore (he never once responded to my e-mails — kr@who.eop.gov didn’t reply), and he had a very well-developed sense about the biases and structure of the traditional media. A serious appraisal of Rove’s political work can be found here.

He was a brilliant campaign strategist. His singular achievement, I think, was in the way he rendered the George W. Bush persona he helped craft as (a) the heir to the Republican throne, the inevitable nominee, and (b) acceptable to evangelicals AND Catholics. It was always an open question whether Rove himself was religious or not. Many detractors today point to Terry Nelson or Ken Mehlman or Karen Hughes as the real forces of genius behind the Bush political brand, but it was Rove who knew everyone, who was plugged in, who used his intergovernmental affairs portfolio to harness the Bush campaign machine to government. Rove had little to do with the national security policies and consequential decisions about Iraq that enemies suspected, but he designed and implemented the successful strategy that played upon Americans’ fear of terrorism to portray the Democratic Party as feckless. (The Dems were feckless — about standing up to Rove.) And Rove knew how to recruit candidates; he knew how to scare (some) members of Congress. He was an enforcer of discipline. And of loyalty: there are many GOP operatives today who owe Rove their thanks for their careers.

I will read his book, and I’m sure I’ll learn much from it. I bet it will be better than critics might think — more personal, certainly.   But for me, it will be less than it might once have been.

And now on to Mitt Romney’s new book

David Frum has a multitude of blog posts on the book. Here’s the list at FrumForum. Frum:

But here are the final thoughts as one puts it down:

No Apology is the work of a highly intelligent, very well informed man with a proven record of successful executive leadership. Romney was much disliked by the other Republican candidates in 2008, but as a pro-McCain friend joked to me: “I have to admit – Mitt Romney would make the greatest Secretary of Transportation ever.”

What kind of president would he be?

Peggy Noonan once wrote of the first President Bush that he saw it as his job to sit behind a big desk and wait for important decisions to be brought to him to be made wisely and well.

Romney has some of that Bush spirit, topped up with an additional measure of technocratic expertise.

Yet it’s never been enough for a president to be a very smart guy who is good at running things. America has lots of smart guys who are good at running things. Why this smart guy of all the possible smart guys?

That’s the question that remains unanswered at the end of No Apology – and maybe the core weakness of the Romney political campaign.

Spencer Ackerman at The Washington Independent:

Romney’s central contention is that there are four “strategies” for global power: the United States’ blend of benevolent, market-based hegemony; the Chinese model of political autocracy and unrestrained industry; Russia’s energy-based path to resurgence; and the “violent jihadists,” an agglutination of scary Muslims. Trouble in paradise, according to Romney, comes from President Obama’s “presupposition” that “America is in a state of inevitable decline.” As a result, Romney must warn the nation to continue to lead the world, lest one or more of these competitors overtake America. “[T]here can be no rational denial of the reality that America is a decidedly good nation,” writes Romney, or perhaps a third grader. “Therefore, it is good for America to be strong.”

So many things are wrong with Romney’s view of an imperiled America that it is difficult to know where to begin. First, the idea that the U.S. is locked in a struggle for global supremacy with “violent jihadists” overlooks the exponential differences in economic resources, military strength, and global appeal between America and an increasingly imperiled band of Waziristan-based acolytes of Osama bin Laden. Al-Qaeda can attack us; it cannot displace the U.S. as a global leader. It manufactures nothing, trades with no one, and has absolutely nothing to offer anyone except like-minded conspiratorial murderers. In order to disguise these glaring asymmetries, Romney has to use an empty term — “the jihadists” — which he cannot rigorously define and with which he means to absorb the vastly different aims and ambitions of rival terrorist groups and separate nations like Iran.

“Violent jihadist groups come in many stripes across a spectrum,” Romney writes, “from Hamas to Hezbollah, from the Muslim Brotherhood to al-Qaeda.” But al-Qaeda exists because it considered the Muslim Brotherhood in Egypt too accommodating of the Egyptian government; Hamas has literally fought al-Qaeda attempts at penetrating the Gaza Strip; and Sunni al-Qaeda released a videotape just this weekend that derides “Rejectionist Shiite Hezbollah.” There is absolutely nothing that unites these organizations in any programmatic manner except Romney’s ignorance, and the expansion of ignorance is insufficient to topple an American superpower.

Daniel Larison:

Ackerman also draws attention to Romney’s bizarre view on how to conduct U.S. diplomacy, which seems to boil down to having one diplomatic attaché for each regional command around the world. Ackerman writes:

Such an individual would “encourage people and politicians to adopt and abide by the principles of liberal democracy,” something that “would be ideal if other allied nations created similar regional positions, and if we coordinated our efforts with theirs.” That’s it for diplomacy, and he doesn’t have an agenda for global development. Why the world will do what America says simply because America says it is something Romney never bothers to consider. High school students at model U.N. conferences have proposed less ludicrous ideas.

Then again, those high school students have probably given the subject more thought. That is what I find most inexplicable about Romney’s decision to spend any time at all trying to fill in gaps in his record on foreign policy that he and everyone else know are there. He seems to think that making enough of the conventional noises on the right issues will persuade doubters and fence-sitters that he really does know what he’s talking about. As a political matter, this is folly. Bush was and remained famously clueless and incurious on foreign policy, but during the 2000 campaign he did not waste time trying to match Gore on national security and foreign policy credentials. He covered his glaring weaknesses by playing to the strengths that he did have. Romney seems to be intent on doing the opposite.

Ackerman also notes that the war in Afghanistan receives no mention in the book. As Romney still cannot make up his mind whether Obama has handled Afghanistan well or poorly, it is no surprise that he has not yet figured out how to demonize Obama for doing something that was promised and which Romney would normally support.

Kathryn Jean Lopez at The Corner:

If you had any doubts about who he is, you’re seeing the real thing now. Watching Mitt Romney on the No Apology tour thus far, he’s talking about what he wants to talk about, what moves him: being a Mr. Fix-It businessman — on the economy, on diplomacy, on health care. He wants to do this because he believes America is great and should and can continue to be. He appreciates — in a firsthand way and in a practical, sociological way — that families are the building block of a great country, and he sees how good policies help them. And that’s what he wants to talk about.

And if a social issue hits his desk — based on his Massachusetts record — he’s going to do what he can to preserve families and life. (And that, by the way, makes a huge difference. We don’t, for instance, have such a person in the White House right now. And it can have a chilling effect: in executive orders, in the courts, on staffing, in health care, etc.) No matter if it doesn’t happen to be what gets him up in the morning — stuff like the opportunity to talk about D.C. gay marriage, for instance.

Speaking of his Massachusetts record: It seems clear that he is not going to apologize for trying to tackle the health-care problem there. Their final plan was clearly imperfect, but it’s more right than what Washington is doing now. He’ll be stubborn in defense of it because governors tackling health-care reform — with the input of the likes of the Heritage Foundation, by the way — is to be encouraged.

And so, on Letterman last night, you didn’t see pizazz or stand-up. You heard dorky jokes — the rapper on the plane broke my hair — and a serious guy. That’s who he is. His CPAC speech this year and his book reflect that. He’s uncomfortable changing his emphases to fit Iowa or anywhere else, and he doesn’t pull it off convincingly when he tries it. If he runs again, don’t expect him to.

Allah Pundit:

Granted, it won’t sell remotely as well as Palin’s book did, but for a guy who sometimes seems lost in the shuffle of outsized conservative personalities, it’s a nice prize.

Romney’s book tour has, so far, attracted pretty large crowds, serving — along with the book sales — to reassure his supporters that, though he may not draw Sarah Palin style hordes, he’s a figure of genuine popular interest. He reportedly attracted more than 1,000 people to a book signing in Naples, Fla. last night.

That’s the good news for Romney fans. The bad news is that Mitt 2.0 is starting to sound like Mitt 1.0 again, which is also surprising since he appeared to have learned his lesson lately by not flip-flopping on RomneyCare in interviews. Click the image below to watch the clip from this morning’s Imus of Mitt claiming he’s never really called himself pro-choice.

[…]

I honestly think the perception of opportunism is a bigger liability to him than RomneyCare, which will, one way or another, be off most people’s radar screens come late 2011. And the worst part is that his record on this subject is so well known to conservatives that there’s no point in being weaselly anymore; just own up to your prior record, say you’ve changed your mind, and let it lie. Fudging the facts only gives people an excuse to make it an issue again.

I’ve always liked him personally, but between stuff like this and “true conservatives” hammering him for endorsing McCain, I get the feeling that he’s being set up as the Charlie Crist of the Republican presidential primary. Although if that leads him to accuse Huckabee of waxing his back, it’ll all be worth it.

Robert Costa at National Review:

Romney does not mean to scare his readers with No Apology, and the book’s tone is far from polemical. But he does intend to be frank: “As long as there are people out there, politicians in particular, that say ‘no worries, no problems, all we have to do is adjust the taxes a little bit and things will get better,’ then I think people are not getting the straight story.”

[…]

The most notable aspect of No Apology is how, for its first third, the book functions as a rumination on the nature of American power. Romney does not see international relations as a web of competing nation-states seeking a balance, but as a competition between four models of geopolitical order — the American model of freedom and democracy, the authoritarian and commerce-heavy Chinese model, the Russian authoritarian energy-based model, and the violent-jihadist model. To win, he writes, America must “be wary and vigilant,” because “by mid-century, our grandchildren may well view Russia with the same concern which we and our parents once did.”

[…]

While Romney is an avowed supporter of military power, he also spends time in No Apology advocating “soft power.” President Obama, he says, has misunderstood that term’s meaning.

“The greatest shortcoming between our ability and our performance in foreign policy comes in our exercise of soft power,” Romney says. “Our inability to sway and influence affairs in the world without military might has been disappointing over the past year. It is extraordinary to me that we have not been able to dissuade Iran, for instance, from its foolish course. Or North Korea, a nation that is puny in its capabilities, from their course. It just underscores our inability to effectively use diplomacy, the sway of our economic vitality, our cultural advantages — we’re just underperforming in those areas. If we were to organize our effort as effectively in the diplomatic sphere as we do in the private sector, we’d have a lot bigger impact.”

While working on his chapters about foreign policy, Romney found that objective measures of power were hard to come by. So, he developed his own, calling it the “Index of Leading Indicators.” He is the first to say that his model is “easy to criticize,” but hopes that his 14-point outline on everything from GDP levels and tax levels to health-care costs and national-security preparedness is a move toward providing some sort of “corrective” for future leaders trying to make sense of America’s place in the world.

“I really wanted to be able to go back 25 years and calculate for each one of the indices, to see what they said then and see what they said today,” Romney says. “To be honest, I found it beyond my capacity as a writer to get all that data. It was really hard to try and go back 25, 50 years and pull out that data. But we can certainly collect it now. If others have other points they’d like to add to the data index, great, but I think it’s a worthwhile exercise to try and actually track the progress that we’re making in preserving our values and shoring up the foundation of our national strength.”

Shawn Healy at Huffington Post:

Romney also writes about education policy and laments the relative decline in America’s competitiveness, embracing standardized testing, merit pay, mechanisms to remove incompetent educators, charter schools, school choice (though he questions its political viability), and distance learning. He reserves terse words for teacher unions, bodies he considers detrimental to requisite educational reforms.

His energy policy relies on alternate energy sources including nuclear power, natural gas, clean coal, even hydrogen. He holds solar and wind power as promising complementary energy sources, but doubts that either represents a panacea. In an early bid for support in the Iowa Caucuses, he touts his support for ethanol subsidies and production. Romney is highly critical of the cap and trade legislation passed by the House last year, and also dismisses the wisdom of a more direct carbon tax. However, he does tout the potential of a carbon tax coupled with reciprocal tax offsets in sales or payroll taxes.

No Apology is a serious work that departs from standard campaign biographies. Indeed, its closest parallel is arguably Obama’s Audacity of Hope. Romney intersperses brief biographical footnotes throughout, but the book’s policy orientation reigns. While he shares anecdotes from his failed 2008 presidential run, he avoids ex post facto analysis, and also refrains from foreshadowing a future run for the nation’s highest office. This means there is no dissection of how his Mormon faith proved an obstacle among conservative Christian voters, or his repositioning on major social issues that led many to conclude that he was a “flip-flopper” of convenience. He does make several references to his faith, and reaffirms his opposition to abortion and same-sex marriage.

The irony is that Romney’s 2008 campaign largely trumpeted social and military issues, peripheral to his core competency as an economic turn-around agent. In No Apology, he takes the opportunity to press the reset button, recasts himself as a more centrist, pragmatic technocrat, and lays the groundwork for a repeat presidential run during the most devastating economic times since the Great Depression.

Paul Waldman at Tapped:

Foreign policy is not really Romney’s wheelhouse, but I suppose he feels the need to check off the “Grrr…I hate terrorists!” box. Look for him to pivot away from foreign policy, particularly since Republicans are having a hard time saying Obama is destroying our standing in the world. The GOP primary will be about the domestic scourge — the socialist tide oozing from the White House — and who can capture the spirit of the aggrieved, bitter, angry white man. Romney could make an argument about why, with his managerial experience and business success, he’d be a better steward of government and the economy than his opponents. But that’s not the ground on which they’re going to be competing.

I imagine Romney looks at his probable opponents with frustration, knowing that he’s far more capable of being president than your Palins and Pawlentys. Though we have yet to locate the depth of pandering to which Mitt won’t sink, his efforts at identity politics just don’t come as naturally as they do to the others. But he’s certainly going to give it the old college try.

Razib Khan at Secular Right:

Here are my odds: I think Mitt Romney has a 1 out of 5 chance of gaining the nomination in 2012 for the presidency if the Democrats do not pass health care legislation. This is, in my estimation, the modal probability among the individuals in the field whom we know of. That is, I think these are better odds than those of any other potential candidate currently on offer (remember, I think there’s a serious chance that a “dark horse” may rise to prominence and win the nomination, so I would still put “someone-we-don’t-know/aren’t talking about” as a higher probability than any of the “top-tier”). If the Democrats do pass the individual mandate, I put Romney’s odds at 1 in 20, and would guess that other 2012 hopefuls such as Tim Pawlenty would now have a greater probability of gaining the nomination (for what it’s worth, I think Sarah Palin’s odds are around 1 in 20 with or without health care).

UPDATE: David Frum in FrumForum on Rove’s book

Michael Chabon Is Paged Several Times

Michael Weingrad in the Jewish Review of Books:

So why don’t Jews write more fantasy literature? And a different, deeper but related question: why are there no works of modern fantasy that are profoundly Jewish in the way that, say, The Lion, the Witch, and the Wardrobe is Christian? Why no Jewish Lewises, and why no Jewish Narnias?

My interest in these questions is partly personal. Tolkien and Lewis loomed large in my childhood and, as I read them to my own children, I wonder what they ought to mean to us as Jews. But my thoughts are also stimulated by the recent publication of some apparent exceptions to the rule: from the United States, The Magicians, a fantasy novel for adults by novelist and critic Lev Grossman, and from Israel, Hagar Yanai’s Ha-mayim she-bein ha-olamot (The Water Between the Worlds), the acclaimed second installment of a projected fantasy trilogy, which, when it is finished, will be the first such trilogy in Hebrew.

Asking these questions is hardly frivolous when fantasy, especially children’s fantasy, has today become a multi-billion dollar industry. In addition to the perennial popularity of Lewis and Tolkien, there is of course the publishing tsunami that is J. K. Rowling, as well as the lesser but still remarkable successes of recent fantasy authors such as Philip Pullman and Jonathan Stroud, all magnified immensely by the films based on their books. Fantasy is big business.

Indeed, one wonders why, amidst all the initiatives to solve the crisis in Jewish continuity, no one has yet proposed commissioning a Jewish fantasy series that might plumb the theological depths like Lewis or at least thrill Jewish preteens with tales of Potterish derring-do. Granted, popularity is rarely cooked to order and religious allegory sometimes backfires (a mother once wrote Lewis that her nine-year-old son had guiltily confessed to loving Aslan the lion more than Jesus). But still, what non-electronic phenomenon has held the attention of more children (and not a few adults) during the last ten years than Rowling’s tales of Hogwarts? And, as Tom Shippey has shown in Tolkien: Author of the Century, the Lord of the Rings trilogy consistently tops readers’ polls of their most beloved books. Why the apparent aversion to producing such well-received books by the People of the Book?

Some readers may have already expressed surprise at my assertion that Jews do not write fantasy literature. Haven’t modern Jewish writers, from Kafka and Bruno Schulz to Isaac Bashevis Singer and Cynthia Ozick, written about ghosts, demons, magic, and metamorphoses? But the supernatural does not itself define fantasy literature, which is a more specific genre. It emerged in Victorian England, and its origins are best understood as one of a number of cultural salvage projects that occurred in an era when modern materialism and Darwinism seemed to drive religious faith from the field. Religion’s capacity for wonder found a haven in fantasy literature.

The experience of wonder, of joy and delight on the part of the reader, has long been recognized as one of the defining characteristics of the genre. This wonder is connected with a world, with a place of magic, strangeness, danger, and charm; and whether it is called Perelandra, Earthsea, Amber, or Oz, this world must be a truly alien place. As Ursula K. Le Guin says: “The point about Elfland is that you are not at home there. It’s not Poughkeepsie.”

To answer the question of why Jews do not write fantasy, we should begin by acknowledging that the conventional trappings of fantasy, with their feudal atmosphere and rootedness in rural Europe, are not especially welcoming to Jews, who were too often at the wrong end of the medieval sword. Ever since the Crusades, Jews have had good reasons to cast doubt upon the romance of knighthood, and this is an obstacle in a genre that takes medieval chivalry as its imaginative ideal.

It is not only that Jews are ambivalent about a return to an imaginary feudal past. It is even more accurate to say that most Jews have been deeply and passionately invested in modernity, and that history, rather than otherworldliness, has been the very ground of the radical and transformative projects of the modern Jewish experience. This goes some way towards explaining the Jewish enthusiasm for science fiction over fantasy (from Asimov to Silverberg to Weinbaum there is no dearth of Jewish science fiction writers). George MacDonald’s Phantastes, thought by some to be the first fantasy novel ever written, begins with a long epigraph from Novalis in which he celebrates the redemptive counter-logic of the fairytale: “A fairytale [Märchen] is like a vision without rational connections, a harmonious whole . . . opposed throughout to the world of rational truth.” Contrast Herzl’s dictum that “If you will it, it is no Märchen.” The impulse in the latter is that of science fiction—the proposal of what might be—and indeed Herzl’s one novel Old-New Land was a utopian fiction about the future State of Israel.

Joe Carter at First Things

Will at The League

Samuel Goldman at PomoCon:

The article has taken heat from fans of the many Jewish fantasy authors. But most of them have missed the point. Weingrad isn’t asking whether Jews write fantasy or enjoy reading it. Instead, he’s concerned with why there aren’t any compelling fantasy “worlds” that incorporate Jewish folklore and tropes the way Narnia and Tolkien’s Middle-earth develop Christian ones.

But is that really such a puzzle? In the first place, the landscape of most fantasy novels is essentially the numinous forest of the Teutonic Dark Ages. It is not so much a Christian world as a world on the cusp of Christianity: a pagan Götterdämmerung.

Jews can, of course, appropriate this setting for literary purposes. But I don’t think it has the same imaginative gravity that it does for Christians. Similarly, the warrior values that animate a lot of fantasy are not traditionally Jewish. One could, I suppose, write a story around a learned rabbi–but surely that would not be as interesting as one focused on knights, errant wizards, and chieftains of mounted hordes. Finally, as Weingrad notes, there’s no fantasy without evil. And Jewish teaching on this subject is extremely ambiguous; unlike some Christian doctrines, Judaism tends to deny evil as a force independent of and opposed to God.

For these reasons, Jews drawn to speculative writing may have an affinity for science fiction over fantasy. The technological rationalism and optimism of much science fiction is also, in a way, more American–and America has offered the broadest field for Jewish literary efforts since World War II.

But Weingrad neglects a “fantasy” genre founded by Jews, and arguably shaped by Jewish preoccupations. That’s the superhero comic book invented in the 1930s by the likes of Robert Kahn–Bob Kane to you. There could never be a Jewish Narnia that would preserve the features many readers find compelling (I confess that I always vastly preferred Tolkien, whose work is richer and less didactic). But the universes of Superman, Batman, and the rest are worthy counterparts.

Spencer Ackerman:

His name is Michael Chabon, you fool. Or Jonathan Lethem. Or, as my friend Sam Goldman insightfully observes, perhaps you ought to pick up a superhero comic. Practically every iconic superhero was created by Jews. Wrap your mind around two Jews, Jerry Siegel and Joe Shuster, creating an invincible hero called Superman in 1932.

Probably more accurately, the Jewish CS Lewises are named Stan “Stanley Lieber” Lee and Jack “Jacob Kurtzberg” Kirby. Weingrad is asking the wrong question if he wants a one-to-one transposal of the Christian Lewis to Jewish creators, who are less likely to create direct parables because an impulse to convert doesn’t exist in Judaism, but questions of justice, power and responsibility — stuff that concerns Jews, I hear — are central to the Marvel Universe. Back when Jews still lived in urban enclaves, Lee and Kirby created the Thing, the first Jewish superhero (and probably the first Jew in space), to bring the ersatz-Lower East Side values of “Yancy Street” to the gentile masses and give the Yancy Street kids a relatable hero to look up to — the world scorned him for his appearance, but he was brave and strong and moral and had more heart than anyone. I don’t need to explain the civil rights allegory of the X-Men, but you could make quite the engaging Haggadah out of the “Days of Future Past” storyline. If it’s young-adult fiction you want, practically nothing will get kids into the habit of reading, and reading passionately, better than comic books.

Farah Mendlesohn:

Don’cha just love utter rubbish? Simply off the top of my head:
Robert Silverberg; Esther Friesner; Peter Davison; Michael Burstein; Neil Gaiman; Marge Piercy (great-granddaughter of a Rabbi); Peter Beagle; Charlie Stross and Michael Chabon (by pure coincidence I have been reading Gentlemen of the Road, set in the ninth-century kingdom of the Khazars and, as he says in a postscript, “Jews with Swords”, all day today).

I am sure others will add more.

Abigail Nussbaum:

Farah Mendlesohn pours out her wrath on Michael Weingrad’s article “Why There is No Jewish Narnia” in the inaugural issue of the Jewish Review of Books, and on its assertion that he “cannot think of a single major fantasy writer who is Jewish, and there are only a handful of minor ones of any note. To no other field of modern literature have Jews contributed so little.”  Though ostensibly a review of Lev Grossman’s The Magicians and Hagar Yanai’s HaMaim SheBeyn HaOlamot (The Water Between the Worlds), the second volume in an Israeli YA fantasy trilogy, the essay treats its two subjects only briefly, mostly using them as a backdrop to Weingrad’s theory that Judaism is a far less hospitable environment than Christianity for the development of a fantastic tradition and of “all the elements necessary for classic fantasy—magic, myth, dualism, demonic forces, strange worlds, and so forth.”  Farah responds by listing a dozen Jewish fantasy authors off the top of her head, and commenters to her post contribute quite a few more, but though it seems likely, reading between the lines of Weingrad’s article, that these authors are either wholly unfamiliar to him or that he would be surprised to learn of their Jewishness, I’m not sure that this listing actually addresses the point Weingrad is trying to make.

It seems clear to me that the essay’s title is meant in earnest, and that Weingrad is specifically hunting for Jewish authors of the same caliber, fame, and influence over the genre as Tolkien and Lewis, of which there are indeed none.  More importantly, when Weingrad calls for a Jewish Narnia, he is calling for “works of modern fantasy that are profoundly Jewish in the way that, say, The Lion, the Witch, and the Wardrobe is Christian”.  As Jo Walton says in the comments to Farah’s post, “I think it’s more useful to ask what Jewish fantasy stories there are than what Jewish fantasy writers,” and again the answer would be that there are precious few.  The best-regarded, most famous, and most influential Jewish fantasy writer working today is probably Neil Gaiman, but Jewish elements in his fiction are few and far between, and the folklore and myths he draws on in his work are mostly Christian or pagan, with some forays into various Eastern traditions.  Which is understandable when one considers that Weingrad’s argument about the relative paucity of the Jewish fantastic tradition is undeniable.  It’s a religion and a culture that is not only less rooted in and concerned with the numinous than Christianity is–the afterlife, for example, is treated in Judaism almost as an afterthought, and receives very little attention in the halacha or in Jewish scholarship–but whose folk tales and traditions seem to have almost no fantastic component.  There’s a reason that the golem and the dybbuk get so much play whenever the Jewish fantastic is mentioned: there’s not much else out there, and very little that is common currency even among Jews.

None of this is to say that I don’t sympathize with Farah’s exasperation with “Why There is No Jewish Narnia.”  Weingrad’s essay is riddled with so many staggering assumptions, sweeping generalizations, and plain untruths that even its most self-evident arguments come to seem suspect.  Chief among these problems: though he deftly analyzes the philosophical differences that render Christianity so much more suitable than Judaism to the Tolkienian mode of fantasy (Christianity is rooted in a dualism between good and evil, whereas Judaism balks at placing any power on an equal footing with, or even in opposition to, God), Weingrad touches only lightly on the real-world factors that discouraged Jews from exploring the fictional avenues that Tolkien and Lewis did.  To put it bluntly, there is no way that a Jewish writer working in the early decades of the twentieth century could have produced The Lord of the Rings, a work steeped in a yearning for a lost pastoral world that Jews, who have for various reasons tended to congregate in urban and commercial centers, would have had little or no experience of.  Similarly, the naked didacticism and unabashed proselytizing of the Narnia books are entirely antithetical to Judaism, an anti-missionary religion.  One might as well ask why there is no Jewish Divine Comedy.

Ross Douthat:

Part and parcel of Judaism’s resistance to explorations in the realm of faerie, he goes on, is a discomfort with the semi-dualism that’s necessary to classic fantasy — the idea of a Devil figure, in other words, who seems capable of actually conquering the mortal world (be it Narnia or Middle-Earth, Fionavar or Osten Ard) and binding it permanently in darkness. As Weingrad notes, correctly I think: “Christianity offers a far more developed tradition of evil as a supernatural, external, autonomous force than does Judaism, whose Satan (or Samael or Lilith or Ashmedai) are limited in their power and usually rather obedient to God’s wishes.” Tolkien’s Sauron makes sense in a Christian universe; he makes less sense in a Jewish one.

But once you add up these insights, they jostle uneasily with Weingrad’s professed desire for a Jewish Tolkien, or a Jewish Lewis. What he seems to have demonstrated is that modern fantasy depends on Christianity, or at least a Christian-pagan synthesis, for its forms, conventions, and traditions. This suggests that you could write a novel that embodies a kind of Jewish critique of fantasy — in much the same way that China Miéville’s novels are a kind of Marxist critique of Tolkien, Marion Zimmer Bradley’s “Mists of Avalon” is a feminist critique of Arthurian-based fantasy, Philip Pullman’s “His Dark Materials” trilogy is an atheist’s critique of C.S. Lewis, and so on. (And indeed, Weingrad’s essay reads Lev Grossman’s new novel “The Magicians” as a kind of crypto-Jewish critique of Narnia and/or Harry Potter.) But the genre itself will remain irreducibly Christian, and a truly Judaic fantasy would have to belong to, or invent, a different genre altogether.

UPDATE: Rod Dreher

Jonah Goldberg at The Corner

Ilya Somin

Charlie Jane Anders at IO9

Razib Khan at Science Blogs

UPDATE #2: E.D. Kain at The League

6 Comments

Filed under Books, Religion