Tag Archives: Razib Khan

And The Silver Medal Goes To China… Or Does It?

Heather Horn at The Atlantic with the round-up

Ryan Avent at DiA at The Economist:

CHINA has, at long last, surpassed Japan in terms of nominal GDP, making the Chinese economy the world’s second largest. Second quarter output in China came in at $1.337 trillion, to Japan’s $1.288 trillion (Japan’s output was larger in the first quarter; for comparison, America’s second quarter nominal output was $3.522 trillion). The shift is sure to be widely discussed and widely misinterpreted. There are a few key things to mention.

First, while Chinese growth has been truly impressive in recent decades, the rapid overtaking of the Japanese economy also reflects years of disappointing growth there. This story is as much about Japan’s travails (and the risk to other rich economies facing a descent into Japanese-style stagnation) as it is China’s boom.

Second, China remains a very poor country in per capita terms. It uses over four times as many citizens as America to produce less than half America’s output. That’s a bit misleading—urban productivity in China doesn’t lag America by quite as much but is offset by the limited growth contribution of China’s hundreds of millions of rural poor. Still, the total output figures encourage observers to vastly overstate the developmental level of the Chinese economy.

Joshua Keating at Foreign Policy:

The world economy reached a major milestone Monday when China officially became the world’s second-largest economy, displacing Japan, which has held the title for more than four decades. The recognition of China’s new status came after the Japanese government reported that, after a quarter of slow economic growth, the country’s quarterly gross domestic product (GDP) was estimated to be around $1.28 trillion, slightly below China’s $1.33 trillion. Do all countries use the same method for estimating GDP?

They’re supposed to. The System of National Accounts (SNA), a set of guidelines developed jointly by the United Nations, the European Commission, the International Monetary Fund (IMF), the Organization for Economic Co-operation and Development, and the World Bank, specifies the methods by which countries measure the size of their economies.

There are two main methods for estimating GDP. One involves looking at production. This includes the value of the goods produced by all the firms in the country, the added value of government work projects, and — particularly in developing countries — the value of goods produced for personal consumption, like the crops grown by subsistence farmers. Not all wealth counts toward GDP. For instance, if you build a new house, that’s considered value added to the economy.  If a pre-existing house increases in value, the owner may be better off, but the country’s GDP is unaffected. Of course, companies often have a vested interest in exaggerating their profits, so reliable figures can sometimes be tough to calculate.

The other method of calculating GDP involves measuring total consumption of products by a country’s population. Since it relies mostly on household surveys, this method also has flaws. People tend to underreport the amount they spend on alcohol and cigarettes, for instance. But the two measures should come up with close to the same number, and when the results from the two approaches are compiled, they should give you a pretty good idea of the size of a country’s economy.

[…]

But for most countries, there’s no international legal authority to ensure that statistical offices are following the SNA guidelines, and international economists largely have to rely on self-reported numbers. While no one’s disputing China’s new status, the country has often been suspected of cooking its books. Although China is not a member of the OECD, it does cooperate with the organization in producing statistics according to the SNA guidelines.

Those guidelines are updated every few years. The most recent edition, which was made in 2008 and has so far only been implemented by Australia, was revised so that a firm’s investments in research and development are considered added value. This means that as the new standard is implemented worldwide over the next four years or so, many countries will see their GDP numbers increase by as much as 1 percent. That’s one way to stimulate growth.
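Keating’s two estimation approaches can be sketched with a toy two-firm economy (the firms and figures below are invented for illustration): the production approach sums each firm’s value added, the expenditure approach counts only spending on final goods, and the two totals agree.

```python
# Toy two-firm economy: a farmer grows wheat from nothing and sells it
# to a baker; the baker turns it into bread and sells it to households.
# All numbers are invented for illustration.
wheat_sales = 40      # farmer's revenue (no intermediate inputs)
bread_sales = 100     # baker's revenue from final consumers

# Production approach: sum each firm's value added
# (revenue minus the intermediate goods it bought).
farmer_value_added = wheat_sales - 0
baker_value_added = bread_sales - wheat_sales
gdp_production = farmer_value_added + baker_value_added

# Expenditure approach: sum spending on *final* goods only.
# The wheat is an intermediate input, so it is excluded.
gdp_expenditure = bread_sales

print(gdp_production)   # 100
print(gdp_expenditure)  # 100
```

This also shows why the quoted house example works the way it does: building a new house is new production and counts under either approach, while the resale of an existing house transfers an asset without adding value, so neither approach counts it.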

Joe Weisenthal at Business Insider:

Let’s just put some of today’s headlines about Japan’s GDP being surpassed by Chinese GDP in perspective.

In the quarter, Japan had economic output of $1.28 trillion, or $10,085 per capita, based on a population of 127 million.

China?

It had economic output of $1.337 trillion for the quarter, but a population of about 1.3 billion, so per-capita output of… about $1,000, roughly one-tenth as large.

Let us know when China passes Albania.
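Weisenthal’s back-of-the-envelope comparison can be reproduced from the figures quoted above (both population numbers are rounded, so the per-capita results are approximate):

```python
# Quarterly per-capita output from the GDP and population figures
# quoted above; populations are rough, so treat results as rough.
japan_gdp_q2 = 1.28e12   # Japan's Q2 output, dollars
china_gdp_q2 = 1.337e12  # China's Q2 output, dollars
japan_pop = 127e6        # ~127 million people
china_pop = 1.3e9        # ~1.3 billion people

japan_per_capita = japan_gdp_q2 / japan_pop   # ~$10,079 for the quarter
china_per_capita = china_gdp_q2 / china_pop   # ~$1,028 for the quarter

# Japan's quarterly output per person is roughly ten times China's.
print(round(japan_per_capita / china_per_capita, 1))  # 9.8
```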

Derek Scissors at Heritage:

It’s true that simple GDP does matter. The increasing size of China’s economy means the entire world is now affected by its voracious demand for oil, iron ore, and other commodities, as well as its low-cost supply of consumer electronics, clothing, and other goods.

But for successful economic development, what matters far more is the wealth of individuals and families. Japanese economic weakness shows not in its still impressive 3rd place in world GDP but in its roughly 40th place on measures of personal income. Japan’s economy was once thought better managed and better performing than America’s, yet the average citizen of Japan is now poorer than the average citizen of Mississippi. American citizens are noticeably richer than citizens of most other developed countries, such as those in the EU. But Japan, in particular, is moving backward.

In contrast to Japan’s 20 years of weakness, there has been stunning growth in Chinese GDP per capita for 30 years. Yet China is still a developing economy. Chinese GDP per capita, even adjusted for purchasing power, is about 15 percent of the U.S. level. Further, GDP per capita actually exaggerates China’s performance.

The PRC’s incomplete data revisions undermine comparisons but, from the middle of 2000 to the middle of 2010, GDP per capita increased by more than 9500 yuan or, at present exchange rates, another $2800 in annual income. However, urban disposable income increased less than 6800 yuan, or about $2000 in annual income. And rural income increased less than 2000 yuan, or $600 in annual income.

Razib Khan at Discover

Robert Reich at Wall Street Pit:

Think of China as a giant production machine that’s growing 10 percent a year (this year, somewhat less). The machine sucks in more and more raw materials and components from the rest of the world – it’s now the world’s #1 buyer of iron ore and copper, and close to the #1 importer of crude oil – and spews out a growing mountain of stuff, along with huge environmental problems.

But because the Chinese consume a smaller and smaller proportion of this stuff, it has to be exported to consumers elsewhere (Europe, North America, Japan) to keep the Chinese working. Much of the money China earns by selling it around the world is reinvested in factories, roads, trains, and power plants that enlarge China’s capacity to produce far more. Another big portion is lent to or invested in the rest of the world (helping to finance America’s budget deficit at very low cost).

But this can’t go on. China’s workers won’t allow it. Workers in other nations who are losing their jobs won’t allow it, either.

The answer is not simply more labor agitation in China or an upward revaluation of China’s currency relative to the dollar. The problem is bigger. All over the world, we’re witnessing a growing gap between production and consumption, while the environment continues to degrade. The Chinese machine is fast heading for a breakdown only because it’s growing fastest.

Filed under China

Another Week, Another Ross Douthat Column

Ross Douthat at NYT:

There’s an America where it doesn’t matter what language you speak, what god you worship, or how deep your New World roots run. An America where allegiance to the Constitution trumps ethnic differences, language barriers and religious divides. An America where the newest arrival to our shores is no less American than the ever-so-great granddaughter of the Pilgrims.

But there’s another America as well, one that understands itself as a distinctive culture, rather than just a set of political propositions. This America speaks English, not Spanish or Chinese or Arabic. It looks back to a particular religious heritage: Protestantism originally, and then a Judeo-Christian consensus that accommodated Jews and Catholics as well. It draws its social norms from the mores of the Anglo-Saxon diaspora — and it expects new arrivals to assimilate themselves to these norms, and quickly.

These two understandings of America, one constitutional and one cultural, have been in tension throughout our history. And they’re in tension again this summer, in the controversy over the Islamic mosque and cultural center scheduled to go up two blocks from ground zero.

The first America, not surprisingly, views the project as the consummate expression of our nation’s high ideals. “This is America,” President Obama intoned last week, “and our commitment to religious freedom must be unshakeable.” The construction of the mosque, Mayor Michael Bloomberg told New Yorkers, is as important a test of the principle of religious freedom “as we may see in our lifetimes.”

The second America begs to differ. It sees the project as an affront to the memory of 9/11, and a sign of disrespect for the values of a country where Islam has only recently become part of the public consciousness. And beneath these concerns lurks the darker suspicion that Islam in any form may be incompatible with the American way of life.

This is typical of how these debates usually play out. The first America tends to make the finer-sounding speeches, and the second America often strikes cruder, more xenophobic notes. The first America welcomed the poor, the tired, the huddled masses; the second America demanded that they change their names and drop their native languages, and often threw up hurdles to stop them coming altogether. The first America celebrated religious liberty; the second America persecuted Mormons and discriminated against Catholics.

But both understandings of this country have real wisdom to offer, and both have been necessary to the American experiment’s success. During the great waves of 19th-century immigration, the insistence that new arrivals adapt to Anglo-Saxon culture — and the threat of discrimination if they didn’t — was crucial to their swift assimilation. The post-1920s immigration restrictions were draconian in many ways, but they created time for persistent ethnic divisions to melt into a general unhyphenated Americanism.

The same was true in religion. The steady pressure to conform to American norms, exerted through fair means and foul, eventually persuaded the Mormons to abandon polygamy, smoothing their assimilation into the American mainstream. Nativist concerns about Catholicism’s illiberal tendencies inspired American Catholics to prod their church toward a recognition of the virtues of democracy, making it possible for generations of immigrants to feel unambiguously Catholic and American.

So it is today with Islam. The first America is correct to insist on Muslims’ absolute right to build and worship where they wish. But the second America is right to press for something more from Muslim Americans — particularly from figures like Feisal Abdul Rauf, the imam behind the mosque — than simple protestations of good faith.

Too often, American Muslim institutions have turned out to be entangled with ideas and groups that most Americans rightly consider beyond the pale. Too often, American Muslim leaders strike ambiguous notes when asked to disassociate themselves completely from illiberal causes.

Jennifer Rubin at Commentary:

Granted, the “conservative spot” on the Gray Lady’s op-ed pages comes with plenty of caveats and handcuffs. So if a conservative columnist is going to last more than a year, he will have to suppress his harshest impulses toward the left and a great deal of his critical faculties. The result is likely to be condescending columns like today’s by Ross Douthat.

He posits two Americas: “The first America tends to make the finer-sounding speeches, and the second America often strikes cruder, more xenophobic notes.” The first cares about the Constitution, and the second is composed of a bunch of racist rubes, it seems. “The first America celebrated religious liberty; the second America persecuted Mormons and discriminated against Catholics.” Yes, you can guess which are the opponents of the Ground Zero mosque. (I was wondering if he was going to write, “The first America helped little old ladies across the street; the second America drowned puppies.”)

I assume that this is what one has to do to keep one’s piece of turf next to such intellectual luminaries as Maureen Dowd, but it’s really the worst straw-man sort of argument since, well, the last time Obama spoke. But he’s not done: “The first America is correct to insist on Muslims’ absolute right to build and worship where they wish. But the second America is right to press for something more from Muslim Americans — particularly from figures like Feisal Abdul Rauf, the imam behind the mosque — than simple protestations of good faith.” OK, on behalf of the rubes in Second America, enough!

Second America — that’s 68% of us — recognizes (and we’ve said it over and over again) that there may be little we can do legally (other than exercise eminent domain) to halt the Ground Zero mosque, but that doesn’t suspend our powers of judgment and moral persuasion. Those who oppose the mosque are not bigots or constitutional ruffians. They merely believe that our president shouldn’t be cheerleading the desecration of “hallowed ground” (“first America’s” term, articulated by Obama) or averting our eyes from the funding sources of the imam’s planned fortress.

E.D. Kain at Balloon Juice:

Leaving aside the obvious fact that Muslims have actually been migrating here for many years and sprouting up second and third and seventh generations in the United States, this use of a specific instance – the Cordoba Center – to segue into a larger framework in which American Muslims writ large are not doing enough to assimilate is, to put it bluntly, nonsense. (And are no American Muslims a part of Second America? Then they must all be part of First America…unless we’re working on creating a Third America. That’s possible, too.)

He goes on:

Too often, American Muslim institutions have turned out to be entangled with ideas and groups that most Americans rightly consider beyond the pale. Too often, American Muslim leaders strike ambiguous notes when asked to disassociate themselves completely from illiberal causes.

I wonder what exactly qualifies as ‘too often’? What percentage of Muslim institutions meet these criteria? Furthermore, what bearing does this have on the question of the Ground Zero Mosque?

For Muslim Americans to integrate fully into our national life, they’ll need leaders who don’t describe America as “an accessory to the crime” of 9/11 (as Rauf did shortly after the 2001 attacks), or duck questions about whether groups like Hamas count as terrorist organizations (as Rauf did in a radio interview in June). And they’ll need leaders whose antennas are sensitive enough to recognize that the quest for inter-religious dialogue is ill served by throwing up a high-profile mosque two blocks from the site of a mass murder committed in the name of Islam.

They’ll need leaders, in other words, who understand that while the ideals of the first America protect the e pluribus, it’s the demands the second America makes of new arrivals that help create the unum.

Leaders like this guy, perhaps? I mean, if we’re going to just lump everyone of a particular faith together and cherry-pick the ‘leaders’ who we feel best represent them, why not pick the loudest of the bunch?

And if we can identify the group’s leaders, then we can pigeonhole the entire population’s motives. We can attribute the words of the few to the motives of the many. We can rile up “second America” against the fearful Other. And we can do it all quite nicely by calling into question the sincerity of the group’s desire to properly integrate into mainstream culture. It’s their fault, after all, that they haven’t made it all the way. Why would any real American want to build a mosque so near ground zero?

Jamelle Bouie at Tapped:

But this is bad history; the nativists of 19th-century America weren’t much interested in having “new arrivals adapt to Anglo-Saxon culture”; rather, the nativists of mid-19th-century America wanted to keep immigrants off of American shores. In its 1856 platform, the American Party — otherwise known as the “Know-Nothing Party” — pushed for the mass expulsion of poor immigrants, and declared that “Americans must rule America, and to this end native-born citizens should be selected for all State, Federal, and municipal offices of government employment, in preference to all others.”

Likewise, nativism in the late 19th century was preoccupied with keeping foreigners out of the United States. Here is a passage from the constitution of the Immigration Restriction League, formed in 1894 by a handful of Harvard graduates:

The objects of this League shall be to advocate and work for further judicious restriction or stricter regulation of immigration, to issue documents and circulars, solicit facts and information on that subject, hold public meetings, and to arouse public opinion to the necessity of a further exclusion of elements undesirable for citizenship or injurious to our national character.

This seems completely obvious, but nativists and xenophobes have never been interested in seeing immigrants join our nation and culture as Americans. Our modern-day nativists — as represented by the previously mentioned Tea Party activists — see “undesirable” immigrants as pests to be dealt with, not potential Americans:

“Instead of finding bugs in our beds, we’re finding home invaders,” said Tony Venuti, a Tucson radio host who attached a huge sign to the fence that told immigrants to head to Los Angeles, where they will be more welcome, and even offered directions for getting there.

Contra Douthat, nativists and xenophobes have never been integral to assimilating immigrants. That distinction goes to the assimilationists of American life who understood — and understand — that “American-ness” can be learned and adopted. Different assimilationists had different approaches to bringing immigrants into American life, but they were united by a common view of America as an open society.

Jonathan Bernstein:

Jamelle Bouie has a great post up this morning about assimilation and immigration, riffing off of Ross Douthat’s column.  Douthat’s claim is that the America of high-minded ideals is at odds with cultural protectionism, and while the latter is bigoted and small-minded, it also winds up having the virtue of forcing newer immigrants and minorities in general to conform to American cultural norms (including those high-minded ideals).  I think Bouie is a bit harsher than necessary to Douthat, who isn’t exactly warm towards those who he says use discrimination and persecution to get their way.  But I also think Bouie is correct: Douthat’s claim that it’s the nativists who have indirectly encouraged assimilation through intimidation may not be entirely wrong, but it’s a somewhat strained reading of history — the nativists didn’t want assimilation, they wanted (and often got) exclusion.  And Bouie is right that Douthat’s history ignores that those in Douthat’s “first” America (the one with the high-minded ideals) have almost always supported and worked to achieve assimilation.

But I think both of them are missing the main actors here: the immigrants themselves, who in almost all cases have been pretty desperate to assimilate as quickly as possible.  That was true of the great immigration waves in the late 19th and early 20th centuries, and it’s true of the great immigration wave now.  Of course, each group has had various cultural bits and pieces they keep with them (bits and pieces which generally are gobbled up by the larger American culture, so that everyone eats tacos and bagels), and each group has minorities within their minority who resist assimilation, keeping the old language and practices alive (although often radically altered, sometimes without anyone realizing it) even as most of the community drifts — runs — towards America.

Matt Welch at Reason:

Such John Edwards-style reductionism inevitably sets off alarm bells, but this paragraph in particular smelled funny to me:

[B]oth understandings of this country have real wisdom to offer, and both have been necessary to the American experiment’s success. During the great waves of 19th-century immigration, the insistence that new arrivals adapt to Anglo-Saxon culture — and the threat of discrimination if they didn’t — was crucial to their swift assimilation. The post-1920s immigration restrictions were draconian in many ways, but they created time for persistent ethnic divisions to melt into a general unhyphenated Americanism.

Is this true? To find out I asked an old college newspaper buddy of mine, the immigration historian Christina Ziegler-McPherson, who is author of a recent book called Americanization in the States: Immigrant Social Welfare Policy, Citizenship, and National Identity in the United States, 1908-1929. She e-mailed me back 2,500 words; thought I’d pass along a few of them:

Douthat is full of crap in several ways:

1. […] [F]or much of the 19th century, except in the big cities like New York, immigrants and natives had little contact and less competition with one another, because the country was growing and was so physically big. […]

This is not to discount the nativism (i.e. the Know Nothing party) of the mid-1850s but that was a city phenomenon and was driven mostly by anti-Catholicism inspired by famine Irish immigration. Some people didn’t like “clannish” Germans but as long as they weren’t Catholic, no one complained as much. Nativism in the mid-19th century was basically an anti-Irish phenomenon. AND, in some ways, it wasn’t anti-immigrant, just anti-Catholic, and sought to slow down the integration of immigrants into the polity (i.e., by requiring a much longer period of residency before naturalization, and this was as much an elite anti-machine politics idea as anti-Irish or anti-immigrant).

Also, there was no real “national” culture until after the Civil War (and this developed gradually with industrialism and the spread of a mass media and eventually mass consumption) so there could be no “insistence” on immigrants assimilating. Who the heck is he talking about? […]

2. Nativism, and some aspects of the Americanization movement of the WWI period (especially the more coercive stuff) has always had the effect of making immigrants cling more tightly to their cultures, their languages, traditions. This is both basic psychology and is historically accurate and can be documented for many groups.

Any attack on religion (which, frankly, is what anti-Muslim talk is; it’s not anti-ethnic, because there’s no ethnic group called “Muslim”) encourages more orthodoxy, not less, and is totally counterproductive, because of the 1st Amendment. The American Catholic Church became the authoritarian institution that it was in the 19th and early 20th centuries in large part because of Anglo-American Protestants insisting that Protestantism and Americanism were synonymous and attacking Irish Catholics. […]

[T]he harder you push for “assimilation”…the more you get orthodoxy, extremism, alienation.

3. Post-WWI restrictions were separate from the Americanization movement and were not designed to encourage assimilation (although a few people did realize that assimilation might happen if immigrants were cut off from rejuvenating contact with their home cultures). The 1924 and 1929 restrictions were explicitly racist (and I mean that in the 19th century biological sense, as in, we don’t want our blood being contaminated by alien blood which is different and is incompatible with ours.)…Eugenics heavily influenced the 1924 and 1929 acts and eugenicists were the statisticians who determined the specific quotas for each group. […]

The problem of course with Douthat, besides that he has no idea about what he’s talking about, is he’s so vague. When in the 19th century? Which groups? Where? What created these “persistent ethnic divisions”? Are these institutional, cultural, created by policy? Who the heck can tell?

Alex Knapp:

First of all, you’ll note that Little Italys and Chinatowns still exist all over the country. There are neighborhoods on the East Coast where you’re lost if you don’t speak Italian, and neighborhoods on the West Coast where you’re lost if you don’t speak Chinese. There are people living in these neighborhoods who are still hostile to outsiders, and lots of different ethnic neighborhoods share this characteristic.

And it’s important to realize that these ethnic enclaves, with their insularity and hostility to integration, not only failed to “swiftly assimilate”, they failed to swiftly assimilate because of discrimination. Because of the law and because of cultural prejudice, Italians, Chinese, Irish, Slavs, Jews and other immigrants were very often not hired by their neighbors. As a consequence, Italians hired Italians, Chinese hired Chinese, Irish hired Irish, etc. Immigrant neighborhoods were often either ignored by the police or shaken down by them for protection money. In either case, in a desperate desire for order, immigrants turned to organized crime for protection from criminals or the police. While the Mafiosi were brutal, greedy and ruthless, they also kept order on the streets and took care of widows, etc. (You can actually see a similar pattern in Palestine, where Hamas was voted into power not only as a reaction against Israel and the PLO, but also because while Arafat’s government was growing rich and corrupt on foreign aid payments, Hamas was building schools and medical clinics for the destitute.)

Indeed, the combination of the rise of organized crime and the hostility from “second America” more likely delayed the integration of immigrant communities. That integration really didn’t start to happen until various immigrant populations simply became numerous enough to vote their preferred candidates into office, as in the experience of the Irish in Boston.

Another example of Douthat’s willful glossing over of history comes in his discussion of the Mormon experience:

The same was true in religion. The steady pressure to conform to American norms, exerted through fair means and foul, eventually persuaded the Mormons to abandon polygamy, smoothing their assimilation into the American mainstream.

This is a great example of how to write something that’s factually true but rhetorically false. Given his tone, you’d think that Mormon families were getting some glares and “tsk tsks” at PTA meetings. The reality, of course, is that Mormons were violently persecuted, first by their neighbors in Illinois and Missouri, and then by the U.S. Army after they moved to Utah. The Mormons weren’t “persuaded” to abandon polygamy; they were forced to after the United States Congress disincorporated the Church and seized all Mormon assets. Mormon leaders fought the Act in the courts, but the Supreme Court ultimately upheld Congress’s Act. It was only then that the Mormons capitulated to the government. And it was a long time before Mormons got over that and became more assimilated into everyday American life. And even then, there was considerable hostility in some quarters of the Republican Party against Mitt Romney because of his religion.

I definitely agree that, as a culture, Americans should encourage the integration of immigrant populations into everyday life. But that integration isn’t built on fear and peer pressure. It’s built on tolerance, a shared ideal of freedom, and the embrace of new cultures into the rich tapestry of American life. Integration comes from delicious foods at Indian buffets and the required learning about American government before an immigrant takes his oath of citizenship. It certainly doesn’t come from protesting mosques or putting up No Irish Need Apply signs on the door of your business.

UPDATE: Conor Friedersdorf at Andrew Sullivan’s place

Douthat responds to Friedersdorf

Razib Khan at Secular Right

Filed under History, Immigration, Mainstream, New Media, Religion

“Don’t Trust One-Offs”

Jim Manzi in City Journal:

[…]

Another way of putting the problem is that we have no reliable way to measure counterfactuals—that is, to know what would have happened had we not executed some policy—because so many other factors influence the outcome. This seemingly narrow problem is central to our continuing inability to transform social sciences into actual sciences. Unlike physics or biology, the social sciences have not demonstrated the capacity to produce a substantial body of useful, nonobvious, and reliable predictive rules about what they study—that is, human social behavior, including the impact of proposed government programs.

The missing ingredient is controlled experimentation, which is what allows science positively to settle certain kinds of debates. How do we know that our physical theories concerning the wing are true? In the end, not because of equations on blackboards or compelling speeches by famous physicists but because airplanes stay up. Social scientists may make claims as fascinating and counterintuitive as the proposition that a heavy piece of machinery can fly, but these claims are frequently untested by experiment, which means that debates like the one in 2009 will never be settled. For decades to come, we will continue to be lectured by what are, in effect, Keynesian and non-Keynesian economists.

Over many decades, social science has groped toward the goal of applying the experimental method to evaluate its theories for social improvement. Recent developments have made this much more practical, and the experimental revolution is finally reaching social science. The most fundamental lesson that emerges from such experimentation to date is that our scientific ignorance of the human condition remains profound. Despite confidently asserted empirical analysis, persuasive rhetoric, and claims to expertise, very few social-program interventions can be shown in controlled experiments to create real improvement in outcomes of interest.

[…]

After reviewing experiments not just in criminology but also in welfare-program design, education, and other fields, I propose that three lessons emerge consistently from them.

First, few programs can be shown to work in properly randomized and replicated trials. Despite complex and impressive-sounding empirical arguments by advocates and analysts, we should be very skeptical of claims for the effectiveness of new, counterintuitive programs and policies, and we should be reluctant to trump the trial-and-error process of social evolution in matters of economics or social policy.

Second, within this universe of programs that are far more likely to fail than succeed, programs that try to change people are even more likely to fail than those that try to change incentives. A litany of program ideas designed to push welfare recipients into the workforce failed when tested in those randomized experiments of the welfare-reform era; only adding mandatory work requirements succeeded in moving people from welfare to work in a humane fashion. And mandatory work-requirement programs that emphasize just getting a job are far more effective than those that emphasize skills-building. Similarly, the list of failed attempts to change people to make them less likely to commit crimes is almost endless—prisoner counseling, transitional aid to prisoners, intensive probation, juvenile boot camps—but the only program concept that tentatively demonstrated reductions in crime rates in replicated RFTs was nuisance abatement, which changes the environment in which criminals operate. (This isn’t to say that direct behavior-improvement programs can never work; one well-known program that sends nurses to visit new or expectant mothers seems to have succeeded in improving various social outcomes in replicated independent RFTs.)

And third, there is no magic. Those rare programs that do work usually lead to improvements that are quite modest, compared with the size of the problems they are meant to address or the dreams of advocates.

Razib Khan at Discover Magazine:

A friend once observed that you can’t have engineering without science, making the whole concept of “social engineering” somewhat farcical. Jim Manzi has an article in City Journal which reviews the checkered history of scientific methods as applied to humanity, What Social Science Does—and Doesn’t—Know: Our scientific ignorance of the human condition remains profound.

The criticisms of a scientific program as applied to humanity are deep, and two-pronged. As Manzi notes, the “causal density” of human phenomena makes teasing causation from correlation very difficult. Additionally, the large scale and humanistic nature of social phenomena make it ethically and practically impossible to apply methods of scientific experimentation to them. This is why social scientists look for “natural experiments,” or extrapolate from “WEIRD” subject pools. But as Manzi notes, many of the correlations themselves are highly context-sensitive and not amenable to replication.

Arnold Kling:

If David Brooks is going to give out his annual awards for most important essays, I would nominate this one.

One of the lessons that is implicit in the essay (and that I think that Manzi ought to make explicit) is, “Don’t trust one-offs.” That is, do not draw strong conclusions based on a single experiment, no matter how well constructed. Instead, wait until many experiments have been conducted, in a variety of settings and using a variety of techniques. An example of a one-off that generated a lot of recent excitement is the $320,000 kindergarten teacher study.

Mark Kleiman:

I’m sorry, but this is incoherent. What is this magical “trial-and-error process” that does what scientific inquiry can’t do? On what basis are we to determine whether a given trial led to successful or unsuccessful results? Uncontrolled before-and-after analysis, with its vulnerability to regression toward the mean? And where is the mystical “social evolution” that somehow leads fit policies to survive while killing off the unfit?

Without any social-scientific basis at all (unless you count Gary Becker’s speculations) we managed to expand incarceration by 500 percent between 1975 and the present. Is that fact – the resultant of a complicated interplay of political, bureaucratic, and professional forces – to be accepted as evidence that mass incarceration is a good policy, and the “counter-intuitive” finding that, past a given point, expanding incarceration tends, on balance, to increase crime be ignored because it’s merely social science? Should the widespread belief, implemented in policy, that only formal treatment cures substance abuse cause us to ignore the evidence to the contrary provided by both naturalistic studies and the finding of the HOPE randomized controlled trial that consistent sanctions can reliably extinguish drug-using behavior even among chronic criminally-active substance abusers?

For some reason he doesn’t specify, Manzi regards negative trial results as dispositive evidence that social innovators are silly people who don’t understand “causal density.” So he accepts – as well he should – the “counter-intuitive” result that juvenile boot camps were a bad idea. But why are those negative results so much more impressive than the finding that raising offenders’ reading scores tends to reduce their future criminality?

Surely Manzi is right to call for methodological humility and catholicism; social knowledge does not begin and end with regressions and controlled trials. But the notion that prejudices embedded in policies reflect some sort of evolutionary result, and therefore deserve our respect when they conflict with the results of careful study, really can’t be taken seriously.

Manzi responds at The American Scene:

This leads Kleiman to ask:

What is this magical “trial-and-error process” that does what scientific inquiry can’t do? On what basis are we to determine whether a given trial led to successful or unsuccessful results? Uncontrolled before-and-after analysis, with its vulnerability to regression toward the mean? And where is the mystical “social evolution” that somehow leads fit policies to survive while killing off the unfit?

I devoted a lot of time to this related group of questions in the forthcoming book. The shortest answer is that social evolution does not allow us to draw rational conclusions with scientific provenance about the effectiveness of various interventions, for methodological reasons including those that Kleiman cites. Social evolution merely renders (metaphorical) judgments about packages of policy decisions as embedded in actual institutions. This process is glacial, statistical and crude, and we live in the midst of an evolutionary stream that we don’t comprehend. But recognition of ignorance is superior to the unfounded assertion of scientific knowledge.

Kleiman then goes on to ask this:

Without any social-scientific basis at all (unless you count Gary Becker’s speculations) we managed to expand incarceration by 500 percent between 1975 and the present. Is that fact – the resultant of a complicated interplay of political, bureaucratic, and professional forces – to be accepted as evidence that mass incarceration is a good policy, and the “counter-intuitive” finding that, past a given point, expanding incarceration tends, on balance, to increase crime be ignored because it’s merely social science?

My answer is yes, it should be counted as evidence, but that it is not close to dispositive. We cannot glibly conclude that we now live in the best of all possible worlds. I devoted several chapters to trying to lay out some principles for evaluating when, why and how we should consider, initiate and retrospectively evaluate reforms to our social institutions.

Kleiman’s last question is:

Should the widespread belief, implemented in policy, that only formal treatment cures substance abuse cause us to ignore the evidence to the contrary provided by both naturalistic studies and the finding of the HOPE RCT that consistent sanctions can reliably extinguish drug-using behavior even among chronic criminally-active substance abusers?

My answer to this is no, and a large fraction of the article (and the book) is devoted to making the case that exactly such randomized trials really are the gold standard for the kind of knowledge that is required to make reliable, non-obvious predictions that rationally outweigh settled practice and even common sense. The major caveat to the evaluation of this specific program (about which Kleiman is deeply expert) is whether or not the experiment has been replicated, as I also make the argument that replication is essential to drawing valid conclusions from such experiments – the principle that Arnold Kling called in a review of the article, “Don’t trust one-offs.”

Steven Pearlstein at WaPo

Steve Sailer:

That all sounds plausible, but I’ve been a social science stats geek since 1972, when the high school debate topic was education, so I’m aware that Manzi’s implications are misleading.

First, while experiments are great, correlation studies of naturally occurring data can be extremely useful. Second, a huge number of experiments have been done in the social sciences.

Third, the social sciences have come up with a vast amount of knowledge that is useful, reliable, and nonobvious, at least to our elites.

For example, a few years ago, Mayor Bloomberg and NYC schools supremo Joel Klein decided to fix the ramshackle admissions process to the gifted schools by imposing a standardized test on all applicants. Blogger Half Sigma immediately predicted that the percentage of Asians and whites admitted would rise at the expense of blacks and Hispanics, which would cause a sizable unexpected political problem for Bloomberg and Klein. All that has come to pass.

This inevitable outcome should have been obvious to Bloomberg and Klein from a century of social science data accumulation, but it clearly was not obvious to them.

No, the biggest problem with social science research is not methodological; it’s that we just don’t like the findings. The elites of America don’t like what the social sciences have uncovered about, say, crime, education, discrimination, immigration, and so forth.

Andrew Sullivan:

But there is a concept in this crucial conservative distinction between theoretical and practical wisdom that has been missing so far: individual judgment. A social change can never be proven in advance to be the right answer to a pressing problem. We can try to understand previous examples; we can examine large randomized trials; but in the end, we have to make a judgment about the timeliness and effectiveness of certain changes. It is the ability to sense when such a moment is ripe that we used to call statesmanship. It is that quality that no wonkery can ever replace.

It is why we elect people and not algorithms.

Will Wilkinson:

In my thinking about the contrasts between Rawlsian and Hayekian liberalism, I’ve begun to think about the former as the “liberalism of respect” and the latter as the “liberalism of discovery.” The liberalism of discovery recognizes the pervasiveness of our ignorance and the necessity of liberty for the emergence of useful knowledge. I would argue that the ideal of a social order embodying respect for persons as free and equal–the ideal of the liberalism of respect–comes to seem appealing only after a society has attained a certain level of economic development and general education, and these are largely consequences of a prior history of the relatively free play of the mechanisms of discovery celebrated by liberals like Hayek and Jim. But liberals of respect have tended to overlook the conditions under which people come to find their favored ideal worth aspiring to, and so have tended to fail to acknowledge in their theories of justice the role of the institutions of discovery in creating and maintaining a society of mutual respect and fair reciprocity.

Via Sullivan, Kleiman responds to Manzi:

I suppose I’ll have to read Manzi’s book to find out how existing practices constitute “(metaphorical) judgments about packages of policy decisions;” I’m inclined to regard them as mostly mere resultants-of-forces, with little claim to deference. (Thinking that existing arrangements somehow embody tacit knowledge is a different matter from thinking that big changes are likely to have unexpected consequences, mostly bad, though both are arguments for caution about grand projects.)

I’m also less unimpressed than Manzi is with how much non-obvious stuff about humans living together the social sciences have already taught us. That supply and demand will, without regulation, come into equilibrium at some price was a dazzling and radical social-scientific claim when Adam Smith and his friends suggested it. So too for Ricardo’s analysis of comparative advantage, which, while it doesn’t fully support the free-trade religion that has grown up around it, at least creates a reasonable presumption that trade is welfare-increasing.

The superiority of reward to punishment in changing behavior; the importance of cognitive-dissonance and mean-regression effects in (mis)shaping individual and social judgments; the intractable problem of public-goods contributions; the importance of social capital; the problems created by asymmetric information and the signaling processes it supports; the crucial importance of focal points; the distinction between positive-feedback and negative-feedback processes; the distinction between zero-sum and variable-sum games; the pervasiveness of imperfect rationality in the treatment of risk and of time-value, and the consequent possibility that people will, indeed, damage themselves voluntarily: none of these was obvious when proposed, and all of them are now, I claim, sufficiently well-established to allow us to make policy choices based on them, with some confidence about likely results. (So, for that matter, is the Keynesian analysis of insufficient demand and what to do about it.)

But, if I read Manzi’s response correctly, my original comment allowed a merely verbal disagreement to exaggerate the extent of the underlying substantive disagreement. If indeed Manzi can offer some systematic analysis of how to look at existing institutions, figure out which ones might profitably be changed, try out a range of plausible changes, gather careful evidence about the results of those changes, and modify further in light of those results, then Manzi proposes what I would call a “scientific” approach to making public policy.

Manzi responds to Kleiman:

I think that he is reading my response correctly. While I don’t think that “all I meant” was that “you shouldn’t read some random paper in an economics or social-psych journal” and propose X, I certainly believe that. Most important, I acknowledge enthusiastically his “sauce for the goose is sauce for the gander” point that the recognition of our ignorance should apply to things that I theorize are good ideas, as much as it does to anything else. The law of unintended consequences does not only apply to Democratic proposals.

In fact, I have argued for supporting charter schools instead of school vouchers for exactly this reason. Even if one has the theory (as I do) that we ought to have a much more deregulated market for education, I more strongly hold the view that it is extremely difficult to predict the impacts of such drastic change, and that we should go one step at a time (even if on an experimental basis we are also testing more radical reforms at very small scale). I go into this in detail for the cases of school choice and social security privatization in the book.

Megan McArdle:

I have been reading with great interest the back-and-forth between Mark Kleiman and Jim Manzi on how much more humble we ought to be about new policy changes.  I know and like both men personally, as well as having a healthy respect for two formidable intellects, so I’ve greatly enjoyed the exchange.

Naturally, this has put me in mind of just how hard it is to predict policy outcomes–how easy it is to settle on some intuitively plausible outcome, without considering some harder-to-imagine countervailing force.

Consider the supply-siders.  The thing is intuitively appealing: when we get more money from working, we ought to be willing to work more.  And it is a mathematical truism that revenue must maximize at some point.  Why couldn’t we be on the right-hand side of the Laffer Curve?

It was entirely possible that we were; unfortunately, it wasn’t true.  And one of the reasons that supply-siders failed was that they were captivated by that one appealing intuition.  In economics, it’s known as the “substitution effect”–as your wages go up, leisure becomes more expensive relative to work, so you tend to do less of the former, more of the latter.

Unfortunately, the supply-siders missed another important effect, known as the “income effect”.  Which is to say that as you get richer, you demand more of some goods, and less of others.  And one of the goods you demand more of as you get richer–a class of goods known as “superior goods”–is leisure.

Of course, some people are so driven that they will simply work until they drop in the traces.  But most people like leisure.  So say you raise the average wage by 10%.  Suddenly people are bringing home 10% more income every hour.  Now, maybe this makes them all excited so they decide to work more.  On the other hand, maybe they decide they were happy at their old income, and now they can enjoy their old income while working 9% fewer hours.  Cutting taxes could actually reduce total output.

(We will not go into the question of how much most people can control their hours–on the one hand, most people can’t, very well, but on the other hand, those who can tend to be the high-earning types who pay most of your taxes.)

Which happens depends on which effect is stronger.  In practice, apparently neither was strong enough to thoroughly dominate, at least not when combined with employers who still demanded 40 hour weeks.  You do probably get a modest boost to GDP from tax cuts.  But you also get falling tax revenue.
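The tug-of-war McArdle describes between the two effects can be made concrete with a toy labor-supply model. This is purely illustrative: the utility functions, parameter values, and function names below are my assumptions, not anything from her post.

```python
# Toy labor-supply sketch: substitution vs. income effects of a wage change.
# Both functional forms are textbook conveniences chosen for illustration.

def cobb_douglas_hours(wage, alpha=0.6, total_time=100):
    """Hours worked under U(c, leisure) = c^alpha * leisure^(1-alpha),
    with consumption c = wage * hours. The optimum puts a fixed share of
    time into leisure, so the income and substitution effects cancel
    exactly and hours do not respond to the wage at all."""
    return alpha * total_time

def quasilinear_hours(wage, b=200, total_time=100):
    """Hours worked under U(c, leisure) = c + b*ln(leisure). Here there is
    no income effect on leisure, so a wage increase is a pure substitution
    effect: optimal leisure = b/wage, and hours rise with the wage."""
    leisure = min(total_time, b / wage)
    return total_time - leisure

# A 10% raise, from 10 to 11 per hour:
# Cobb-Douglas hours are unchanged; quasilinear hours tick up.
base, raised = quasilinear_hours(10), quasilinear_hours(11)  # 80.0, ~81.8
```

In the first case the two effects cancel exactly and the raise changes nothing; in the second, hours rise. Flip the preferences the other way and hours would fall. Which pattern real workers follow is, as McArdle says, an empirical question that intuition alone can’t settle.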

Naturally, even-handedness demands that I here expose the wrong-headedness of some liberal scheme.  And as it happens, I have one all ready in the oven here:  the chimera of reducing emergency room use.  The argument that health care reform could somehow at least partially pay for itself by keeping people from using the emergency room was always dubious.  As I, and others argued, there’s not actually that much evidence that people use the emergency room because they are uninsured–rather than because they have to work during normal business hours, are poor planners, or are afraid that immigration may somehow find them at a free clinic.

Moreover, we argued, non-emergent visits to the emergency room mostly use the spare capacity of trauma doctors; the average cost may be hundreds of dollars, but the marginal cost of slotting ear infections in when you don’t happen to have a sucking chest wound is probably pretty minimal.

But even I was not skeptical enough to predict what actually happened in Massachusetts, which is that emergency room usage went up after they implemented health care reform.


Popping Out Children Like Brooms In The Sorcerer’s Apprentice Part Of “Fantasia”

Bryan Caplan at Wall Street Journal:

Amid the Father’s Day festivities, many of us are privately asking a Scroogely question: “Having kids—what’s in it for me?” An economic perspective on happiness, nature and nurture provides an answer: Parents’ sacrifice is much smaller than it looks, and much larger than it has to be.

Most of us believe that kids used to be a valuable economic asset. They worked the farm, and supported you in retirement. In the modern world, the story goes, the economic benefits of having kids seem to have faded away. While parents today make massive personal and financial sacrifices, children barely reciprocate. When they’re young, kids monopolize the remote and complain about the food, but do little to help around the house; when you’re old, kids forget to return your calls and ignore your advice, but take it for granted that you’ll continue to pay your own bills.

Many conclude that if you value your happiness and spending money, the only way to win the modern parenting game is not to play. Low fertility looks like a sign that we’ve finally grasped the winning strategy. In almost all developed nations, the total fertility rate—the number of children the average woman can expect to have in her lifetime—is well below the replacement rate of 2.1 children. (The U.S. is a bit of an outlier, with a rate just around replacement.) Empirical happiness research seems to validate this pessimism about parenting: All else equal, people with kids are indeed less happy than people without.

[…]

A closer look at the General Social Survey also reveals that child No. 1 does almost all the damage. Otherwise identical people with one child instead of none are 5.6 percentage points less likely to be very happy. Beyond that, additional children are almost a happiness free lunch. Each child after the first reduces your probability of being very happy by a mere .6 percentage points.

Happiness researchers also neglect a plausible competing measure of kids’ impact on parents’ lives: customer satisfaction. If you want to know whether consumers are getting a good deal, it’s worth asking, “If you had to do it over again, would you make the same decision?” The only high-quality study of parents’ satisfaction dates back to a nation-wide survey of about 1,400 parents by the Research Analysis Corp. in 1976, but its results were stark: When asked, “If you had it to do over again, would you or would you not have children?” 91% of parents said yes, and only 7% expressed buyer’s remorse.

You might think that everyone rationalizes whatever decision they happened to make, but a 2003 Gallup poll found that wasn’t true. When asked, “If you had to do it over again, how many children would you have, or would you not have any at all?” 24% of childless adults over the age of 40 wanted to be child-free the second time around, and only 5% more were undecided. While you could protest that childlessness isn’t always a choice, it’s also true that many pregnancies are unplanned. Bad luck should depress the customer satisfaction of both groups, but parenthood wins hands down.

The main problem with parenting pessimists, though, is that they assume there’s no acceptable way to make parenting less work and more fun. Parents may feel like their pressure, encouragement, money and time are all that stands between their kids and failure. But decades’ worth of twin and adoption research says the opposite: Parents have a lot more room to safely maneuver than they realize, because the long-run effects of parenting on children’s outcomes are much smaller than they look.

Think about everything parents want for their children. The traits most parents hope for show family resemblance: If you’re healthy, smart, happy, educated, rich, righteous or appreciative, the same tends to be true for your parents, siblings and children. Of course, it’s difficult to tell nature from nurture. To disentangle the two, researchers known as behavioral geneticists have focused on two kinds of families: those with twins, and those that adopt. If identical twins show a stronger resemblance than fraternal twins, the reason is probably nature. If adoptees show any resemblance to the families that raised them, the reason is probably nurture.

Parents try to instill healthy habits that last a lifetime. But the two best behavioral genetic studies of life expectancy—one of 6,000 Danish twins born between 1870 and 1900, the other of 9,000 Swedish twins born between 1886 and 1925—found zero effect of upbringing. Twin studies of height, weight and even teeth reach similar conclusions. This doesn’t mean that diet, exercise and tooth-brushing don’t matter—just that parental pressure to eat right, exercise and brush your teeth after meals fails to win children’s hearts and minds.

Parents also strive to turn their children into smart and happy adults, but behavioral geneticists find little or no evidence that their effort pays off. In research including hundreds of twins who were raised apart, identical twins turn out to be much more alike in intelligence and happiness than fraternal twins, but twins raised together are barely more alike than twins raised apart. In fact, pioneering research by University of Minnesota psychologist David Lykken found that twins raised apart were more alike in happiness than twins raised together. Maybe it’s just a fluke, but it suggests that growing up together inspires people to differentiate themselves; if he’s the happy one, I’ll be the malcontent.
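The identical-versus-fraternal comparison Caplan leans on has a classic back-of-the-envelope form, often called the Falconer decomposition. A minimal sketch, with made-up illustrative correlations rather than figures from the studies he cites:

```python
# Falconer-style variance decomposition from twin correlations (a crude
# sketch of the logic behind classic twin studies; inputs are invented).

def falconer(r_mz, r_dz):
    """Identical (MZ) twins share ~100% of their genes, fraternal (DZ)
    twins ~50%, so a bigger MZ-DZ gap in trait correlation implies a
    bigger genetic share of the variance."""
    h2 = 2 * (r_mz - r_dz)   # heritability ("nature")
    c2 = 2 * r_dz - r_mz     # shared environment (upbringing)
    e2 = 1 - r_mz            # unshared environment and noise
    return h2, c2, e2

# Hypothetical trait correlations: MZ twins 0.8, DZ twins 0.45.
h2, c2, e2 = falconer(r_mz=0.8, r_dz=0.45)  # ~0.7, ~0.1, ~0.2
```

Under these invented numbers, genes would account for roughly 70% of the variance and shared upbringing only about 10%, which is the shape of result Caplan is summarizing: parenting matters, but much less than the intuitive accounting suggests.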

David Mills at First Things:

“Many conclude that if you value your happiness and spending money, the only way to win the modern parenting game is not to play. Low fertility looks like a sign that we’ve finally grasped the winning strategy,” writes Bryan Caplan in The Breeder’s Cup, published in The Wall Street Journal‘s weekend edition. Readers will remember the widely promoted study of a few years ago declaring that having children made parents less happy or, depending on the writer, outright unhappy.

In yet another example of the mainline press picking up on what our own David Goldman had been saying for years in his Spengler columns (search “demography” and “population”) and in Demographics and Depression, Caplan argues that the studies we have show that this equation of limited families with the good life is wrong. After challenging the study I just mentioned, he writes:

Happiness researchers also neglect a plausible competing measure of kids’ impact on parents’ lives: customer satisfaction. If you want to know whether consumers are getting a good deal, it’s worth asking, “If you had to do it over again, would you make the same decision?”

The only high-quality study of parents’ satisfaction dates back to a nation-wide survey of about 1,400 parents by the Research Analysis Corp. in 1976, but its results were stark: When asked, “If you had it to do over again, would you or would you not have children?” 91% of parents said yes, and only 7% expressed buyer’s remorse.

You might think that everyone rationalizes whatever decision they happened to make, but a 2003 Gallup poll found that wasn’t true. When asked, “If you had to do it over again, how many children would you have, or would you not have any at all?” 24% of childless adults over the age of 40 wanted to be child-free the second time around, and only 5% more were undecided.

While you could protest that childlessness isn’t always a choice, it’s also true that many pregnancies are unplanned. Bad luck should depress the customer satisfaction of both groups, but parenthood wins hands down.

He goes on to argue that parents could make themselves happier, but his reason is a little uncomfortable: “the long-run effects of parenting on children’s outcomes are much smaller than they look.”

Matt Zeitlin:

For Caplan, you start with what economists think, then see what voters think, and then chalk up the difference as evidence of irrationality. He, in accordance with this general faith in what economists think, proposes all sorts of reforms that would take decision-making power out of the hands of the public and into the hands of economists, like giving the Council of Economic Advisors “the power to invalidate legislation as ‘uneconomical,’” and giving college graduates an extra vote.

The problem for Caplan is that economists generally agree that voters are rational and that insomuch as voters are misinformed, it tends to cancel itself out. So, Caplan has to make a further argument for why we should trust economists on policy issues, but why we should ignore their collective judgment on whether or not voters are rational.

He seems to have pulled this trick again in regard to his arguments for why, despite the rather robust finding in happiness research that having kids decreases reported happiness, people should have lots of kids. And, to take this argument even further, that they should have kids for selfish reasons; they should do so for themselves. Now, he makes some arguments for why this research need not lead to non-fertile outcomes and why the stuff that leads to the negative happiness effects due to having kids isn’t all that useful or important, but we are still left with another case where Caplan is making a significant, contestable point that is at odds with what economists think about the issue.

I’m not saying that this is a bad thing in and of itself, but it sure puts Caplan in a weird position where he agrees with economists on everything except the stuff he devotes his time to researching and writing about.

Will Wilkinson at Megan McArdle’s place:

Bryan really struggles with the fact that children tend to have a negative effect on self-reported happiness. (Most economists are dismissive of survey evidence, but, to his credit, Bryan isn’t.) He tries to minimize the damage this finding does to his argument by pointing out that the negative effect is small for the first kid, and even smaller for additional kids. But it remains that if one is trying to maximize happiness, no kids appears to be the best bet and fewer is better than more.

Of course, self-reported happiness is just one dubiously reliable piece of evidence about the effect of kids on well-being. The trouble with Bryan’s strategy in the WSJ essay is that he resorts to even less reliable survey evidence to support his position. He cites polls that show that people tend not to report regrets about having had kids, but that a large majority of those who have not had kids say that would choose to have them if they “had it to do over again.” Now, Darwinian logic suggests that the belief that one would be better off without children will not tend to be widespread. That is, as Harvard psychologist Daniel Gilbert argues, we should expect to find conviction in the satisfactions of parenthood to be strong and all but universal whether or not those convictions reflect the truth. So one would want to check them against, say, the self-reported life satisfaction of those with and without children. Or, if one is inclined to think like an economist, one might say “talk is cheap” and check these beliefs against what people actually do.

In that case, what one finds is that increases in average levels of education, levels of disposable income, gender equality, and access to birth control — that is, increases in the ability of people (and especially women) to deliberately control the conditions of their own lives — generally lead people to choose a smaller rather than larger number of children. As far as I can tell, Bryan’s response is that it “lacks perspective” to take at face value this truly striking tendency of choice under conditions of increasing personal control. If Bryan really thinks rising education, wealth, and gender equality have somehow made us worse at evaluating the costs and benefits of children, he probably ought to turn in his economist card.

None of this is to say that there aren’t excellent reasons to have families larger than the relatively small rich-country norm. It’s just that these tend not to be the kinds of reasons economists consider “selfish.”

Razib Khan at Discover:

Being an economist, he focuses on rational individual behavior, but I want to point to another issue: group norms. In the left-liberal progressive post-graduate educated circles which I come into contact with in the USA childlessness is not uncommon, and bears no stigma (on the contrary, I hear often of implicit and explicit pressure on graduate students to forgo children for the sake of maximizing labor hour input into research over one’s lifetime from advisors). On the other hand, the norm of a two-child family is also very strong, and going above replacement brings upon you a fair amount of attention. The rationale here is often environmental, more children = more of a carbon footprint. But my friend Gregory Cochran, who is well above replacement and whose social milieu is more conservative, has stated that he perceives that more than two children is also seen as deviant in Middle American society. In other words, the reasoning may differ, but the intuition is the same (in Italy the reasoning mostly involves the cost of raising children from the perspective of parents, both in cash and time).

The numbers in the General Social Survey tell the tale. In 1972, 42% of adults had more than 2 children; in 2008, 32% did. More relevantly, in 1972, 47% of adults between the ages of 25 and 45 had more than 2 children. In 2008 the figure for that age group was 27%.

Of course the numbers mix up a lot of different subcultures. One anecdote I’d like to relate is a conversation I had with a secular left-of-center university educated couple. They expressed the aspiration toward 4 children. I asked them out of curiosity about the population control issue, and they looked at me like I was joking. It needs to be mentioned that they weren’t American, rather they were from a Northern European country which seems on the exterior to resemble the United States very much. But it reminds us of the importance of group norms in shaping life choices and expectations, the implicit framework for our explicit choices.

All that goes to my point that Bryan Caplan’s project will be most effective among demographics geared toward prioritizing individual choice, analysis and utility maximization, as opposed to relying upon the wisdom of group norms. Economists, quantitative social science and finance types, libertarians, etc.

Andrew Leonard at Salon:

But that leads us to the truly deranged part of the argument: Caplan believes that we shouldn’t be working so hard to be good parents, because, hey, the quality of our parenting doesn’t really make any difference to how our kids turn out. He cites a few behavioral genetics studies, mostly on sets of twins, that purport to show very little difference in outcomes when children with the same genetic makeup are raised by different parents.

It’s the ultimate get-out-of-jail-free parenting card!

Many find behavioral genetics depressing, but it’s great news for parents and potential parents. If you think that your kids’ future rests in your hands, you’ll probably make many painful “investments” — and feel guilty that you didn’t do more. Once you realize that your kids’ future largely rests in their own hands, you can give yourself a guilt-free break.

If you enjoy reading with your children, wonderful. But if you skip the nightly book, you’re not stunting their intelligence, ruining their chances for college or dooming them to a dead-end job. The same goes for the other dilemmas that weigh on parents’ consciences. Watching television, playing sports, eating vegetables, living in the right neighborhood: Your choices have little effect on your kids’ development, so it’s OK to relax. In fact, relaxing is better for the whole family. Riding your kids “for their own good” rarely pays off, and it may hurt how your children feel about you.

So we should have more kids, and spend less time and effort parenting them, and just kick back and enjoy the fruits of our non-labors, presumably generated when our offspring stroke our egos by visiting us in our nursing homes and telling us how cool we were for setting no curfews and letting them play videogames until they keeled over in front of their computers from lack of proper hydration.

I guess I do see a certain libertarian world view integrity here. If you judge modes of political organization from the foundational precept that good government is impossible, then why not also assume that good parenting is, if not impossible, merely useless? If you’re going to dump John Maynard Keynes then why not throw out Dr. Spock as well?

Who knew that lazy permissiveness would become a calling card of libertarian parenting ideology? I’ll concede that there are tendencies towards over-parenting in American culture that verge on the extreme, and could quite possibly be counter-productive. The frantic competition to get your baby into the best pre-school in Manhattan — a struggle that seems to start before the child is even born — may not be the most efficient use of resources. Caplan is certainly right on one point, we should relax more — relaxed parents, I would submit, are better parents. But to leap from that starting point to the contention that our choices have little effect on our children’s development seems, in my own anecdotal understanding of the world, to go too far. Even worse, it smacks of an abdication of responsibility, a surrender to the worst kind of easy rationalization. Good parenting is hard, but even if the differences we are making are only perceivable at the margins, that shouldn’t absolve us from the necessity and pleasure of making any effort at all. It’s not a winning or losing strategy: It’s a way to be in the world.

Tony Woodlief at Megan McArdle’s place:

To be sure, there are too many parents who, despite their children, remain narcissistic nimrods. But the nature of parenting is to beat that out of you. There’s just no time to spend on ourselves, at least not like we would if we didn’t have babies to wash and toys to clean up, usually in the middle of the night, after impaling our feet on them.
People are inherently self-centered, and especially in a peaceful, prosperous society, this easily leads to self-indulgence that in turn can make us weak and ignoble. There’s something to be said for ordeals — like parenting, or marriage, or tending the weak and broken — which push us into an other-orientation. When we have to care for someone, we get better at, well, caring for people. It actually takes practice, after all. I’m still trying to get it right.
I suppose an economist could make this all fit. What I’m really saying, the economist might contend, is that one element of my self-interest, in addition to enjoying a leisurely meal, and plenty of sleep, and the ability to go away on vacations without worrying about who will watch the youngsters, is not becoming (remaining?) a jerk. Kids certainly don’t guarantee that won’t happen, but they help mitigate the risk. And if we conceptualize that self-interest, in turn, as happiness, we’re right back where we started.
But I wonder if the questions would change. Instead of asking parents and non-parents whether they are happy right now, we might ask whether they are becoming more like the people they want to be. And then we might see children not as factors that may or may not be contributing to our happiness, but as opportunities to practice what most of us — perhaps me most of all — need to do more often, which is to put someone else before ourselves.

James Poulos at Ricochet:

The unique thing about children is that, at one and the same time, we both share our identity with them and don’t. In some ways, there’s no one more deeply ‘identical’ than you and your child. But in other ways, of course — marvelously awesome and frustrating ways — there’s no one more deeply different, precisely because your kid’s differences with you are so intimately connected to your own differences with him or her. That’s the amazing foundation of an astonishing kind of relationship. There’s nothing like it. Not even friendship compares.

In our broader relations with at first undifferentiated ‘others’, it makes us happy to develop friendships. There’s something inherent, I think, in the connection between friendship and happiness. A happy society is one where lots and lots of people are friends with each other — where there are ‘thick webs of social trust,’ as an academic might say. And yes, a happy family is one where relations are of a kind we’d describe in popular shorthand as ‘friendly’…but that’s not quite it. That’s not the full story, is it?

Happiness might not be beside the point of life. But the stubborn persistence of family leads me to believe that oftentimes we humans want, maybe desperately, maybe in spite of ourselves, something more than happiness. If we ignore this in our political life, we’re going to wind up with a system of laws and a power structure that cuts against the grain of that powerful human longing. And the costs of that might be very high indeed.

James Joyner:

Moreover, I’d argue that the definitions of “happiness” at work here are dubious.

My 17-month-old woke up a few minutes ago and interrupted my writing. She does that kind of thing a lot. Indeed, pretty much every morning. And when she does, I have to stop what I'm doing, usually at an inopportune time. And that makes me unhappy!

Is this momentary inconvenience outweighed by the joy she brings me? Of course.

But having kids means constant diversion from doing what you want to be doing at any given moment. And having multiple children, I'm reliably told, tends to increase that phenomenon geometrically. Indeed, parents the world over agree: Kids are a giant pain in the ass!

Those of us who are reasonably intelligent and had children by conscious decision knew all this going in. Indeed, one of the amusing things about impending first-time fatherhood is the number of people who dispense the advice "It'll change your life!" But that doesn't make the sacrifices and trade-offs less real.

While I'm a social scientist by training, I'm not a sociologist, much less steeped in the literature in question here. But I don't know that it's possible to develop measures to quantify the thousands of instances of "unhappiness" that come from the annoyances of parenthood and the less frequent but far more potent joys. And I certainly don't think it's possible to do it in a way that satisfies an economist's notion of "happiness."

UPDATE: Jennifer Senior in New York Magazine

Ezra Klein

Filed under Families

BeFri 4 Never?

Hilary Stout at NYT:

Most children naturally seek close friends. In a survey of nearly 3,000 Americans ages 8 to 24 conducted last year by Harris Interactive, 94 percent said they had at least one close friend. But the classic best-friend bond — the two special pals who share secrets and exploits, who gravitate to each other on the playground and who head out the door together every day after school — signals potential trouble for school officials intent on discouraging anything that hints of exclusivity, in part because of concerns about cliques and bullying.

“I think it is kids’ preference to pair up and have that one best friend. As adults — teachers and counselors — we try to encourage them not to do that,” said Christine Laycob, director of counseling at Mary Institute and St. Louis Country Day School in St. Louis. “We try to talk to kids and work with them to get them to have big groups of friends and not be so possessive about friends.”

“Parents sometimes say Johnny needs that one special friend,” she continued. “We say he doesn’t need a best friend.”

That attitude is a blunt manifestation of a mind-set that has led adults to become ever more involved in children’s social lives in recent years. The days when children roamed the neighborhood and played with whomever they wanted to until the streetlights came on disappeared long ago, replaced by the scheduled play date. While in the past a social slight in backyard games rarely came to teachers’ attention the next day, today an upsetting text message from one middle school student to another is often forwarded to school administrators, who frequently feel compelled to intervene in the relationship. (Ms. Laycob was speaking in an interview after spending much of the previous day dealing with a “really awful” text message one girl had sent another.) Indeed, much of the effort to encourage children to be friends with everyone is meant to head off bullying and other extreme consequences of social exclusion.

Elizabeth Scalia at The Anchoress:

Unreal. Read the article. The schools and “experts” are intrusive and unnatural. And sad.

This isn’t about what’s good for the children; it is about being better able to control adults by stripping from them any training in intimacy and interpersonal trust. Don’t let two people get together and separate themselves from the pack, or they might do something subversive, like…think differently.

This move against “best friends” is ultimately about preventing individuals from nurturing and expanding their individuality. It is about training our future adults to be unable to exist outside of the pack, the collective. The schools want you to think this is about potential bullying and the sadness of some children feeling “excluded.” But that is not what this is about.

As a kid I was the target of “the pack;” I know more than I care to about schoolyard bullies, and I can tell you that the best antidote to them was having a good friend. One good friend who shares your interests and ideas and sense of humor can erase the negative effects of the conform-or-die “pack” with which one cannot identify, “the pack” that cannot comprehend why one would not wish to join them and will not tolerate resistance.

Marc Thiessen at The American Enterprise Institute:

The absurdity of this approach is beyond measure. For one thing, it is completely at odds with real life. When kids grow up, they’re not going to be “friends with everyone.” In the real world there are people who will like you, and people who will dislike you; people who are kind, and people who are cruel; people you can trust, and people you can’t trust; people who will be there for you in good times and bad, and people who will abandon you when the going gets tough.

Childhood is when kids learn to recognize those different types of people, experience joys and disappointments of different kinds of friendships, and learn the social skills they will need to develop mature relationships later in life. As one psychologist quoted in the article puts it, “No one can teach you what a great friend is, what a fair-weather friend is, what a treacherous and betraying friend is except to have a great friend, a fair-weather friend or a treacherous and betraying friend.”

Denying kids the opportunity to have such experiences stunts their development. It also teaches kids to develop superficial relationships with lots of people, without learning how to develop deep bonds of meaning and consequence with anyone. Think about it: Who among us would tell their deepest, darkest secrets to “everyone”? Denying kids a “best friend” makes it harder to get through childhood—and makes it harder to be a successful adult one day as well.

Obviously, schools want to discourage cliques, ensure that no children are ostracized or bullied, and help those along who have trouble bonding with their peers. But the solution to such problems is not to discourage kids who do bond with their peers from doing so—or consciously separate them when they do.

This is but the latest misguided effort to protect children from the realities of life that only harms them in the long run. First came the trend to stop keeping score in childhood sports and give everyone a “participation trophy”—discouraging excellence and achievement, and shielding kids from the reality of winning and losing. Now comes a new fad of separating best friends—denying kids the magic of those first special friendships.

Jonah Goldberg at The Corner:

The stories are so familiar that there's no need to go into specifics. The experts of the helping professions want to tell you what to eat, what to drink, how to drive, how to talk, how to think. Sometimes they have a point, and as the father of a young child, I'm perfectly willing to concede that cliques and whatnot can be unhealthy or mean. But this really goes to 11.

Lisa Solod Warren at Huffington Post:

I was bullied in middle school, and I wrote a seminal article on school bullying for Brain, Child magazine a few years ago (well before the topic became so hot), and I say: Balderdash. Bullying is a problem; it can even be a tragedy. But the fact that a couple of kids bond as best friends is not the cause of bullying, and stopping best friendships is not going to be the "cure."

I have always counted myself fortunate to have a best friend as well as a couple of other women in my life with whom I am extremely close. I met my oldest best friend, Patti, when I was eight years old. Now, 46 years later, separated by hundreds of miles, we can still pick up the phone and start a conversation right in the middle. She knows my past and I know hers: all the dirty bits, the secrets, the moments we might not want to remember. She came to my father’s funeral a few months ago and I know that whatever I asked, whenever I asked it, she would be there. She knows the same of me.

She’s been there for me through a whole host of life changes. And those life changes began soon after we met in third grade. Had anyone discouraged me from clinging to her, or her to me, there would indeed have been hell to pay. And to what end? Is there any kind of scientific evidence that proves that being friends with an entire group of people without having one special person on whom one can absolutely rely is preferable? I wonder, actually, why on earth anyone would study this sort of thing in the first place. Bullying is about power. Power and insecurity. It’s something I found is often “taught” or handed down from generation to generation. Stopping kids from having one great friend whom they can trust to have their back is not going to prevent bullying. If anything, when a child doesn’t have someone he or she can trust -someone outside the family–bullying can seem even more onerous and scary than it already is. I never told my parents I was bullied. But Patti knew. And she defended me.

Razib Khan at Secular Right:

The article is in The New York Times. It’s a paper which usually tries really hard to pretend toward objective distance, but I get the sense that even the author of the piece was a bit confused by the weirdness which had infected the educational establishment.

Rod Dreher:

What crackpots. The idea that the way to decrease bullying is to deny children the opportunity to make a special friend or friends is cruel and crazy. It’s like saying that the way to stop school gun violence is to prevent anything that even looks like a gun from being brought to school — like, say, little toy soldiers pinned to a hat. No teacher or school would object to that. Oh, wait…

Filed under Education, Families

So Easy A Caveman Can Do It

Science Magazine:

The morphological features typical of Neandertals first appear in the European fossil record about 400,000 years ago (1–3). Progressively more distinctive Neandertal forms subsequently evolved until Neandertals disappeared from the fossil record about 30,000 years ago (4). During the later part of their history, Neandertals lived in Europe and Western Asia as far east as Southern Siberia (5) and as far south as the Middle East. During that time, Neandertals presumably came into contact with anatomically modern humans in the Middle East from at least 80,000 years ago (6, 7) and subsequently in Europe and Asia.

Neandertals are the sister group of all present-day humans. Thus, comparisons of the human genome to the genomes of Neandertals and apes allow features that set fully anatomically modern humans apart from other hominin forms to be identified. In particular, a Neandertal genome sequence provides a catalog of changes that have become fixed or have risen to high frequency in modern humans during the last few hundred thousand years and should be informative for identifying genes affected by positive selection since humans diverged from Neandertals.

Substantial controversy surrounds the question of whether Neandertals interbred with anatomically modern humans. Morphological features of present-day humans and early anatomically modern human fossils have been interpreted as evidence both for (8, 9) and against (10, 11) genetic exchange between Neandertals and the presumed ancestors of present-day Europeans. Similarly, analysis of DNA sequence data from present-day humans has been interpreted as evidence both for (12, 13) and against (14) a genetic contribution by Neandertals to present-day humans. The only part of the genome that has been examined from multiple Neandertals, the mitochondrial DNA (mtDNA) genome, consistently falls outside the variation found in present-day humans and thus provides no evidence for interbreeding (15–19). However, this observation does not preclude some amount of interbreeding (14, 19) or the possibility that Neandertals contributed other parts of their genomes to present-day humans (16). In contrast, the nuclear genome is composed of tens of thousands of recombining, and hence independently evolving, DNA segments that provide an opportunity to obtain a clearer picture of the relationship between Neandertals and present-day humans.

A challenge in detecting signals of gene flow between Neandertals and modern human ancestors is that the two groups share common ancestors within the last 500,000 years, which is no deeper than the nuclear DNA sequence variation within present-day humans. Thus, even if no gene flow occurred, in many segments of the genome, Neandertals are expected to be more closely related to some present-day humans than they are to each other (20). However, if Neandertals are, on average across many independent regions of the genome, more closely related to present-day humans in certain parts of the world than in others, this would strongly suggest that Neandertals exchanged parts of their genome with the ancestors of these groups.
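
The logic of that paragraph is essentially an ABBA-BABA (D-statistic) test: at sites where the chimpanzee identifies the ancestral allele, count whether the Neandertal shares the derived allele more often with one human population than another. A minimal sketch with toy single-sequence data ('A' marks the ancestral/chimp state, 'B' a derived allele); real analyses use many individuals and genome-wide site counts:

```python
def d_statistic(h1, h2, neandertal, chimp):
    """D = (ABBA - BABA) / (ABBA + BABA) over aligned biallelic sites.

    ABBA: h2 and the Neandertal share the derived allele;
    BABA: h1 and the Neandertal share it. Under no gene flow the two
    counts are expected to be equal; D > 0 suggests gene flow between
    Neandertals and the h2 population.
    """
    abba = baba = 0
    for a1, a2, n, c in zip(h1, h2, neandertal, chimp):
        if n != c:  # Neandertal carries the derived allele at this site
            if a2 == n and a1 == c:
                abba += 1
            elif a1 == n and a2 == c:
                baba += 1
    total = abba + baba
    return (abba - baba) / total if total else 0.0

h1 = "AABAB"  # e.g. an African individual
h2 = "ABBAB"  # e.g. a non-African individual
nd = "ABBBB"  # Neandertal
ch = "AAAAB"  # chimpanzee outgroup (defines the ancestral state)
print(d_statistic(h1, h2, nd, ch))  # 1.0 on this toy data
```

A consistent excess of ABBA sites when h2 is non-African, averaged across many independent regions, is exactly the signature the paragraph describes.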

Several features of DNA extracted from Late Pleistocene remains make its study challenging. The DNA is invariably degraded to a small average size of less than 200 base pairs (bp) (21, 22), it is chemically modified (21, 23–26), and extracts almost always contain only small amounts of endogenous DNA but large amounts of DNA from microbial organisms that colonized the specimens after death. Over the past 20 years, methods for ancient DNA retrieval have been developed (21, 22), largely based on the polymerase chain reaction (PCR) (27). In the case of the nuclear genome of Neandertals, four short gene sequences have been determined by PCR: fragments of the MC1R gene involved in skin pigmentation (28), a segment of the FOXP2 gene involved in speech and language (29), parts of the ABO blood group locus (30), and a taste receptor gene (31). However, although PCR of ancient DNA can be multiplexed (32), it does not allow the retrieval of a large proportion of the genome of an organism.

The development of high-throughput DNA sequencing technologies (33, 34) allows large-scale, genome-wide sequencing of random pieces of DNA extracted from ancient specimens (35–37) and has recently made it feasible to sequence genomes from late Pleistocene species (38). However, because a large proportion of the DNA present in most fossils is of microbial origin, comparison to genome sequences of closely related organisms is necessary to identify the DNA molecules that derive from the organism under study (39). In the case of Neandertals, the finished human genome sequence and the chimpanzee genome offer the opportunity to identify Neandertal DNA sequences (39, 40).

A special challenge in analyzing DNA sequences from the Neandertal nuclear genome is that most DNA fragments in a Neandertal are expected to be identical to present-day humans (41). Thus, contamination of the experiments with DNA from present-day humans may be mistaken for endogenous DNA. We first applied high-throughput sequencing to Neandertal specimens from Vindija Cave in Croatia (40, 42), a site from which cave bear remains yielded some of the first nuclear DNA sequences from the late Pleistocene in 1999 (43). Close to one million bp of nuclear DNA sequences from one bone were directly determined by high-throughput sequencing on the 454 platform (40), whereas DNA fragments from another extract from the same bone were cloned in a plasmid vector and used to sequence ~65,000 bp (42). These experiments, while demonstrating the feasibility of generating a Neandertal genome sequence, were preliminary in that they involved the transfer of DNA extracts prepared in a clean-room environment to conventional laboratories for processing and sequencing, creating an opportunity for contamination by present-day human DNA. Further analysis of the larger of these data sets (40) showed that it was contaminated with modern human DNA (44) to an extent of 11 to 40% (41). We employed a number of technical improvements, including the attachment of tagged sequence adaptors in the clean-room environment (23), to minimize the risk of contamination and determine about 4 billion bp from the Neandertal genome.

Eliza Strickland at Discover:

Researchers from Germany’s Max Planck Institute for Evolutionary Anthropology first sequenced the entire Neanderthal genome from powdered bone fragments found in Europe and dating from 40,000 years ago–a marvelous accomplishment in itself. Then, they compared the Neanderthal genome to that of five modern humans, including Africans, Europeans, and Asians. The researchers found that between 1 percent and 4 percent of the DNA in modern Europeans and Asians was inherited from Neanderthals, which suggests that the interbreeding took place after the first groups of humans left Africa.

Anthropologists have long speculated that early humans may have mated with Neanderthals, but the latest study provides the strongest evidence so far, suggesting that such encounters took place around 60,000 years ago in the Fertile Crescent region of the Middle East [The Guardian].

The study, published in Science and made available to the public for free, opens up new areas for research. Geneticists will now probe the function of the Neanderthal genes that humans have hung on to, and can also look for human genes that may have given us a competitive edge over Neanderthals.

Erik Trinkaus, an anthropologist at Washington University in St. Louis, who has long argued that Neanderthals contributed to the human genome, welcomed the study, commenting that now researchers "can get on to other things than who was having sex with who in the Pleistocene."

Science Blog:

Neanderthals lived in much of Europe and western Asia before dying out 30,000 years ago. They coexisted with humans in Europe for thousands of years, and fossil evidence led some scientists to speculate that interbreeding may have occurred there. But the Neanderthal DNA signal shows up not only in the genomes of Europeans, but also in people from East Asia and Papua New Guinea, where Neanderthals never lived.

“The scenario is not what most people had envisioned,” Green said. “We found the genetic signal of Neanderthals in all the non-African genomes, meaning that the admixture occurred early on, probably in the Middle East, and is shared with all descendants of the early humans who migrated out of Africa.”

The study did not address the functional significance of the finding that between 1 and 4 percent of the genomes of non-Africans is derived from Neanderthals. But Green said there is no evidence that anything genetically important came over from Neanderthals. “The signal is sparsely distributed across the genome, just a ‘bread crumbs’ clue of what happened in the past,” he said. “If there was something that conferred a fitness advantage, we probably would have found it already by comparing human genomes.”

The draft sequence of the Neanderthal genome is composed of more than 3 billion nucleotides–the "letters" of the genetic code (A, C, T, and G) that are strung together in DNA. The sequence was derived from DNA extracted from three Neanderthal bones found in the Vindija Cave in Croatia; smaller amounts of sequence data were also obtained from three bones from other sites. Two of the Vindija bones could be dated by carbon-dating of collagen and were found to be about 38,000 and 44,000 years old.

Deriving a genome sequence–representing the genetic code on all of an organism’s chromosomes–from such ancient DNA is a remarkable technological feat. The Neanderthal bones were not well preserved, and more than 95 percent of the DNA extracted from them came from bacteria and other organisms that had colonized the bone. The DNA itself was degraded into small fragments and had been chemically modified in many places.

Carl Zimmer at Discover:

Ideas about our own kinship to Neanderthals have swung dramatically over the years. For many decades after their initial discovery, paleoanthropologists only found Neanderthal bones in Europe. Many researchers decided, like Schaafhausen, that Neanderthals were the ancestors of living Europeans. But they were also part of a much larger lineage of humans that spanned the Old World. Their peculiar features, like the heavy brow, were just a local variation. Over the past million years, the linked populations of humans in Africa, Europe, and Asia all evolved together into modern humans.

In the 1980s, a different view emerged. All living humans could trace their ancestry to a small population in Africa perhaps 150,000 years ago. They spread out across all of Africa, and then moved into Europe and Asia about 50,000 years ago. If they encountered other hominins in their way, such as the Neanderthals, they did not interbreed. Eventually, only our own species, the African-originating Homo sapiens, was left.

The evidence scientists marshalled for this “Out of Africa” view of human evolution took the form of both fossils and genes. The stocky, heavy browed Neanderthals did not evolve smoothly into slender, flat-faced Europeans, scientists argued. Instead, modern-looking Europeans just popped up about 40,000 years ago. What’s more, they argued, those modern-looking Europeans resembled older humans from Africa.

At the time, geneticists were learning how to sequence genes and compare different versions of the same genes among individuals. Some of the first genes that scientists sequenced were in the mitochondria, little blobs in our cells that generate energy. Mitochondria also carry DNA, and they have the added attraction of being passed down only from mothers to their children. The mitochondrial DNA of Europeans was much closer to that of Asians than either was to Africans. What's more, the diversity of mitochondrial DNA among Africans was huge compared to the rest of the world. These sorts of results suggested that living humans shared a common ancestor in Africa. And the number of mutations in each branch of the human tree suggested that that common ancestor lived about 150,000 years ago, not a million years ago.
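
The diversity comparison Zimmer describes is usually quantified as nucleotide diversity (pi): the average number of pairwise differences per site between sequences sampled from a population. A minimal sketch with toy stand-in haplotypes (the sequences are invented for illustration):

```python
from itertools import combinations

def nucleotide_diversity(seqs):
    """Average pairwise differences per site among aligned sequences."""
    if len(seqs) < 2:
        return 0.0
    length = len(seqs[0])
    pairs = list(combinations(seqs, 2))
    diffs = sum(sum(a != b for a, b in zip(s1, s2)) for s1, s2 in pairs)
    return diffs / (len(pairs) * length)

african = ["ACGTA", "ACTTA", "GCGTA"]   # a more variable sample
european = ["ACGTA", "ACGTA", "ACGTC"]  # a nearly uniform sample
print(nucleotide_diversity(african) > nucleotide_diversity(european))  # True
```

Higher pi in African samples than anywhere else is the pattern that pointed to Africa as the ancestral population.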

Over the past 30 years, scientists have battled over which of these views–multi-regionalism versus Out of Africa–is right. And along the way, they’ve also developed more complex variations that fall in between the two extremes. Some have suggested, for example, that modern humans emerged out of Africa in a series of waves. Some have suggested that modern humans and other hominins interbred, leaving us with a mix of genetic material.

Reconstructing this history is important for many reasons, not the least of which is that scientists can use it to plot out the rise of the human mind. If Neanderthals could make their own jewelry 50,000 years ago, for example, they might well have had brains capable of recognizing themselves as both individuals and as members of a group. Humans are the only living animals with that package of cognitive skills. Perhaps that package had already evolved in the common ancestor of humans and Neanderthals. Or perhaps it evolved independently in both lineages.

Razib Khan at Discover

John Hawks:

If you had to sum up in a few words, what does this mean for paleoanthropology?

These scientists have given an immense gift to humanity.

I’ve been comparing it to the pictures of Earth that came back from Apollo 8. The Neandertal genome gives us a picture of ourselves, from the outside looking in. We can see, and now learn about, the essential genetic changes that make us human — the things that made our emergence as a global species possible.

And in doing so, they’ve taken a forgotten group of people — whom even most anthropologists had given up on — and they’ve restored them to their rightful place in our heritage.

Beyond that, they’ve taken all of their data and deposited it in a public database, so that the rest of us can inspect them, replicate results, and learn new things from them. High school kids can download this stuff and do science fair projects on Neandertal genomics.

This is what anthropology ought to be.

What did they sequence?

The Max Planck group obtained most of their genomic sequence from three specimens from Vindija — Vi33.16, Vi33.25, and Vi33.26. These are all postcranial fragments with minimal anatomical information. Green and colleagues were able to establish that the three bones represent different women, and that Vi33.16 and Vi33.26 may represent maternal relatives.

From these skeletons they got 5.3 billion bases of sequence. All this from an amount of bone powder about equal in mass to an aspirin pill.

Amazing. I mean, I know the folks at Max Planck are reading this. It’s inspiring to see what they’ve been able to do. These are three pieces of barely diagnostic hominin bone, and they’ve obtained literally hundreds of times more information than we have ever gotten from the fossil record of Neandertals.

I’ll describe the analyses of genetic similarity with humans in more detail below. As a brief summary, of those positions where the human genome differs from chimpanzees, Neandertals have the chimpanzee version around 12.7 percent of the time — meaning that across the genome, a Neandertal and a human will share a genetic ancestor an average of around 800,000 years ago. This is a couple hundred thousand years higher than the same number if we compare two humans to each other. The higher age of genetic common ancestors reflects partial isolation between the Neandertal population and the African populations that gave rise to most of our current genetic variation.
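Hawks's figures can be checked with a back-of-envelope calculation. Under a simple molecular clock, the fraction of human-derived sites at which Neandertals carry the chimpanzee version scales with the human-Neandertal coalescence time relative to human-chimp divergence. The ~6.5-million-year divergence figure below is an assumed round number for illustration, not a value from the interview:

```python
# Rough sketch: implied average human-Neandertal coalescence time,
# assuming an average human-chimp genetic divergence of ~6.5 million
# years (an assumption for illustration only).
human_chimp_divergence_years = 6_500_000   # assumed average genetic divergence
ancestral_fraction = 0.127                 # Neandertal matches chimp at 12.7% of sites

# Coalescence time scales linearly with the ancestral-allele fraction.
neandertal_tmrca = ancestral_fraction * human_chimp_divergence_years
print(f"Implied human-Neandertal coalescence: {neandertal_tmrca:,.0f} years")
```

This reproduces a figure in the neighborhood of the ~800,000 years Hawks cites, which is the point of his comparison: a couple hundred thousand years deeper than the average coalescence between two living humans.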

The team were able to identify 111 candidate duplications, almost all of which have some evidence of copy number variation in humans or other primates. They tentatively show that Neandertals have a bit more copy number variation than present-day humans, and identify a few loci with substantially higher copy numbers in one group or the other.

Jules Crittenden:

I recall in my own anthro days in college that was more of an open question … species or subpopulation, than it seems to have been more recently. Hawks puts us biologically in the same column. They are Homo sapiens, though he allows some paleontologists will disagree, based on morphological distinctions. But that, he notes, would make all non-Africans interspecies hybrids.

I kind of like the interspecies hybrid idea. It’s got an edgy sound to it. But Hawks also suggests that the 1 to 4 percent is only the currently discernible proportion. It could be higher, and sub-Saharan Africans could have some as yet undetected because it shows no variation from what we all have, part of the baseline, so to speak. I especially like his observation that genetically speaking, a minimum 1 percent Neanderthal genes in 5 billion people is the equivalent of 50 million Neanderthals “yawping from the rooftops,” which he suggests is not a bad genetic success rate for the Neanderthals. They weren’t evolutionary failures after all. Propagation of genetic material being the ultimate point of all our endeavors. Absent as yet is any indication of what discernible traits we might have inherited from them.

A lot of mind-bending aspects to this news. I’m good with it. I accepted being descended from slithering primordial bog sludge a long time ago. I mean one of those moments when you consider that, yeah, we have some successful molluscs and pretty nasty lizards to thank that we’re here, enjoying the good stuff. They lived and breathed, or whatever, just like us. More recent descent from thickset hairy lowbrows, that’s neither such a big surprise nor anything to sneer at. A delight that the mysterious, exotic strain known only through science and maybe vague mythic memory turns out, as Hawks says, not to have been a complete dead end. They live in us.

In fact, I was deliriously happy driving home tonight, thinking about all that grand and terrible prehistory. It’s not like anything’s changed with this news. It’s like a few years ago, when I was able to identify a location I had wondered about. The forgotten village in Kent where my father’s people … our Y chromosome … dwelt for centuries, down to the pub where they lived and poured beer, and the names of about 10 generations of them. It was like figuring out there are Saxons, Vikings, Celts and Picts up the line, and the ones who built Stonehenge, learning a little about who and what those direct barbarian forebears were. None of this past is that far in the past, after all. It all happened yesterday. And when you start to zero in on them, it’s a homecoming feeling: “There you are. I knew it.”

Ronald Bailey at Reason:

I will mention that my 23andMe genotype scan indicates my maternal haplogroup is U5a2a, which arose some 40,000 years ago; its bearers were among the first Homo sapiens colonizers of ice age Europe.

If you’re interested, go here for my column on what rights Neanderthals might claim should we ever succeed in using cloning technologies to bring them back.

Leave a comment

Filed under History, Science

New Atheists: The New Coke Of Intellectual Combatants?

David Bentley Hart in First Things:

I think I am very close to concluding that this whole “New Atheism” movement is only a passing fad—not the cultural watershed its purveyors imagine it to be, but simply one of those occasional and inexplicable marketing vogues that inevitably go the way of pet rocks, disco, prime-time soaps, and The Bridges of Madison County. This is not because I necessarily think the current “marketplace of ideas” particularly good at sorting out wise arguments from foolish. But the latest trend in à la mode godlessness, it seems to me, has by now proved itself to be so intellectually and morally trivial that it has to be classified as just a form of light entertainment, and popular culture always tires of its diversions sooner or later and moves on to other, equally ephemeral toys.

[…]

The principal source of my melancholy, however, is my firm conviction that today’s most obstreperous infidels lack the courage, moral intelligence, and thoughtfulness of their forefathers in faithlessness. What I find chiefly offensive about them is not that they are skeptics or atheists; rather, it is that they are not skeptics at all and have purchased their atheism cheaply, with the sort of boorish arrogance that might make a man believe himself a great strategist because his tanks overwhelmed a town of unarmed peasants, or a great lover because he can afford the price of admission to a brothel. So long as one can choose one’s conquests in advance, taking always the paths of least resistance, one can always imagine oneself a Napoleon or a Casanova (and even better: the one without a Waterloo, the other without the clap).

But how long can any soul delight in victories of that sort? And how long should we waste our time with the sheer banality of the New Atheists—with, that is, their childishly Manichean view of history, their lack of any tragic sense, their indifference to the cultural contingency of moral “truths,” their wanton incuriosity, their vague babblings about “religion” in the abstract, and their absurd optimism regarding the future they long for?

I am not—honestly, I am not—simply being dismissive here. The utter inconsequentiality of contemporary atheism is a social and spiritual catastrophe. Something splendid and irreplaceable has taken leave of our culture—some great moral and intellectual capacity that once inspired the more heroic expressions of belief and unbelief alike. Skepticism and atheism are, at least in their highest manifestations, noble, precious, and even necessary traditions, and even the most fervent of believers should acknowledge that both are often inspired by a profound moral alarm at evil and suffering, at the corruption of religious institutions, at psychological terrorism, at injustices either prompted or abetted by religious doctrines, at arid dogmatisms and inane fideisms, and at worldly power wielded in the name of otherworldly goods. In the best kinds of unbelief, there is something of the moral grandeur of the prophets—a deep and admirable abhorrence of those vicious idolatries that enslave minds and justify our worst cruelties.

But a true skeptic is also someone who understands that an attitude of critical suspicion is quite different from the glib abandonment of one vision of absolute truth for another—say, fundamentalist Christianity for fundamentalist materialism or something vaguely and inaccurately called “humanism.” Hume, for instance, never traded one dogmatism for another, or one facile certitude for another. He understood how radical were the implications of the skepticism he recommended, and how they struck at the foundations not only of unthinking faith, but of proud rationality as well.

A truly profound atheist is someone who has taken the trouble to understand, in its most sophisticated forms, the belief he or she rejects, and to understand the consequences of that rejection. Among the New Atheists, there is no one of whom this can be said, and the movement as a whole has yet to produce a single book or essay that is anything more than an insipidly doctrinaire and appallingly ignorant diatribe.

If that seems a harsh judgment, I can only say that I have arrived at it honestly. In the course of writing a book published just this last year, I dutifully acquainted myself not only with all the recent New Atheist bestsellers, but also with a whole constellation of other texts in the same line, and I did so, I believe, without prejudice. No matter how patiently I read, though, and no matter how Herculean the efforts I made at sympathy, I simply could not find many intellectually serious arguments in their pages, and I came finally to believe that their authors were not much concerned to make any.

What I did take away from the experience was a fairly good sense of the real scope and ambition of the New Atheist project. I came to realize that the whole enterprise, when purged of its hugely preponderant alloy of sanctimonious bombast, is reducible to only a handful of arguments, most of which consist in simple category mistakes or the kind of historical oversimplifications that are either demonstrably false or irrelevantly true. And arguments of that sort are easily dismissed, if one is hardy enough to go on pointing out the obvious with sufficient indefatigability.

The only points at which the New Atheists seem to invite any serious intellectual engagement are those at which they try to demonstrate that all the traditional metaphysical arguments for the reality of God fail. At least, this should be their most powerful line of critique, and no doubt would be if any of them could demonstrate a respectable understanding of those traditional metaphysical arguments, as well as an ability to refute them. Curiously enough, however, not even the trained philosophers among them seem able to do this. And this is, as far as I can tell, as much a result of indolence as of philosophical ineptitude. The insouciance with which, for instance, Daniel Dennett tends to approach such matters is so torpid as to verge on the reptilian. He scarcely bothers even to get the traditional “theistic” arguments right, and the few ripostes he ventures are often the ones most easily discredited.

As a rule, the New Atheists’ concept of God is simply that of some very immense and powerful being among other beings, who serves as the first cause of all other things only in the sense that he is prior to and larger than all other causes. That is, the New Atheists are concerned with the sort of God believed in by seventeenth- and eighteenth-century Deists. Dawkins, for instance, even cites with approval the old village atheist’s cavil that omniscience and omnipotence are incompatible because a God who infallibly foresaw the future would be impotent to change it—as though Christians, Jews, Muslims, Hindus, Sikhs, and so forth understood God simply as some temporal being of interminable duration who knows things as we do, as external objects of cognition, mediated to him under the conditions of space and time.

Thus, the New Atheists’ favorite argument turns out to be just a version of the old argument from infinite regress: If you try to explain the existence of the universe by asserting God created it, you have solved nothing because then you are obliged to say where God came from, and so on ad infinitum, one turtle after another, all the way down. This is a line of attack with a long pedigree, admittedly. John Stuart Mill learned it at his father’s knee. Bertrand Russell thought it more than sufficient to put paid to the whole God issue once and for all. Dennett thinks it as unanswerable today as when Hume first advanced it—although, as a professed admirer of Hume, he might have noticed that Hume quite explicitly treats it as a formidable objection only to the God of Deism, not to the God of “traditional metaphysics.” In truth, though, there could hardly be a weaker argument. To use a feeble analogy, it is rather like asserting that it is inadequate to say that light is the cause of illumination because one is then obliged to say what it is that illuminates the light, and so on ad infinitum.

Ross Douthat:

Given the durability and predictability of the arguments involved, and the amount of ink spilled on them over the years (and centuries, and millennia), it’s hard to come up with something interesting to say on the question of Christianity versus the “new” atheists. But the Orthodox theologian David Bentley Hart has now managed the trick twice: Once in his slim book “Atheist Delusions: The Christian Revolution and Its Fashionable Enemies,” which came out last year, and now in a fine essay for the latest First Things. Here’s his concluding reflection — but do read the whole thing:

If I were to choose from among the New Atheists a single figure who to my mind epitomizes the spiritual chasm that separates Nietzsche’s unbelief from theirs, I think it would be the philosopher and essayist A.C. Grayling … Couched at one juncture among [his] various arguments (all of which are pretty poor), there is something resembling a cogent point. Among the defenses of Christianity an apologist might adduce, says Grayling, would be a purely aesthetic cultural argument: But for Christianity, there would be no Renaissance art—no Annunciations or Madonnas—and would we not all be much the poorer if that were so? But, in fact, no, counters Grayling; we might rather profit from a far greater number of canvasses devoted to the lovely mythical themes of classical antiquity, and only a macabre sensibility could fail to see that “an Aphrodite emerging from the Paphian foam is an infinitely more life-enhancing image than a Deposition from the Cross.” Here Grayling almost achieves a Nietzschean moment of moral clarity.

Ignoring that leaden and almost perfectly ductile phrase “life-enhancing,” I, too—red of blood and rude of health—would have to say I generally prefer the sight of nubile beauty to that of a murdered man’s shattered corpse. The question of whether Grayling might be accused of a certain deficiency of tragic sense can be deferred here. But perhaps he would have done well, in choosing this comparison, to have reflected on the sheer strangeness, and the significance, of the historical and cultural changes that made it possible in the first place for the death of a common man at the hands of a duly appointed legal authority to become the captivating center of an entire civilization’s moral and aesthetic contemplations—and for the deaths of all common men and women perhaps to be invested thereby with a gravity that the ancient order would never have accorded them.

Here, displayed with an altogether elegant incomprehensibility in Grayling’s casual juxtaposition of the sea-born goddess and the crucified God (who is a crucified man), one catches a glimpse of the enigma of the Christian event, which Nietzsche understood and Grayling does not: the lightning bolt that broke from the cloudless sky of pagan antiquity, the long revolution that overturned the hierarchies of heaven and earth alike. One does not have to believe any of it, of course—the Christian story, its moral claims, its metaphysical systems, and so forth. But anyone who chooses to lament that event should also be willing, first, to see this image of the God-man, broken at the foot of the cross, for what it is, in the full mystery of its historical contingency, spiritual pathos, and moral novelty: that tender agony of the soul that finds the glory of God in the most abject and defeated of human forms. Only if one has succeeded in doing this can it be of any significance if one still, then, elects to turn away.

Rod Dreher:

You really should read the whole thing, especially Hart’s conclusion. Essentially he respects Nietzsche’s atheism a very great deal, though obviously he opposes it, because Hart sees that Nietzsche understands precisely what repudiating Christianity means.

Kevin Drum:

So: do the New Atheists recycle old arguments? Of course they do. But that’s not because they’re illiterate, it’s because those arguments have never been convincingly answered. All the recondite language in the world doesn’t change that, either, because the paradoxes are inherent in the ideas themselves. In the end, the English language probably just isn’t up to the task of answering them, no matter how hard you try to twist it. To say that God is best understood as an absolute plenitude of actuality doesn’t really advance the ball so much as it merely tries to hide it.

Later in the essay, perhaps recognizing that he’s exhausted the semantic possibilities here, Hart redirects his focus to the cultural impact of Christianity, suggesting that the New Atheists haven’t truly grappled with what a world without religion would be like. And perhaps they haven’t. But interior passions and social mores work both ways. Did Isaac Newton feel a deeper aesthetic connection with the infinite when he was inventing calculus or when he was absorbed in Christian mysticism? Who can say? Not me, surely, and not Hart either. Likewise, the question of whether Christianity has, on balance, been a force for moral good is only slightly more tractable. Does keeping the servants from stealing the silver really outweigh the depredations of the Crusades and the Inquisition?

But no matter how beguiling those questions are, surely the metaphysical one always comes first. To say merely that Christianity is comforting or practical — assuming you believe that — is hardly enough. You need to show that it’s true. And if you want to assert that something is true, the onus is on you to demonstrate it, not on the New Atheists to demonstrate conclusively that it isn’t. After all, in the end the only difference between Hart and Dawkins is that Hart believes in 1% of the world’s religions and Dawkins believes in 0% of them. It’s Dawkins’ job only to question that remaining 1%. It’s Hart’s job to answer him.

Andrew Sullivan:

Look: human nature being what it is, most religious people will be a dreadful example of the best version of faith you can find. Drum permits what Hitch’s book was: a grand guignol of anti-clerical, fish-barrel-shooting. It’s easy; it’s way fun; mockery of inarticulate believers has made my friend, Bill Maher, lotsa money. But it’s largely missing the real intellectual task by fighting a straw man, rather than a real and living and intelligent faith. Part of that is the fault of believers. We’ve done a lousy job of delineating a living faith for modernity.

UPDATE: Damon Linker at TNR

Kevin Drum

UPDATE #2: Sullivan responds to Drum

Drum responds to Sullivan

Sullivan responds to Drum

UPDATE #3: Kevin Drum

Joe Carter at First Things

Rod Dreher

UPDATE #4: Razib Khan at Secular Right on Carter

1 Comment

Filed under Religion