Tag Archives: Go Meta

Park Slope and The Rats of NIMBY

Elisabeth Rosenthal at NYT:

Park Slope, Brooklyn. Cape Cod, Mass. Berkeley, Calif. Three famously progressive places, right? The yin to the Tea Party yang. But just try putting a bike lane or some wind turbines in their lines of sight. And the karma can get very different.

Last week, two groups of New Yorkers who live “on or near” Prospect Park West, a prestigious address in Park Slope, filed a suit against the administration of Mayor Michael R. Bloomberg to remove a nine-month-old bike lane that has commandeered a lane previously used by cars.

In Massachusetts, the formidable opponents of Cape Wind, a proposed offshore wind farm in Nantucket Sound, include members of the Kennedy family, whose compound looks out over the body of water. In Berkeley last year, the objections of store owners and residents forced the city to shelve plans for a full bus rapid transit system (B.R.T.), a form of green mass transit in which lanes that formerly served cars are blocked off and usurped by high-capacity buses that resemble above-ground subways.

Critics in New York contend the new Prospect Park bike lane is badly designed, endangering pedestrians and snarling traffic. Cape Wind opponents argue the turbines will defile a pristine body of water. And in Berkeley, store owners worried that reduced traffic flow and parking could hurt their business.

But some supporters of high-profile green projects like these say the problem is just plain old Nimbyism — the opposition by residents to a local development of the sort that they otherwise tend to support.

Ryan Avent:

The Times piece delves into the psychology of this kind of neighborhood opposition, but what it doesn’t say is that as annoying as this is, it has a far smaller impact on net emissions than the far more common anti-development strain of NIMBYism. Bike lanes make New York City a teeny bit greener. But New York is already much, much greener than most American cities, thanks to its dense development pattern and extensive transit network. Net emissions fall a lot more when someone from Houston moves to New York than when someone from New York starts biking.

Happily, lots of people would LOVE to move to New York. This is one huge benefit we don’t need to subsidize to realize. Unhappily, the benefit is nonetheless out of reach because of the huge obstacles to new, dense construction in New York. New York can’t accommodate more people unless it builds more homes, and it can’t build more homes, for the most part, without building taller buildings. And New Yorkers fight new, tall buildings tooth and nail. They fight them on aesthetic grounds, and because they’re worried about parking and traffic, and because they’re worried about their view, and because they just think there’s enough building in New York already, thank you. And many do this while heaping massive scorn on oil executives and the Republican Party over their backward and destructive views on global warming.

Of course, the obstruction of development is offensive for lots of reasons: it makes housing and access to employment unaffordable, it reduces urban job and revenue growth, it tramples on private property rights, and so on. But the environmental hypocrisy is galling, and it’s not limited to New York. My old neighborhood, Brookland, voted overwhelmingly for Obama (about 90-10, as I recall). Many of the locals are vocally supportive of broad, lefty environmental goals. And yet, when a local businessman wants to redevelop his transit-adjacent land into a denser, mixed-use structure, the negative response is overwhelming, and residents fall over themselves to abuse local rules in order to prevent the redevelopment from happening.

This project would bring new retail with it, which would enable more local residents to walk to a retail destination. It would bring new residents, and those residents would be vastly more likely to walk or take transit to destinations than those living farther from Metro. Forget the economic benefits to the city: the people occupying the new housing units would have carbon footprints dramatically below the national average. But this basically does not matter to the NIMBYs, however much they profess to care about the environment.

To the extent that public opinion matters and can be shaped, I think it would be a huge boon for humanity for attitudes toward NIMBYism to turn decidedly negative. People should be ashamed of this behavior, which is both selfish and extravagantly dismissive of property rights.

Kevin Drum:

Earlier today, I linked to a Ryan Avent post complaining that although dense cities like New York are much greener than towns and suburbs, his lefty, environmentally-aware neighbors fight against new high-density developments in the city anyway. A little later, I had an email exchange with HW, a lefty, environmentally-aware New Yorker who thinks Ryan has it all wrong. Here’s the exchange:

HW: It is true that people living in NY have much much lower carbon footprints than those who live in lower density areas. It’s also true that it is a highly desirable place to live. So wouldn’t the way to accomplish more people living in high density areas like NY be to replicate it elsewhere? Or should we insist on cramming more people into NY against NYers’ will and make it a less desirable place to live?

Wouldn’t it be better for 8 million people to live in NY and have it serve as a beacon for a great, lower carbon footprint lifestyle? If you cram an extra million people in, sure, you lower their carbon footprints, but you may also make high density urban living far less attractive and less likely to be replicated around the country.

Avent mentions problems with parking and traffic as a throw-away, but I can tell you, the 4-5-6 running up from midtown to the Upper East Side is quite literally crammed wall-to-wall with people every morning. Parking is unlikely to be an option for anyone unwilling to spend several hundred dollars a month. And yes, another ten skyscrapers will result in the city becoming a darker and more depressing place. Not to mention the fact that the last ten high rises that went up on the Upper East Side were creatures of the housing bubble, resulting in massive losses and lots of empty units.

So would it be so terrible if we built up the downtown areas of Jersey City, White Plains and Stamford instead?

My reply: Well, that’s the funny thing. Building new high-density areas is the obvious answer here, but no one ever does it. Why? I assume it’s because it’s next to impossible to get people to move to new high-density developments. You get all the bad aspects of density without any of the good aspects of living in a big, well-established city.

It’s a conundrum. We could use more well established cities, but no one wants to live in the intermediate stages that it takes to build one. And of course, in well-established smaller towns and cities, the residents fight like crazed weasels to prevent the kind of development that they associate with crime and gangs.

I don’t really know what the answer is.

HW again: I’m not sure that’s entirely true. What about all the downtown redevelopment projects that have happened around the country? Or the urban centers that sprout up around the core of big cities like NY. Next time you are in NY, look across the East River and take a gander at Long Island City. It’s as close to midtown as the Upper East Side, easy to build there, far less expensive, and just as dense. And every single one of those luxury high rises went up in the past 12 years; it’s literally a skyline that didn’t exist 12 years ago. Jersey City is a similar story, both for residential and financial (every big bank has moved their IT back office out there). Or look at the gentrification of Brooklyn!

So why obsess on cramming a couple hundred thousand more people on the island of Manhattan, which will push it past the bursting point? It’s just not a smart premise. In fact, I’ll go further: it bears no relationship to reality. No one would stop a luxury high rise in any of the other four boroughs or right across the river in NJ and it’s just as dense and low-carbon to live in those spots. It’s just that Ryan Avent doesn’t WANT to live in those spots. He wants to live in a cheaper high rise in Manhattan (which, by the way, has seen tons of them go up already in the past decade — in the Financial District, Hell’s Kitchen, the Upper East Side). Avent should ride the 4/5/6 at 8 am every morning for a week, come back, and tell us if his article makes any sense. As a 4th generation NYer, I don’t think it even begins to.

I don’t really have a dog in this fight since I’ve lived in the leafy suburbs of Orange County all my life. But I thought this was an instructive response that was worth sharing. Back to you, Ryan.

Avent responds to the e-mail exchange:

I’m just pointing out the obvious here — many more people would like to live in Manhattan, it would be good economically and environmentally if they did, and it’s bad that local neighborhood groups are preventing them from doing so because they’re worried about their view. Further, my guess is that even without a relaxation in development rules Manhattan will cram in a couple hundred thousand more people, and demand will continue to rise; somehow, Manhattan will manage not to burst. Though it might eventually be swamped, if city-dwelling NIMBYs continue to make Houston exurbs ever more affordable relative to walkable density.

The transportation problem can be solved, in part, by better transportation policy. It is a crime that the subways are crammed while drivers use the streets of Manhattan for free, but that’s a policy failure, not a density failure. It’s also worth noting that heights fall off sharply as one moves away from the central business districts of Lower and Midtown Manhattan. If developers could build taller in surrounding neighborhoods and add residential capacity there, then more Manhattan workers could live within easy walking distance of their offices, and fewer would need to commute in by train.

Finally, let me point out that this is not about what I want. I’m not planning a move to New York, and I’m not remotely suggesting that the government should somehow mandate or encourage high-density construction. I’m simply saying that it should be easier for builders to meet market demand. It should be easier for builders to meet market demand in Manhattan, and Brooklyn, and Nassau County, and Washington, and downtown Denver, and so on. People clearly want to live in these places, and it would be really good for our economy and our environment if they were able to do so. And I find it very unfortunate that residents deriving great benefits from the amenities of their dense, urban neighborhoods are determined to deny those benefits to others.

Matthew Yglesias:

I don’t want to say too much about the debate over increased density in Manhattan because, again, ebook proposal. But one reality check on this whole subject is to note that the population of Manhattan 100 years ago was 2,331,542 people. It then hit a low of 1,428,285 in 1980 and has since risen back up to 1,629,054.

Back in 1910 there were only 92,228,496 people in the United States. Since that time, the population of the country has more than tripled to 308,745,538. And if you look at Manhattan real estate prices, it’s hardly as if population decline in Manhattan has been driven by a lack of demand for Manhattan housing. Back around 1981 when I was born, things were different. The population of the island was shrinking and large swathes of Manhattan were cheap places to live thanks to the large existing housing stock and the high crime.

Karl Smith at Modeled Behavior:

Many years ago I gave a talk entitled “Green Manhattan,” where I made the case that Metropolis was the greenest place in America.

Naturally, I got a lot of funny looks but the line that seemed to win a few converts was this: the best way to protect the environment is by keeping people out of it.

I admit I took a few liberties in the talk, not discussing how agriculture would be performed and supported, for example. Nonetheless, I think this framing breaks the intuition that green is about living with nature rather than letting nature live on its own.

Megan McArdle:

New York hasn’t actually been growing steadily; it’s been rebounding to the population of roughly 8 million that it enjoyed in 1950-70, before the population plunged in the 1970s. It’s really only in the last ten years that the population has grown much beyond where it was in 1970.

This matters because I think you can argue pretty plausibly that New York’s infrastructure has put some limits on the city’s growth–that by 1970 the city had about grown up to those limits, and that we can push beyond them only slowly. The rail and bus lines that sustain the business district are pretty much saturated, and the roads and bridges can’t really carry many more cars at peak times. Adding buses could conceivably help you handle some of the overflow, but unless those buses actually replace cars, they’ll also make traffic slower.

Unless you plan to fill the city entirely with retirees who don’t need to go to work, there’s actually not that much more room to build up New York–you could put the people there, but they wouldn’t be able to move. And even the retirees would require goods and services that choke already very congested entry and exit points. There has been peripatetic talk about switching all deliveries to night, but that would disturb the sleep of low-floor apartment dwellers, and be fantastically expensive, forcing every business to add a night shift.

At the very least, the current city dwellers are right that adding more people would add a lot more costs to them–crammed train cars, more expensive goods. In New York, much more than in other places, the competition for scarce resources like commuting space is extremely stark.

That doesn’t mean it is impossible to add a lot more people to New York. But doing so requires not just changing zoning rules–as far as I know, there’s already quite a lot of real estate in the outer boroughs that could accommodate more people, but it’s not close to transportation, so it’s not economically viable. If you want to add a lot more housing units, you also need to add considerable complementary infrastructure, starting with upgrading the rest of the subway’s Depression-era switching systems (complicated and VERY expensive because, unlike other systems, New York’s trains run 24/7). And ultimately, it’s going to mean adding more subway lines, because short of building double-decker streets, there’s no other way for enough people to move.

Those lines don’t have to go to the central business district; there’s already been some success developing alternate hubs in Queens and Brooklyn. But they do have to go from residential neighborhoods to somewhere that people work, and they have to add actual extra carrying capacity to the system–line extensions do no good if the trains are already packed to bursting over the high-traffic areas of the route.


Filed under Go Meta, Infrastructure

It Is Ezra Klein Week Here At Around The Sphere

Ezra Klein:

There’s lots of interesting stuff in Ed Glaeser’s new book, “The Triumph of the City.” One of Glaeser’s themes, for instance, is the apparent paradox of cities becoming more expensive and more crowded even as the cost of communicating over great distances has fallen dramatically. New York is a good example of this, but Silicon Valley is a better one

[…]

The overarching theme of Glaeser’s book is that cities make us smarter, more productive and more innovative. To put it plainly, they make us richer. And the evidence in favor of this point is very, very strong. But it would of course be political suicide for President Obama to say that part of winning the future is ending the raft of subsidies we devote to sustaining rural living. And the U.S. Senate is literally set up to ensure that such a policy never becomes politically plausible.

Klein again:

Yesterday afternoon, I got an e-mail from a “usda.gov” address. “Secretary Vilsack read your blog post ‘Why we still need cities’ over the weekend, and he has some thoughts and reflections, particularly about the importance of rural America,” it said. A call was set for a little later in the day. I think it’s safe to say Vilsack didn’t like the post. A lightly edited transcript of our discussion about rural America, subsidies and values follows.

Ezra Klein: Let’s talk about the post.

Tom Vilsack: I took it as a slam on rural America. Rural America is a unique and interesting place that I don’t think a lot of folks fully appreciate and understand. They don’t understand that while it represents 16 percent of America’s population, 44 percent of the military comes from rural America. It’s the source of our food, fiber and feed, and 88 percent of our renewable water resources. One of every 12 jobs in the American economy is connected in some way to what happens in rural America. It’s one of the few parts of our economy that still has a trade surplus. And sometimes people don’t realize that 90 percent of the persistent poverty counties are located in rural America.

EK: Let me stop you there for a moment. Are 90 percent of the people in persistent poverty in rural America? Or just 90 percent of the counties?

TV: Well, I’m sure that more people live in cities who are below the poverty level. In terms of abject poverty and significant poverty, there’s a lot of it in rural America.

The other thing people don’t understand is how difficult farming is. There are really three different kinds of farmers. Of the 2.1 million people who are counted as farmers, about 1.3 million of them live in a farmstead in rural America. They don’t really make any money from their operation. Then there are 600,000 people who, if you ask them what they do for a living, they’re farmers. They produce more than $10,000 but less than $250,000 in sales. Those folks are good people, they populate rural communities and support good schools and serve important functions. And those are the folks for whom I’m trying to figure out how to diversify income opportunities, help them spread out into renewable fuel sources. And then the balance of farmers, roughly 200,000 to 300,000, are commercial operations, and they do pretty well, particularly when commodity prices are high. But they have a tremendous amount of capital at risk. And they’re aging at a rapid rate, with 37 percent over 65. Who’s going to replace those folks?

EK: You keep saying that rural Americans are good and decent people, that they work hard and participate in their communities. But no one is questioning that. The issue is that people who live in cities are also good people. People who live in exurbs work hard and mow their lawns. So what does the character of rural America have to do with subsidies for rural America?

TV: It is an argument. There is a value system that’s important to support. If there’s not economic opportunity, we can’t utilize the resources of rural America. I think it’s a complicated discussion and it does start with the fact that these are good, hardworking people who feel underappreciated. When you spend 6 or 7 percent of your paycheck for groceries and people in other countries spend 20 percent, that’s partly because of these farmers.

More Klein here and here

Will Wilkinson at DiA at The Economist:

IN THIS chat with Ezra Klein, Tom Vilsack, the secretary of agriculture, offers a pandering defence of agricultural subsidies so thoroughly bereft of substance I began to fear that Mr Vilsack would be sucked into the vacuum of his mouth and disappear.

When Mr Klein first raises the subject of subsidies for sugar and corn, Mr Vilsack admirably says, “I admit and acknowledge that over a period of time, those subsidies need to be phased out.” But not yet! Vilsack immediately thereafter scrambles to defend the injurious practice. Ethanol subsidies help to wean us off foreign fuels and dampen price volatility when there is no peace in the Middle East, Mr Vilsack contends. Anyway, he continues, undoing the economic dislocation created by decades of corporate welfare for the likes of ADM and Cargill will create economic dislocation. Neither of these points is entirely lacking in merit, but they at best argue for phasing out subsidies slowly, starting now.

Mr Vilsack should have stopped here, since this is as strong as his case is ever going to be, but instead he goes on to argue that these subsidies sustain rural culture, which is a patriotic culture that honours and encourages vital military service:

[S]mall-town folks in rural America don’t feel appreciated. They feel they do a great service for America. They send their children to the military not just because it’s an opportunity, but because they have a value system from the farm: They have to give something back to the land that sustains them.

Mr Klein follows up sanely:

It sounds to me like the policy you’re suggesting here is to subsidize the military by subsidizing rural America. Why not just increase military pay? Do you believe that if there was a substantial shift in geography over the next 15 years, that we wouldn’t be able to furnish a military?

To which Mr Vilsack says:

I think we would have fewer people. There’s a value system there. Service is important for rural folks. Country is important, patriotism is important. And people grow up with that. I wish I could give you all the examples over the last two years as secretary of agriculture, where I hear people in rural America constantly being criticized, without any expression of appreciation for what they do do.

In the end, Mr Vilsack’s argument comes down to the notion that the people of rural America feel that they have lost social status, and that subsidies amount to a form of just compensation for this injury. I don’t think Mr Vilsack really believes that in the absence of welfare for farmers, the armed services would be hard-pressed to find young men and women willing to make war for the American state. He’s using willingness-to-volunteer as proof of superior patriotism, and superior patriotism is the one claim to status left to those who have no other.

Ryan Avent at Free Exchange at The Economist:

I’ll add a few comments. First, it may be that the economists who understand the economic virtues of city life aren’t doing a sufficiently good job explaining that it’s not the people in cities that contribute the extra economic punch; it’s the cities or, more exactly, the interactions between the people cities facilitate. It’s fine to love the peace of rural life. Just understand that the price of peace is isolation, which reduces productivity.

Second, the idea that economically virtuous actors deserve to be rewarded not simply with economic success but with subsidies is remarkably common in America (and elsewhere) and is not by any means a characteristic limited to rural people. I also find it strange how upset Mr Vilsack is by the fact that he “ha[s] a hard time finding journalists who will speak for them”. Agricultural interests are represented by some of the most effective lobbyists in the country, but their feelings are hurt by the fact that journalists aren’t saying how great they are? This reminds me of the argument that business leaders aren’t investing because they’re put off by the president’s populist rhetoric. When did people become so sensitive? When did hurt feelings become a sufficient justification for untold government subsidies?

Finally, what Mr Klein doesn’t mention is that rural voters are purchasing respect or dignity at the price of livelihoods in much poorer places. If Americans truly cared for the values of a rural life and truly wished to address rural poverty, they’d get rid of agricultural policies that primarily punish farmers in developing economies.

Andrew Sullivan

Arnold Kling:

Ezra Klein sounds like my clone when arguing with the Secretary of Agriculture.

James Joyner:

Essentially, Vilsack justifies subsidizing farmers on the basis that rural America is the storehouse of our values, for which he has no evidence. And he’s befuddled when confronted with someone who doesn’t take his homilies as obvious facts.

Nobody argues that America’s farmers aren’t a vital part of our economy or denies that rural areas provide a disproportionate number of our soldiers. But the notion that country folks are somehow better people or even better Americans has no basis in reality.

Jonathan Chait at TNR:

Why is it so common to praise the character of rural America? Part of it is doubtless that rural life represents the past, and we think of the past as a simpler and more honest time. But surely another element is simply that rural America is overwhelmingly white and Protestant. And completely aside from the policy ramifications, the deep-seated veneration of rural America reflects, at bottom, a prejudice few would be willing to openly spell out.


Filed under Economics, Food, Go Meta, New Media

All Your Best Blog Posts On That Economic Policy Institute’s Study

Ezra Klein:

“Republicans say that public-sector employees have become a privileged class that overburdened taxpayers can no longer afford,” write Karen Tumulty and Brady Dennis. The question, of course, is whether it’s true. Consider this analysis the Economic Policy Institute conducted comparing total compensation — that is to say, wages and health-care benefits and pensions — among public and private workers in Wisconsin. To get an apples-to-apples comparison, the study’s author controlled for experience, organizational size, gender, race, ethnicity, citizenship and disability, and then sorted the results by education

[…]

If you prefer it in non-graph form: “Wisconsin public-sector workers face an annual compensation penalty of 11%. Adjusting for the slightly fewer hours worked per week on average, these public workers still face a compensation penalty of 5% for choosing to work in the public sector.”

Jim Manzi at The American Scene:

Klein links to an executive summary to support his claim, but reading the actual paper by Jeffrey H. Keefe is instructive. Keefe took a representative sample of Wisconsin workers, and built a regression model that relates “fundamental personal characteristics and labor market skills” to compensation, and then compared public to private sector employees, after “controlling” for these factors. As far as I can see, the factors adjusted for were: years of education; years of experience; gender; race; ethnicity; disability; size of organization where the employee works; and, hours worked per year. Stripped of jargon, what Keefe asserts is that, on average, any two individuals with identical scores on each of these listed characteristics “should” be paid the same amount.

But consider Bob and Joe, two hypothetical non-disabled white males, each of whom went to work at Kohl’s Wisconsin headquarters in the summer of 2000, immediately after graduating from the University of Wisconsin. They have both remained there ever since, and each works about 50 hours per week. Bob makes $65,000 per year, and Joe makes $62,000 per year. Could you conclude that Joe is undercompensated versus Bob? Do you have enough information to know the “fundamental personal characteristics and labor market skills” of each to that degree of precision? Suppose I told you that Bob is an accountant, and Joe is a merchandise buyer.

Even if Bob and Joe are illustrative stand-ins for large groups of employees for whom idiosyncratic differences should average out, if there are systematic differences in the market realities of the skills, talents, work orientation and the like demanded by accountants as compared to buyers, then I can’t assert that either group is underpaid or overpaid because the average salary is 5% different between these two groups.

And this hypothetical example considers people with a degree from the same school working in the same industry at the same company in the same town, just in different job classifications. Keefe is considering almost any full-time employee in Wisconsin with the identical years of education, race, gender, etc. as providing labor of equivalent market value, whether they are theoretical physicists, police officers, retail store managers, accountants, salespeople, or anything else. Whether they work in Milwaukee, Madison, or a small town with a much lower cost of living. Whether their job is high-stress or low-stress. Whether they face a constant, realistic risk of being laid off any given year, or close to lifetime employment. Whether their years of education for the job are in molecular biology, or the sociology of dance. Whether they do unpredictable shift work in a factory, or 9 – 5 desk work in an office with the option to telecommute one day per week.

Keefe claims – without adjusting for an all-but-infinite number of such relevant potential differences between the weighted-average public sector worker and the weighted-average private sector worker – that his analysis is precise enough to ascribe a 5% difference in compensation to a public sector compensation “penalty.”

And the statistical tests that he claims show that the total public-private compensation gap is “statistically significant” are worse than useless; they are misleading. The whole question – as is obvious even to untrained observers – is whether or not there are material systematic differences between public and private employees that are not captured by the list of coefficients in his regression model. His statistical tests simply assume that there are not.

I don’t know if Wisconsin’s public employees are underpaid, overpaid, or paid just right. But this study sure doesn’t answer the question.
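Manzi’s omitted-variable point can be made concrete with a toy simulation. Everything below is invented for illustration — the coefficients, the "stress" variable, and the sector shares are assumptions, not figures from the Keefe study. In this toy world, the sector itself has zero true effect on pay, but an unmeasured job characteristic that happens to correlate with sector shows up in the naive regression as a spurious "penalty":

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Observed control (stand-in for education, experience, ...).
edu = rng.normal(14, 2, n)

# Sector assignment, independent of education here for simplicity.
public = (rng.random(n) < 0.3).astype(float)

# The unmeasured job characteristic: in this toy world, public jobs
# are on average lower-stress / higher-security.
stress = rng.normal(0, 1, n) + 0.5 - 1.0 * public

# True wage process: sector itself has ZERO effect; stress is
# compensated at 5% of log wage per unit.
log_wage = 1.0 + 0.08 * edu + 0.05 * stress + rng.normal(0, 0.1, n)

def ols(X, y):
    """OLS coefficients for y ~ 1 + X."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

naive = ols(np.column_stack([edu, public]), log_wage)         # omits stress
full = ols(np.column_stack([edu, public, stress]), log_wage)  # includes it

print(f"naive public-sector coefficient: {naive[2]:+.3f}")  # spurious ~5% 'penalty'
print(f"full  public-sector coefficient: {full[2]:+.3f}")   # ~0, the truth
```

The naive model recovers roughly a −5% "penalty" even though sector pays nothing by construction: the dummy simply absorbs the compensating differential for the variable left out of the regression, which is exactly the failure mode Manzi describes.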

Jason Richwine at Heritage:

Manzi is referring to “the human capital model,” which holds that workers are paid according to their skills and personal characteristics, like education and experience. Most scholars—including Andrew, myself, and Heritage’s James Sherk—use it to compare the wages of the public and private sectors. If the public sector still earns more than the private after controlling for a variety of factors, then it is said to be “overpaid” in wages. But because we cannot control for everything, Manzi is saying, the technique is not very useful.

His critique is reasonable enough, but overwrought. The human capital model has been around for three decades, and it is unlikely that economists have failed to uncover important variables that would drastically change its results. Nevertheless, there are other techniques that address most of Manzi’s concerns. An upcoming Heritage Foundation report uses a “fixed effects” approach, which follows the same people over time as they switch between the private and federal sectors. By looking at how the same person’s wage changes when he moves between sectors, a lot of unobservable traits—intelligence, extroversion, etc.—are accounted for.

In order to capture fringe benefits as well as wages, economists have also used quit rates and job queues. If public workers quit less often than private workers, we can infer (with some qualifications, of course) that there are not better options available to them. Similarly, if many more applicants apply for government jobs than there are positions—creating a “queue”—then we know that government jobs are highly desirable. Of course no methodology is perfect, but the scholarly literature can tell us a lot about pay comparisons. Andrew and I discussed this work in detail in a recent Weekly Standard article.
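The "fixed effects" idea Richwine describes can be sketched in the same invented spirit (all numbers below are assumptions). A person-level trait the researcher never observes drives both wages and who switches sectors; with two observations per person, first-differencing removes the trait and recovers a true sector effect that the pooled comparison misstates:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000  # people observed in two consecutive years

# Unobserved, time-invariant trait (ability, extroversion, ...).
ability = rng.normal(0, 1, n)

# Year 1: everyone private. Year 2: some switch to the public sector,
# and higher-ability people are more likely to switch -- the selection
# that biases a pooled comparison.
switch = rng.random(n) < np.where(ability > 0, 0.35, 0.15)
public = np.vstack([np.zeros(n), switch.astype(float)])  # shape (2, n)

true_effect = -0.05  # assumed true public-sector log-wage effect
log_wage = (1.0 + 0.10 * ability + true_effect * public
            + rng.normal(0, 0.05, (2, n)))

# Pooled comparison: contaminated by who selects into the public sector.
w, p = log_wage.ravel(), public.ravel()
pooled = w[p == 1].mean() - w[p == 0].mean()

# Fixed effects with two periods = first differences: the person
# effect (ability) subtracts out, isolating the sector effect.
d_wage = log_wage[1] - log_wage[0]
fe = d_wage[switch].mean() - d_wage[~switch].mean()

print(f"pooled estimate       : {pooled:+.3f}")  # biased toward zero
print(f"fixed-effects estimate: {fe:+.3f}")      # close to -0.05
```

Because switchers here are disproportionately high-ability, the pooled gap understates the true −5% effect, while the within-person estimate lands near it; that is the sense in which following sector-switchers "accounts for" unobservable traits.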

John Sides:

From one perspective, sure, I agree that a statistical analysis of the sort described above based on observational data can never be a true direct comparison. (Not to mention the difficulty of classifying people like me who work in the quasi-public sector.) But if you take things from the other direction, this sort of study can be valuable.

What do I mean by “the other direction,” you might ask? I mean, suppose you start, as people do, with raw numbers: Salary plus benefits = X% of the state budget. The state has Y number of employees. Average income of all Wisconsinites is Z. Then you start adjusting for hours worked, ages of the employees, etc etc, and . . . you end up with Keefe’s analysis.

My point is, people are going to make some comparisons. Comparisons aren’t so dumb as long as you realize their limitations. And once you start to compare, it makes sense to try to compare comparable cases. Taking Manzi’s criticism too strongly would leave us in the position of allowing raw numbers, and allowing pure unblemished randomized experiments, but nothing in between.

In summary:

1. Manzi’s right to emphasize that a simplistic interpretation of regression results can be misleading.

2. Regressions of observational data can be a good way of going beyond raw comparisons and averages.

Some of this discussion reminds me of the literature on the wage premium for risk, where people run regressions on salaries for comparable jobs in order to estimate how much people need to be paid to risk death or injury. My reading is that these studies can’t be trusted: if you’re not careful, you can easily estimate the value of life to be negative–after all, the riskiest jobs (lumberjack, etc.) tend to pay poorly, while the best-paying jobs (being Bill Gates, etc.) are pretty safe gigs. With care, you can get those regressions to give reasonable coefficients in the range of $1 million per life, but I don’t really see these numbers as meaning anything at all; they’re just the results of fiddling with the models until something reasonable comes out. I’m not saying that the people who do these analyses are cheating, just that they want reasonable results but the models seem too open-ended to be a good measure of risk premiums.
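The confound Gelman describes is easy to reproduce in a toy simulation (all numbers invented): if unobserved skill pushes wages up and pushes workers away from dangerous jobs, a naive wage-on-risk regression yields a negative coefficient even when the true compensating differential is positive.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Made-up data: unobserved skill raises wages and, because low-skill
# workers sort into dangerous jobs, lowers job risk.
skill = rng.normal(0, 1, n)
risk = np.clip(0.5 - 0.3 * skill + rng.normal(0, 0.2, n), 0, None)
true_premium = 2.0                     # true compensating differential per unit of risk
wage = 10 + 5 * skill + true_premium * risk + rng.normal(0, 1, n)

# Naive regression of wage on risk alone: the slope comes out negative,
# i.e. a nonsensical negative "value of life".
naive = np.polyfit(risk, wage, 1)[0]

# Controlling for skill (which real datasets cannot fully observe)
# recovers something near the true premium.
X = np.column_stack([np.ones(n), risk, skill])
controlled = np.linalg.lstsq(X, wage, rcond=None)[0][1]
print(naive, controlled)
```

The entire sign of the naive estimate here is driven by the unobserved variable, which is why Gelman says the headline numbers depend so heavily on modeling choices.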

Jonathan Cohn at TNR:

Am I certain Keefe is right? No. Having spent some time reporting on public and private sector compensation before, I can tell you that there is a lot of disagreement over the proper way to adjust the raw compensation figures to account for variables like age, education, and so on. (The debate is as much philosophical as methodological: Some conservatives argue that public employers put an artificial premium on graduate education, effectively paying more for degrees that don’t make workers better qualified.) I haven’t seen a specific refutation of Keefe’s report on Wisconsin, but if you want to read an analysis that suggests public workers, in general, are over-compensated, Andrew Biggs of the American Enterprise Institute has done work along those lines–and has a new article in the Weekly Standard summarizing his views.

But I wonder if this whole debate misses the point. Suppose public workers really do make more than private sector workers. Who’s to say that the problem is public workers making too much, rather than private sector workers making too little?

Andrew Biggs at AEI:

While we’ll have a longer piece out on Wisconsin pay soon, I figured that in response to Cohn’s post I’d raise a couple issues regarding EPI’s report.

First, we’ve found a lower salary penalty for Wisconsin public employees than EPI did (around -5 percent versus -11 percent in EPI’s study). It’s not clear what’s driving the difference, since we’re using the same data, but that’s something to track down. It’s also worth noting that both our calculations and EPI’s control for firm size; this means that essentially we’re comparing Wisconsin public employees not to all private workers, but to employees at the very largest Wisconsin firms, who tend to pay more generous salaries and benefits. Whether to control for firm size is an open question, since if a given public employee didn’t work for the government there’s a good chance he wouldn’t work at a large private firm. But readers at least should be aware of the issue.

Second, the benefits shown in the EPI report aren’t actually for Wisconsin alone. They’re an average for the “East North Central Census Division,” which comprises Illinois, Indiana, Michigan, Ohio, and Wisconsin. Because the Bureau of Labor Statistics doesn’t publish compensation data at the state level (due to small sample sizes) regional figures are the best we’ve got. The problem is, if Wisconsin government workers get relatively better benefits than public employees in other states—which seems to be part of the argument that Governor Walker is making—then these figures will understate true compensation. For instance, in practice Wisconsin public employees make essentially no contribution toward their pensions (formally they must contribute around 5 percent of pay, but their employers almost always cover it). Nationally, public employees contribute an average of around 5.7 percent of pay to their pensions.

Third, the benefit measures in the EPI study are based on what employers pay, not what employees actually receive. This matters for public-sector defined-benefit pensions, which use much more optimistic investment return assumptions than private pensions (a 7.8 percent assumed return in the Wisconsin Retirement System, versus around a 4 percent riskless return in U.S. Treasury securities) and fund their benefits accordingly. Most economists think public pensions are wrong to make these assumptions, but what matters is that employees effectively receive those higher returns whether the investments pan out or not. Adjusting for the differences in implicit returns to pensions would increase total Wisconsin compensation by around 4 percent.
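The size of this third point is easiest to see with a simple discounting example, using the two rates Biggs cites (the dollar amount and horizon are illustrative):

```python
# Present value today of a guaranteed $10,000 pension payment due in 25 years.
# The plan books the cost using its assumed 7.8% return; valuing the same
# guarantee at a ~4% riskless Treasury rate more than doubles it.
payment, years = 10_000, 25
cost_booked = payment / (1 + 0.078) ** years    # what the employer funds today
value_riskless = payment / (1 + 0.04) ** years  # what the guarantee is worth
print(round(cost_booked), round(value_riskless))
```

Because the benefit is paid regardless of how the investments perform, the gap between these two numbers is compensation the employer-cost figures never record.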

Fourth, and related, is that the EPI study omits the value of retiree health benefits, which most public workers receive but most private employees don’t. (Some very large firms still offer retiree health benefits, but they’re increasingly rare and increasingly stingy.) The value of retiree healthcare can vary significantly. For instance, most run-of-the-mill Wisconsin state retirees are offered the right to buy into the employee plan. This provides an implicit subsidy, since they’re buying at rates calculated for the working-age population rather than their own health risk. The value of this is equal to a percent or so of extra pay every year. Other employees, such as Milwaukee teachers, have almost all their premiums paid for them. Actuarial reports list these protections as costing over 17 percent of salaries, meaning that for these workers EPI’s approach would miss a lot of benefit income. In addition, even these actuarial studies value retiree health coverage at employer cost, not the benefit to the employee. A retired 60-year-old purchasing coverage in the individual market would pay significantly more than the reported cost of his public-sector retiree health plan, because individual coverage costs more than group coverage. Some studies place the cost differential at around 25 percent; the Congressional Budget Office’s health insurance model appears to assume something larger: they say that “once differences in the characteristics of nongroup versus ESI [employer sponsored insurance] policyholders are considered and different loading costs are considered, a typical nongroup policy has roughly 60 percent of the relative plan value of an average ESI policy. That finding is supported by a recent survey of nongroup and ESI premiums and relative plan values in California.” So we know something is being missed and we have good reason to believe that even when we find actuarial reports calculating the cost of retiree health coverage, it’s still an underestimate. 
Unfortunately, there’s no central data source for retiree health benefits, meaning there’s a lot of digging to get a correct answer.

Fifth, the EPI report doesn’t calculate the value of public-sector job security. In a given year, a state/local worker has less than one-third the chance of being fired or laid off as a private worker. There’s a long history in economics (back to Adam Smith, actually) of thinking in terms of “compensating wage differentials,” although it’s only in the last 20 years or so that there’s been much progress in measuring them. We took a somewhat different approach, of using financial tools to calculate the price of an insurance policy that would protect against job loss and counting the value of that insurance toward public-sector pay. In theory each should produce the same answer, but as always things are messy. There may be a way of using CPS data to get on top of this, though.

At the end of the day, I just don’t think we can make any final conclusions on state/local pay because so much of the data, particularly on the benefits end, is still too loosey-goosey. There’s just more work to be done. (At the federal level, though, the measured overpayment is so large that I’m willing to say I’m convinced.)

Ezra Klein, responding to Manzi:

Jim Manzi has posted a critique of the Economic Policy Institute’s study (PDF) suggesting that Wisconsin’s public-sector workers are underpaid relative to their private-sector counterparts. It basically boils down to the argument that this sort of thing is hard to measure. The study controls for most every observable worker characteristic that we can imagine controlling for. But there are, Manzi says, an “all-but-infinite” number of differences beyond that. Perhaps going into the public sector says something about a person’s level of ambition, or ability to take risks and tolerate stress, or tendency to innovate — something that, in turn, makes the private-sector worker worth more or less to the economy.

And fair enough. Maybe there is some systemic difference between Hispanic women with bachelor’s degrees and 20 years of work experience who put in 52-hour weeks in the public sector and Hispanic women with bachelor’s degrees and 20 years of work experience who put in 52-hour weeks in the private sector. If anyone has some evidence for that, I’m open to hearing it. But the EPI study is aimed at a very specific and very influential claim: that Wisconsin’s state and local employees are clearly overpaid. It blows that claim up. Even in Manzi’s critique, there’s nothing left of it. So at this point, the burden of proof is on those who say Wisconsin’s public employees make too much money.

Reihan Salam on Klein’s response:

I was struck by this sentence: “Even in Manzi’s critique, there’s nothing left of it.” I’ve known Jim for many years and I’ve read just about everything he’s written, including a few things that haven’t been published. I have never seen Jim write that Wisconsin’s state and local employees are clearly overpaid, or indeed that any employees are clearly overpaid. There are many right-wingers who’ve said that, but it’s not the way Jim has ever thought about the issue as far as I know.

I don’t want to put words in Jim’s mouth, but here’s what I consider a slightly more Manzian take: the problem with public sector compensation is that there is often very little clarity in terms of whether or not taxpayers are getting a good deal. One of the big reasons right-wingers are so hot for merit pay, based on my limited experience, is that they’re generally pretty comfortable with the idea of at least some public workers making much more than they are making now, provided other workers (those willing to work for less because they’re unlikely to attract better offers) are either paid less or fired.

Let me underline this point: Some public workers, like really great federal procurement officers, might very well be “underpaid,” in that they’re always on the verge of jumping ship to better opportunities, they’re stressed about money all the time when they could be using their awesome Jedi procurement skills to save taxpayers money, and we could attract other awesome people to do this job if only we weren’t such tightwads. Others might be “overpaid,” in that there are people who really like the stability of working for a “firm” that will, short of invasion and military conquest, probably exist for at least another ten years and would be open to working for a bit less money if they had no choice in the matter. Do you think we have more of the former than the latter? That’s where analyses like Keefe’s come in, to offer a rough guide to the conversation.

I would love for conservatives to do a better job of talking about public sector compensation. The basic conflict is whether we think of creating more jobs, work effort, etc., as our goal, or if our goal is to deliver a service. If the latter is our goal, we presumably want to do it in the most cost-effective way, so that we can devote our time, money, and energy to other things we like doing more. By extension, this suggests that we really do want to pay people as little as we can to get the things that we want. Or:

Reihan Salam says:

We really do want to pay people as little as we can to get the things that we want.

What a bozo!

This relentless process of delivering services and goods for less money really does destroy jobs, but, in theory at least, it allows us to create new ones. We happen to be living in a historical moment when there’s not a lot of faith in that idea, partly because we’ve seen a steady decline in labor force participation rates due to a tangle of implicit marginal tax rates, an incarceration crisis, interrelated social pathologies, and much else. I’m biased in favor of believing that we will create new job opportunities because almost everyone I’m close to works in jobs that they could not have done in the way they do them now even ten years ago. The goal is to use good public policy to bridge over transitional periods, and, by the way, a dynamic market economy is always in a transitional period.

Manzi responds to Klein:

Klein is correct to say that my post “basically boils down to the argument that this sort of thing is hard to measure.” But he then argues that the purpose of the original study was not to demonstrate that public sector workers are underpaid, but rather to rebut the claim that they are overpaid:

[T]he EPI study is aimed at a very specific and very influential claim: that Wisconsin’s state and local employees are clearly overpaid. It blows that claim up.

That may have been the author’s motivation, but here is the final conclusion of the executive summary of the report:

[P]ublic sector workers in Wisconsin earn less in annual or hourly compensation than they would earn in the private sector.

The report makes a positive claim that it has determined a compensation “penalty” for working in the public sector, and repeats it many times. My argument was that this report does not establish whether or not this claim is true.

By the same logic, it also fails to “blow up” the claim that Wisconsin’s public workers are overpaid. The methodology is inadequate to the task of establishing whether these workers are overpaid, underpaid, or paid perfectly. As the last paragraph of my post put it:

I don’t know if Wisconsin’s public employees are underpaid, overpaid, or paid just right. But this study sure doesn’t answer the question.

Statistician and political scientist Andrew Gelman has a very interesting response to my post, in which he agrees that this conclusion “sounds about right,” but cautions that the study is not “completely useless either” because this kind of adjusted comparison is better than simply comparing raw averages between public and private sector workers. I agree with that entirely. But that is, of course, a very different thing than saying that these adjustments create sufficient precision to support the bald statement, made in the report, that the author has analytically established that there is a “penalty” for working in the public sector.

Megan McArdle:

It’s obvious that this study doesn’t control for everything we can imagine, because it doesn’t even control for the matters that are of central dispute in Wisconsin: protection from being fired.  This is, as people on both sides keep noting, so extraordinarily valuable that workers are willing to give up quite a lot to get it.  And of course, a job that offers this sort of protection is likely to attract workers who especially value it.  All government jobs offer this perk, which is valuable to the workers and costly to the employers; ceteris paribus, I’d expect that other compensation would be lower to compensate.

Obviously, it also doesn’t control in any way for other job or worker characteristics that affect compensation; jobs working for state and local government are systematically different from other sorts of jobs, because so much of what the government does isn’t done by anyone else.  Though, oddly, for the teachers at the heart of this dispute, we do have a good comparison: private school teachers. And as I understand it, public school teachers have higher wages, and much better benefits, than private school teachers.

To which I expect the union’s boosters will say, “But jobs in private school are much more enjoyable–they don’t have to teach the difficult kids!”  Indeed, they’re right.  Which is exactly the point: there’s huge unobserved variable bias here.

There’s also the fact that the EPI study seems to be looking at means, which are going to be dragged upwards by a small number of highly compensated workers, particularly in the educated group.  But state and local wages are capped.  Meanwhile, some of the highest paid jobs in the private sector are in areas like commission sales, which have no counterpart in government. That means that the median government worker is probably making much more than the median private-sector worker.  This may not be true in some lucrative fields such as law and medicine–but even there, we tend to compare government lawyers to the highly paid people at white shoe firms or corporations, not the legions of struggling will-drafters and ambulance-chasers.

You can argue, of course, that this is an ideologically much more attractive income distribution.  Which highlights, I think, the core difference between the way people like Manzi and I look at this, and the way that progressives do.  I don’t think of state employment as a way to create, in miniature, my ideal labor utopia.  I think of it as a way to procure services.  I define people as being “overpaid” not if they are paid more than someone with a similar level of education, but if they are paid more than I need to pay to attract adequate workers.  To analyze that, looking at medians is probably somewhat more instructive than looking at means.

Of course I agree with Manzi that this still doesn’t really tell us whether state workers are overpaid, underpaid, or just-right-paid.  I suspect that the answer is probably “both”–adjusting for worker quality, the median government worker is probably overpaid, while in skilled specialties, salaries are probably not attracting as much of the top-flight talent as we’d ideally like.  (This is why I have been advocating, futilely, that we make it possible to pay SEC employees multiples of what the President of the United States makes.)  But as Manzi, who does this stuff for a living, will undoubtedly tell you, setting compensation is a really hard problem that no one’s got a very good handle on.  So that’s just a suspicion, based on my experience of state bureaucracies, and my best guess at the incentive effects of the current structure.  I don’t have enough data to back me up.  And neither does EPI.

More Manzi:

Have I then set up a nihilistic position that we can never know anything tolerably well because I can just keep raising these points that might matter, but are not included in the model? In effect, have I put any analyst in the impossible position of proving a negative? Not really. Here’s how you measure the accuracy of a model like this without accepting its internal assumptions: use it to make predictions for future real world experiments, and then see if its predictions are right or not. The formal name for this is falsification testing. This is what’s lacking in all of the referenced arguments in support of these models.

Human capital models, fixed effects models, and other various pattern-finding analyses are useful to help build theories, but a metaphysical debate about the “worth” of various public versus private sector jobs based upon them is fundamentally unproductive. For one thing, it won’t ever end. And as Megan McArdle correctly put it, the practical question in front of us is whether we the taxpayers can procure the public work that we want at a lower cost (or more generally, though less euphoniously, whether we are at the practical optimum on the cost-quality trade-off). If you want an analytical answer to this question, here is what I would do: randomly select some jurisdictions, job classifications or other subsets of public workers, cut their compensation, and then see if we can observe a material reduction in net value of output in these areas versus the control areas. If not, cut deeper. And keep cutting deeper, until we find our indifference point.
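Manzi's proposed experiment can be sketched in miniature (hypothetical jurisdictions and a made-up output measure, purely to show the shape of the analysis):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical: 200 jurisdictions, roughly half randomly assigned a pay cut.
n = 200
cut = rng.random(n) < 0.5                 # treatment assignment
effect_of_cut = 0.0                       # simulated truth: no output loss here
output = rng.normal(100, 10, n) + effect_of_cut * cut

# Compare mean output in treated vs. control jurisdictions, with a rough z-score.
diff = output[cut].mean() - output[~cut].mean()
se = np.sqrt(output[cut].var(ddof=1) / cut.sum()
             + output[~cut].var(ddof=1) / (~cut).sum())
z = diff / se
# |z| small -> no detectable output loss at this pay level; under Manzi's
# iterative rule, cut deeper and rerun until output measurably falls.
```

This is the falsification-testing logic he describes: the model's claim is checked against a real-world treatment effect rather than against its own internal assumptions.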

There would be obvious limitations to this approach. First, generalizing the results of initial experiments is not straightforward. Second, evaluating output is not straightforward for many areas of government. But at a minimum, and unlike the world of endlessly dueling regressions, this would at least let us see the real-world effects of various public compensation levels first-hand, and allow the public to make an informed decision about whether they prefer the net effect of a change to public sector compensation or not.

Filed under Education, Go Meta

The Asteroid Can Hit If It Means We No Longer Have To Listen To Bad Aerosmith Songs

Mark Kleiman:

When I saw that Rand Paul (R-Comedy Central) had voted against a bill outlawing the use of lasers to blind airline pilots on the grounds that “the states ought to take care of it,” I was reminded of this week’s best Onion story imagining an effort by Republicans to repeal a law providing for the destruction of an asteroid coming at the Earth.

The Onion story didn’t mention lawsuits seeking to have asteroid-destruction declared unconstitutional as a violation of the limited, delegated powers of the Federal government. But I’d be grateful if one of our libertarian-leaning readers could point me to the specific provision of the Constitution under which the Federal government could spend money on asteroid destruction. It’s not, properly speaking, defense, unless the asteroid was deliberately launched at us by the Klingons. The asteroid isn’t “in commerce” at all, so it can’t be covered by the Commerce Clause.

No doubt some socialists would assert that the reference to “the General Welfare” in the first sentence of Art. 1, Sec. 8, plus the Necessary and Proper clause at the end of that section, would cover asteroid destruction. And I might agree with them. But of course from the libertarian perspective that proves way, way too much.

So I offer this as a challenge: If you think that the doctrine of limited powers forbids much of what the federal government currently does, please explain why that same argument wouldn’t forbid spending money to shoot down an asteroid.

Footnote If your objections to “big government” are based on economics rather than constitutional law, please explain why the public-goods argument that justifies shooting down the asteroid doesn’t apply to the programs you don’t like.

Pejman Yousefzadeh:

As a libertarian-conservative, I am glad to help resolve this question. Of course, it should be noted from the outset that the framing of these kinds of questions is a common Kleimanian tactic; he tosses out an appealing public policy approach, and then dares readers to conclude that the approach may not be constitutional. I certainly agree with Kleiman that asteroid defense cannot be covered by the Commerce Clause (thank goodness that there are some limits recognized by the Left on the reach and scope of the Clause), but I don’t see why he is so quick to dismiss asteroid destruction as a defense measure merely because the asteroid was not “deliberately launched at us by the Klingons.”

Original public meaning jurisprudence assists us in showing how asteroid destruction can be justified by Art. I, Sec. 8 of the Constitution as being “for the common Defence.” I am indebted to Professor Larry Solum for his excellent and comprehensive definition of original public meaning jurisprudence, which is excerpted below:

The original-meaning version of originalism emphasizes the meaning that the Constitution (or its amendments) would have had to the relevant audience at the time of its adoptions. How would the Constitution of 1789 have been understood by an ordinary adult citizen at the time it was adopted? Of course, the same sources that are relevant to original intent are relevant to original meaning. So, for example, the debates at the Constitutional Convention in Philadelphia may shed light on the question how the Constitution produced by the Convention would have been understood by those who did not participate in the secret deliberations of the drafters. But for original-meaning originalists, other sources become of paramount importance. The ratification debates and Federalist Papers can be supplemented by evidence of ordinary usage and by the constructions placed on the Constitution by the political branches and the states in the early years after its adoption. The turn to original meaning made originalism a stronger theory and vitiated many of the powerful objections that had been made against original-intentions originalism.

This sets the stage for what is sometimes called “the New Originalism”  and also is called “Original Meaning Originalism.”   Whatever the actual origins of this theory, the conventional story identifies Antonin Scalia as having a key role.  As early as 1986, Scalia gave a speech exhorting originalists to “change the label from the Doctrine of Original Intent to the Doctrine of Original Meaning.”   The phrase “original public meaning” seems to have entered into the contemporary theoretical debates in the work of Gary Lawson  with Steven Calabresi as another “early adopter.”   The core idea of the revised theory is that the original meaning of the constitution is the original public meaning of the constitutional text.

Randy Barnett  and Keith Whittington  have played prominent roles in the development of the “New Originalism.”  Both Barnett and Whittington build their theories on a foundation of “original public meaning,” but they extend the moves made by Scalia and Lawson in a variety of interesting ways.  For the purposes of this very brief survey, perhaps their most important move is to embrace the distinction between “constitutional interpretation” understood as the enterprise of discerning the semantic content of the constitution and “constitutional construction,” which we might tentatively define as the activity of further specifying constitutional rules when the original public meaning of the text is vague (or underdeterminate for some other reason).  This distinction explicitly acknowledges what we might call “the fact of constitutional underdeterminacy.”   With this turn, original-meaning originalist explicitly embrace the idea that the original public meaning of the text “runs out” and hence that constitutional interpretation must be supplemented by constitutional construction, the results of which must be guided by something other than the semantic content of the constitutional text.

Once originalists had acknowledged that vague constitutional provisions required construction, the door was opened for a reconciliation between originalism and living constitutionalism.  The key figure in that reconciliation has been Jack Balkin, whose influential 2006 and 2007 essays Abortion and Original Meaning and Original Meaning and Constitutional Redemption have argued for a reconciliation of original meaning originalism with living constitutionalism in the form of a theory that might be called “the method of text and principle.”  Balkin has called his position on the relationship between originalism and living constitutionalism “compatibilism,” but it is important to understand that this means that an originalist approach to interpretation is consistent with a living constitutionalist approach to construction.

Per Professor Solum’s definition, we have to ask how “the common Defence” would “have been understood by an ordinary adult citizen at the time it was adopted.” Specifically, we have to demonstrate that the notion of “Defence” against a threat does not depend upon that threat being initiated by a sentient being, or group of beings. This entails showing Kleiman that the non-presence of Klingons or any other sentient beings in a scenario which features an asteroid threatening life on Earth does not prevent the necessary countermeasures from being considered constitutional as acts of “Defence.”

In order to proceed along this line of inquiry, a definition of “defence” or “defense” (however one wishes to spell it) is needed. I can think of no better lexicographical authority than Samuel Johnson’s A Dictionary of the English Language. Consider especially the following bit of information: In his book Dr Johnson’s Dictionary: The Extraordinary Story of the Book that Defined the World, the writer Henry Hitchings quoted Joseph Emerson Worcester as saying that “[Johnson’s] Dictionary has also played its part in the law, especially in the United States. Legislators are much occupied with ascertaining ‘first meanings,’ with trying to secure the literal sense of their predecessors’ legislation . . . Often it is a matter of historicizing language: to understand a law, you need to understand what its terminology meant to its original architects . . . as long as the American Constitution remains intact, Johnson’s Dictionary will have a role to play in American law.”

So, Johnson’s Dictionary was/is quite useful when it comes to analyzing bodies of American law. Now, we have to ask what Johnson wrote about the definition of the word “defence.” Well, it just so happens that we can look. Feel free to examine the definitions of “defence,” “defenceless,” “to defend,” and “defendable.” One will find that none of the definitions in question make it necessary for a threat to have been launched by some form of sentient being, or group of beings, before one can be said to organize and implement some kind of “defense/defence” against that threat via preventive measures. Absent any competing definitions of similar or greater influence, one may reasonably conclude that “an ordinary adult citizen” would not have understood “defence” to mean a countermeasure against a threat set into motion by a sentient being, or group of beings–like Klingons, for example. A “defence” can therefore be mounted against a threat that appeared or emerged sua sponte, without any sentient beings or higher intelligence having brought that threat into being, and/or having directed that threat against us.

Indeed, if Kleiman wanted to get a libertarian legal analysis regarding this issue, he might have done well to ask Glenn Reynolds, whose blog is full of posts regarding the need for asteroid defense. I recognize that Kleiman loathes Reynolds, and has nothing but contempt for him, but it perhaps would not have been a bad idea for Kleiman to put his loathing aside and consider that Reynolds’s example might indicate that there are plenty of libertarians who (a) are concerned about defending the Earth against extinction-causing asteroids, and (b) might be able to justify it (as I have) constitutionally. As a general matter, it might be best for Kleiman to consult actual lawyers regarding constitutional or statutory interpretation, before trying to navigate legal thickets on his own. I mean, it’s his blog, and he can do what he wants, but it is worth noting that past Kleimanian efforts to play lawyer have ended quite poorly.

Jonathan Adler:

This post by Mark Kleiman is a good example, in that it puts forward a laughable caricature of libertarian and originalist constitutional thought that would have been discredited with but a moment’s investigation into the question (as I noted here, and Pejman Yousefzadeh discussed here). To Prof. Kleiman’s credit, he backed off (a little) when others took the time to respond, but that a prominent, thoughtful academic would post something like this as an ostensibly thoughtful critique of right-leaning ideas says quite a bit about the state of much academic discourse.

Sasha Volokh:

I agree with Jonathan below that the Constitution (through the spending power) allows Congress to spend tax money to protect the Earth from an asteroid.

On the other hand — and at the risk of confirming Mark Kleiman in his belief that libertarians are loopy — I don’t speak for all libertarians, but I think there’s a good case to be made that taxing people to protect the Earth from an asteroid, while within Congress’s powers, is an illegitimate function of government from a moral perspective. I think it’s O.K. to violate people’s rights (e.g. through taxation) if the result is that you protect people’s rights to some greater extent (e.g. through police, courts, the military). But it’s not obvious to me that the Earth being hit by an asteroid (or, say, someone being hit by lightning or a falling tree) violates anyone’s rights; if that’s so, then I’m not sure I can justify preventing it through taxation.

Bryan Caplan once suggested the asteroid hypo to me as a reductio ad absurdum against my view. But a reductio ad absurdum doesn’t work against someone who’s willing to be absurd, and I may be willing to bite the bullet on this one.

On the other hand, if you could show that, once the impending asteroid impact became known, all hell would break loose and lots of rights be violated by looters et al. during the ensuing anarchy, I could justify the taxation as a way of preventing those rights violations; but this wouldn’t apply if, say, the asteroid impact were unknown to the public.

This does make me uncomfortable, much like my view that patents are highly useful but morally unjustifiable, so I’m open to persuasion.

Matthew Yglesias:

I think this is a mistake about how a reductio works. The mere fact that Volokh is willing to bite this bullet has no real bearing on the fact that the conclusion is clearly false, and so the argument is either logically invalid or else proceeds from false premises. I’d say “false premises.” The best liberal thinking—classical, modern, whatever—proceeds from broadly consequentialist ideas about making human beings better off.

Brad DeLong:

So not only does Sasha Volokh claim that it is immoral to tax people to blow up an asteroid (or install lightning rods, or mandate lightning rods, or pay for a tree-trimming crew on the public roads), but it is immoral to tell people of an approaching asteroid so they can scramble to safety because it will cause violations of rights through looting.

Wow.

Ilya Somin:

That said, I don’t think that Sasha’s view is necessarily ridiculous or “insane.” Any theory based on absolute respect for certain rights necessarily carries the risk that it will lead to catastrophe in some instances. Let’s say you believe that torture is always wrong. Then you would not resort to it even in a case where relatively mild torture of a terrorist is the only way to prevent a nuclear attack that kills millions. What if you think that it’s always wrong to knowingly kill innocent civilians? Then you would oppose strategic bombing even if it were the only way to defeat Nazi Germany in World War II. How about absolute rights to freedom of political speech? If you are committed to them, that means you oppose censorship even if it’s the only way to prevent Nazi or communist totalitarians from coming to power and slaughtering millions.

Many such scenarios are improbable. But over the long sweep of human history, improbable events can and do happen. Had Kerensky suppressed the Bolsheviks in 1917 (as he easily could have that summer) or had the Weimar Republic done the same with the Nazis, the world would be a vastly better place, even though most political censorship (even of evil ideologies) causes far more harm than good. A civilization-destroying asteroid attack during the next few hundred years is also a low-probability event.

Thus, the potential flaw in Sasha’s view is one that it shares with all absolutist rights theories. Scenarios like the above are one of the main reasons why I’m not a rights-absolutist myself. But I don’t believe that all the great moral theorists who endorse such views from Kant to the present are either ridiculous or “insane.”

It’s also worth noting that Sasha’s approach would in fact justify asteroid defense in virtually any plausible real world scenario. As he puts it, “if you could show that, once the impending asteroid impact became known, all hell would break loose and lots of rights be violated by looters et al. during the ensuing anarchy, I could justify the taxation as a way of preventing those rights violations; but this wouldn’t apply if, say, the asteroid impact were unknown to the public.” It’s highly unlikely that news of an impending asteroid impact whose onset was known to the government could be prevented from leaking to the general public. Even if it could, “all hell” would surely break loose after the asteroid impact, resulting in numerous violations of libertarian rights by looters, bandits, people stealing food out of desperation, and so on. Either way, Sasha’s analysis ends up justifying asteroid defense.

If I understand Sasha correctly, he’s only partially a rights absolutist. He doesn’t believe that you can ever sacrifice rights for utilitarian benefits, even truly enormous ones. But he does think that you can justify small rights violations as a way of forestalling bigger ones. Sasha is an absolutist when it comes to trading off libertarian rights for other considerations, but a maximizer when it comes to trading off rights for greater protection of those same rights in the future. Effective defense against a massive asteroid impact easily passes Sasha’s rights-maximizing test.

Obviously, I welcome correction from Sasha if I have misinterpreted his views.

Mark Kleiman:

I’m glad that Adler agrees with me – and disagrees with many Tea Party lunatics, including some recently elected to the Senate and the House – that there’s no actual Constitutional question about funding the Department of Education or National Public Radio. That, of course, was my point.

I’m also glad that Sasha is standing by his guns, thus demonstrating that my argument was not directed at a mere straw man, though his objection to spending is philosophical rather than Constitutional.

Sasha worries that his honest and forthright response might confirm me in my belief that “libertarians are loopy.” That’s certainly a reasonable concern. But I would have thought that a bigger concern would be that the conclusion is, in fact, obviously loopy, and – like any good reductio ad absurdum argument – ought to lead to a re-examination of the premises that would lead to such a loopy conclusion.

Ilya Somin is right to point out that any theory that puts an absolute constraint on action runs into problems when inaction has catastrophic consequences. But if he really can’t see the difference between torture and income taxation – can’t understand why absolute opposition to torture is not analogous to absolute opposition to public spending on public goods – then “loopy” is entirely too weak a word.

Eugene Volokh:

I leave it to others to debate the constitutional and moral merits of government spending on asteroid defense (my view is that such spending is both constitutionally permissible and morally proper, but I have nothing original to add on the subject). I just wanted to add that one side of the debate is an unusually near-literal application of the saying, “Let justice be done, though the heavens fall.”

Noah Millman at The American Scene:

An impending catastrophe – asteroid strike – threatens to kill everyone in the society. That doesn’t violate anyone’s “rights” because you don’t have a “right to life” but rather a right not to have your life taken away by somebody else against your will. Therefore, the government has no right to tax you to protect you – and everybody else – from the asteroid.

So how is the asteroid to be stopped?

Presumably, everyone in society would agree voluntarily to cooperate to stop the asteroid. That is to say: we could still have collective action, but it would have to be voluntary, not coerced.

But would everyone participate?

The government goes around, passing the hat for contributions to stop the asteroid. A certain percentage of people, though, don’t believe in asteroids. Another percentage believe that the asteroid will bring the Rapture and so must not be stopped. These people are crazy, though, and crazy people are not interesting to talk about. Let’s hope there aren’t too many and ignore them.

Some people, though, notice that there are wealthier people than them in the society, and figure those other people should shoulder the burden of saving society. These are the “free-riders.”

Now, so long as this group is relatively small, no problem. Enough people will still put up enough money to stop the collective catastrophe. But so long as that is the case, free-riding is the economically rational thing to do. Indeed, in any large enough society, free-riding is always the rational thing to do: in a society with enough people putting up enough money voluntarily to stop the asteroid, free-riding is costless; in a society without enough such people, contributing is pointless.
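
Millman’s dominance argument can be checked with a toy payoff calculation. Everything below is invented for illustration (the values `V`, `C`, `K` and the `payoff` function are hypothetical, not anything from the post): averting the asteroid is worth V to me, contributing costs C, and the effort succeeds only if at least K people pay in.

```python
# Toy check of the free-rider dominance argument. V, C, K are
# made-up numbers: the asteroid's destruction is worth V to me,
# contributing costs me C, and the defense succeeds if at least
# K people (including me, if I pay) contribute.

V, C, K = 100.0, 1.0, 50

def payoff(i_contribute, others):
    # "others" = number of contributors besides me.
    saved = (others + (1 if i_contribute else 0)) >= K
    return (V if saved else 0.0) - (C if i_contribute else 0.0)

# Enough others are already paying: free-riding is costless.
assert payoff(False, 60) > payoff(True, 60)
# Far too few others are paying: contributing is pointless.
assert payoff(False, 10) > payoff(True, 10)
# Only on the knife edge (exactly K-1 others) does my payment matter.
assert payoff(True, K - 1) > payoff(False, K - 1)
```

Unless I happen to be the pivotal contributor, free-riding weakly dominates, which is exactly why voluntary provision leans on the irrationally self-sacrificing.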

The salvation of this ultra-libertarian society, then, depends upon the existence of a sufficient number of irrationally self-sacrificing people, people who ignore their rational self-interest in order to procure a social good for the group, without regard for the amount of “free riding” going on around them.

On the assumption – which I don’t think is pushing it at all – that there are a whole lot of communal problems that require collective action to address, libertarianism is only practical in highly communitarian societies.

I don’t know that that’s a knock-down argument against libertarianism. Wikipedia is a highly communitarian activity that grew up in a highly libertarian environment (the Internet), and most of the world is free-riding.

But it’s worth stressing nonetheless, because libertarians tend to talk as if rationality will lead to the necessary level of cooperation. But it won’t. In any case of communal threat where attempted free-riders cannot be independently exposed to the threat, while contributors are protected, the rational thing to do is free-ride.


Filed under Conservative Movement, Go Meta, The Constitution

I’ll Take Skynet Is Taking Over The World For $800, Alex

Ken Jennings at Slate:

When I was selected as one of the two human players to be pitted against IBM’s “Watson” supercomputer in a special man-vs.-machine Jeopardy! exhibition match, I felt honored, even heroic. I envisioned myself as the Great Carbon-Based Hope against a new generation of thinking machines—which, if Hollywood is to be believed, will inevitably run amok, build unstoppable robot shells, and destroy us all. But at IBM’s Thomas J. Watson Research Lab, an Eero Saarinen-designed fortress in the snowy wilds of New York’s Westchester County, where the shows taped last month, I wasn’t the hero at all. I was the villain.

This was to be an away game for humanity, I realized as I walked onto the slightly-smaller-than-regulation Jeopardy! set that had been mocked up in the building’s main auditorium. In the middle of the floor was a huge image of Watson’s on-camera avatar, a glowing blue ball crisscrossed by “threads” of thought—42 threads, to be precise, an in-joke for Douglas Adams fans. The stands were full of hopeful IBM programmers and executives, whispering excitedly and pumping their fists every time their digital darling nailed a question. A Watson loss would be invigorating for Luddites and computer-phobes everywhere, but bad news for IBM shareholders.

The IBM team had every reason to be hopeful. Watson seems to represent a giant leap forward in the field of natural-language processing—the ability to understand and respond to everyday English, the way Ask Jeeves did (with uneven results) in the dot-com boom. Jeopardy! clues cover an open domain of human knowledge—every subject imaginable—and are full of booby traps for computers: puns, slang, wordplay, oblique allusions. But in just a few years, Watson has learned—yes, it learns—to deal with some of the myriad complexities of English. When it sees the word “Blondie,” it’s very good at figuring out whether Jeopardy! means the cookie, the comic strip, or the new-wave band.

I expected Watson’s bag of cognitive tricks to be fairly shallow, but I felt an uneasy sense of familiarity as its programmers briefed us before the big match: The computer’s techniques for unraveling Jeopardy! clues sounded just like mine. The machine zeroes in on key words in a clue, then combs its memory (in Watson’s case, a 15-terabyte data bank of human knowledge) for clusters of associations with those words. It rigorously checks the top hits against all the contextual information it can muster: the category name; the kind of answer being sought; the time, place, and gender hinted at in the clue; and so on. And when it feels “sure” enough, it decides to buzz. This is all an instant, intuitive process for a human Jeopardy! player, but I felt convinced that under the hood my brain was doing more or less the same thing.

Indeed, playing against Watson turned out to be a lot like any other Jeopardy! game, though out of the corner of my eye I could see that the middle player had a plasma screen for a face. Watson has lots in common with a top-ranked human Jeopardy! player: It’s very smart, very fast, speaks in an uneven monotone, and has never known the touch of a woman. But unlike us, Watson cannot be intimidated. It never gets cocky or discouraged. It plays its game coldly, implacably, always offering a perfectly timed buzz when it’s confident about an answer. Jeopardy! devotees know that buzzer skill is crucial—games between humans are more often won by the fastest thumb than the fastest brain. This advantage is only magnified when one of the “thumbs” is an electromagnetic solenoid triggered by a microsecond-precise jolt of current. I knew it would take some lucky breaks to keep up with the computer, since it couldn’t be beaten on speed.
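
The answering loop Jennings describes – key-word extraction, association lookup, contextual scoring, and a confidence-gated buzz – can be sketched roughly as follows. Everything here (the toy knowledge table, the weights, the threshold, the `answer` function) is invented for illustration; it is not Watson’s actual data, architecture, or parameters.

```python
# A minimal sketch of the loop Jennings describes: pull candidate
# answers associated with the clue's key words, score them against
# the category for context, and "buzz" only above a confidence bar.

KNOWLEDGE = {  # toy stand-in for Watson's 15-terabyte association bank
    "blondie": {"cookie": 0.3, "comic strip": 0.5, "new-wave band": 0.4},
}

def answer(clue_keywords, category, threshold=0.6):
    # Comb "memory" for clusters of associations with the key words.
    scores = {}
    for word in clue_keywords:
        for candidate, strength in KNOWLEDGE.get(word, {}).items():
            scores[candidate] = scores.get(candidate, 0.0) + strength
    if not scores:
        return None  # nothing associated: stay silent
    # Contextual check: boost candidates that echo the category name.
    for candidate in scores:
        if any(tok in category.lower() for tok in candidate.split()):
            scores[candidate] += 0.5
    best = max(scores, key=scores.get)
    # Buzz only when "sure" enough.
    return best if scores[best] >= threshold else None

print(answer(["blondie"], "Comic Strips"))  # prints: comic strip
```

The same call with category "Desserts" falls below the threshold and returns None – the toy analogue of Watson declining to buzz.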

Instapundit:

DID THE SINGULARITY just happen on Jeopardy? The Singularity is a process, more than an event, even if, from a long-term historical perspective, it may look like an event. (Kind of like the invention of agriculture looks to us now). So, yeah. “In the CNN story one of the machine’s creators admitted that he was a very poor Jeopardy player. Somehow he was able to make a machine that could do better than himself in that contest. The creators aren’t even able to follow the reasoning of the computer. The system is showing emergent complexity.”

mistermix:

I’m not a big Jeopardy geek, but my understanding is that players are surprised at how big a role button management plays in winning or losing a round. In the few minutes of the Watson game that I watched, it was pretty clear that Watson was excellent at pressing the button at exactly the right moment if it knew the answer, which is more a measure of electromechanical reflex than human-like intelligence.

To the credit of IBM engineers, Watson almost always did know the right answer. Still, there were a few bloopers, such as the final Jeopardy question from yesterday (paraphrasing): “This city has two airports, one named after a World War II hero, and the other named after a World War II battle.” Watson’s guess, “Toronto”, was just laughably bad—Lester Pearson and Billy Bishop fought in World War I, and neither person is a battle. The right answer, “Chicago”, was pretty obvious, but apparently Watson couldn’t connect Midway or O’Hare with WW II.

Mark Krikorian at The Corner:

I was on the show, in 1996 or ’97, and success is based almost entirely on your reflexes — i.e., pushing the buzzer as soon as Trebek finishes reading the question, er, the answer. (I came in second, winning a dining-room set and other fabulous parting gifts, which I had to sell to pay the taxes on them.) The benefit to society would come if we could turn Alex Trebek into Captain Dunsel.

Jim Behrle at The Awl:

If I owned a gun, it would probably be in my mouth as I type this. I don’t know how the physics of that arrangement would work, but the mood in Chez Jim is darker than Mothra’s hairy crotch. I’ve just been sitting here listening to Weird Al’s weirdly prescient “I Lost on Jeopardy” in the dark, cuddling with a tapped-out bottle of WD-40. Humanity took a hit tonight. Our valiant human heroes made it close, but that Watson tore us new assholes in our foreheads. ALL OF US. That noise you heard driving to work was your GPS system laughing at you. While you were sneezing on the D train this morning your Kindle was giving you the finger. There is blood in the water this morning and this afternoon and forever more. This wasn’t like losing some Nerdgame like chess. Who the hell even knows how to play chess? The horsies go in little circles, right? “Jeopardy!” is the game that makes dumb people feel smart. Like National Public Radio, it’s designed to make people feel superior. And we just found out that people are not superior. No, not at all.

I might personally call the whole thing a draw. I read Ken Jennings’ piece in Slate and I can tell the machine was just better at ringing the buzzer than him. If it was truly a battle of Humanity versus Accursed Frankensteinian Monstrosity there should have been one human and one monstrosity. Or one smart human, one machine and me. I could answer sportsy questions. And the rest of the time stay out of Ken’s way. No disrespect to Brad, but this is one fight that ought to have been fought one-on-one. Don’t make humans battle each other to save the world from machines. It’s too cruel. I’d sit back and let the goddamned human expert answer the tough questions. I’d just be there to figure out a way to unplug the fucking thing when no one was watching. So, here’s the lineup for this Rematch that I demand, formally, right here on The Awl—which I know everyone at IBM reads—Me, Ken and your little Betamax.

And you have to put a little more at stake than just money. For Ken, Me and the Watson. Why did they call it Watson, anyway? Wasn’t Watson just Sherlock Holmes’ butler? And Alexander Graham Bell’s friend who was in the other room and got the first phone call. Why not call the thing what it is: HYDE. Or LILITH. Or Beezelbub of the Underland? Its dark, soulless visage no doubt crushed the very spirit of our human champions. Maybe force it to wear a blonde wig. And talk in Valley Girl language. “Like Oh My God, Gag Me with a Spoon, Alex. I’ll like take like Potpourri for like $800!”

This rematch should happen on Neutral Ground. I suggest Indianapolis. Halftime at the next Super Bowl. This gives Ken a chance to put the pieces of his broken ego back together. And for me to eat some Twinkies. There probably won’t even be a Super Bowl because of the Looming Lockout, so America will just be watching commercials and various superstars mangling America’s Favorite Patriotic songs. Make IBM take their little Cabinet of Wonders on the Road. Get the military involved to make sure there are no shenanigans this time like plugging it into the Internet or texting it answers from the audience. Also, I want the damned thing to NOT be plugged into the Jeopardy game. It needs to be able to hear Alex and to read the hint on the little blue screen. How much time does it take a human to hear Alex and see it printed out and understand just what the hell the half-idiot writers of “Jeopardy!” were getting at? (Was a Dave Eggers mention really necessary during Wednesday night’s episode? The category was Non-fiction. And it’s obvious that Watson has some kind of super Amazon app embedded in its evil systems. The first 200 pages of Dave’s Heartbreaking Work of Staggering Genius were pretty good. Everything else is Twee Bullshit. “I am a dog from a short story. I am fast and strong. Too bad you know I die in the river from the title of this short story. Woooof!” I mean, seriously, “Jeopardy!” Get a library card. There are billions of other writers and I’ve seen at least 5 shows in which you’ve used some form of Dave Eggers.)

Ben Wieder at The Chronicle Of Higher Education:

The victory made one group of people very happy.

The computer-science department at the University of Texas at Austin hosted viewing parties for the first two nights of the competition.

“People were cheering for Watson,” says Ken Barker, a research scientist at Texas. “When they introduced Brad and Ken, there were a few boos in the audience.”

Texas is one of eight universities whose researchers helped develop the technology on which Watson is based. Many of the other universities hosted viewing parties for the three days of competition as well.

Mr. Barker says he was blown away by Watson’s performance on the show, particularly the computer’s ability to make sense of Jeopardy!‘s cleverly worded clues.

But the computer did make a few mistakes along the way.

Most notably, Watson incorrectly wrote “Toronto” in response to a Final Jeopardy clue in the category of U.S. Cities. Both Mr. Jennings and Mr. Rutter returned the correct response, which was Chicago.

Mr. Barker says Watson may have considered U.S. to be a synonym of America and, as such, considered Toronto, a North American city, to be a suitable response.

Raymond J. Mooney, a computer-science professor at Texas, says Final Jeopardy is the Achilles heel of the computer.

“If it didn’t have to answer that question, it wouldn’t have,” he says.

Clues in that final round are often more complicated than others in the show because they involve multiple parts.

The phrasing of the question Watson got wrong included what linguists refer to as an ellipsis, an omitted phrase whose meaning is implicit from other parts of the sentence. The clue that tripped up Watson, “Its largest airport is named for a World War II hero; its second largest, for a World War II battle,” left out “airport is named” in the second clause.

Mr. Mooney says it will be some time before the average person will be using a computer with the capabilities of Watson, but he did see one potential immediate impact from the show.

Ezra Klein:

The sentient computers of the future are going to think it pretty hilarious that a knowledge-based showdown between one of their own and a creature with a liver was ever considered a fair fight.


Filed under Go Meta, Technology, TV

“Star Wars… Nothing But Star Wars…”

Michael Lind at Salon:

On the left, technological optimists were replaced by Rousseauian romantic primitivists. In the 1970s, Green guru Amory Lovins promulgated the gospel that “hard” sources of energy like nuclear power are bad, and called for a “soft path” based on hydropower, wind and solar energy. Other Green romantics decided that even hydropower is wicked, because it is generated by dams that despoil the prehuman landscape.

The New Left of the 1960s and 1970s longed for small, participatory communities, and rejected the giant organizations that New Deal liberals had taken pride in. In the 1980s and 1990s, new urbanists converted most progressives to their nostalgia for the ephemeral rail-and-trolley based towns of the late nineteenth century. GM foods, which New Deal liberals like Franklin Roosevelt and Lyndon Johnson would have embraced as a way to feed multitudes while sparing land for wilderness, were denounced by progressives who favored “heirloom” turkey and melons that the Pilgrims might have eaten. The increasingly reactionary American left, disenchanted with nuclear power plants and rockets and suburbs, longed to quit modernity and retire to a small town with an organic farmers’ market and an oompah band playing in the town park’s bandstand.

A similar intellectual regression to infantilism took place on the right in the late twentieth century. Between the 1930s and the 1970s, conservatism was defined by big business anti-statism, not by neotraditionalism. The Republican opponents of New Deal Democrats shared the New Dealers’ faith in science, technology and large-scale industry. They just wanted business to keep more of its prerogatives.

Contrast Eisenhower-era business conservatism with the religious right of Pat Robertson, Jerry Falwell and other evangelicals and fundamentalists in the late 20th and early 21st centuries. By 2000, an entire national party, the Republicans, was intimidated by religious zealots. No Republican presidential candidate could support legal abortion or criticize the pseudoscientific “creationist” alternative to evolutionary biology. Hatred of biotechnology, in the form of GM foods and human genetic engineering, was shared by the regressives of the left and the right. First a Democratic president, Jimmy Carter, then a Republican, George W. Bush, sought votes by claiming he had been “born again” with the help of Jesus, something that no president before the 1970s would have claimed.

Today optimism about science and technology is found chiefly on the libertarian right. At least somebody still defends nuclear energy and biotechnology. But in libertarian thought, science and technology are divorced from their modernist counterparts — large-scale public and private organizations — and wedded to ideals of small producers and unregulated markets that were obsolete by the middle of the nineteenth century. Libertarian thought is half-modern, at best. To its credit, it does not share the longing of many on the left for the Shire of Frodo the Hobbit or the nostalgia of most of the contemporary right for the Little House on the Prairie.

If there was a moment when the culture of enlightened modernity in the United States gave way to the sickly culture of romantic primitivism, it was when the movie “Star Wars” premiered in 1977. A child of the 1960s, I had grown up with the optimistic vision symbolized by “Star Trek,” according to which planets, as they developed technologically and politically, graduated to membership in the United Federation of Planets, a sort of galactic League of Nations or UN. When I first watched “Star Wars,” I was deeply shocked. The representatives of the advanced, scientific, galaxy-spanning organization were now the bad guys, and the heroes were positively medieval — hereditary princes and princesses, wizards and ape-men. Aristocracy and tribalism were superior to bureaucracy. Technology was bad. Magic was good.

The Dark Age that began in the 1970s continues. Today’s conservatives, centrists, progressives — most look like regressives, by the standards of mid-20th century America. Tea Party conservatives argue that federal prohibitions on child labor are unconstitutional, that the Fourteenth Amendment should be repealed, and that the Confederates were right about states’ rights. Religious conservatives, having lost some of their political power, continue their fight against Darwinism. Fiscally conservative “centrists” in Washington share an obsession with balanced budgets that would have seemed irrational and primitive not only to Keynes but also to the 19th-century British founder of The Economist, Walter Bagehot. And while there is a dwindling remnant of modernity-minded New Deal social democrats, most of the energy on the left is found on the nostalgic farmers’ market / train-and-trolley wing of the white upper middle class.

Here’s an idea. America needs to have a neomodernist party to oppose the reigning primitivists of the right, left and center. Let everyone who opposes abortion, wants to ban GM foods and nuclear energy, hates cars and trucks and planes and loves trains and trolleys, seeks to ban suburbia, despises consumerism, and/or thinks Darwin was a fraud join the Regressive Party. Those of us who believe that the real, if exaggerated, dangers of technology, big government, big business and big labor are outweighed by their benefits can join the Modernist Party. While the Regressives secede from reality and try to build their premodern utopias on their reservations, the Modernists can resume the work of building a secular, technological, prosperous, and relatively egalitarian civilization, after a half-century detour into a Dark Age.

Cathleen Kaveny at dotCommonweal:

It strikes me that this new two-party system would also leave many Catholics without a home –for obvious reasons, which we DON’T need to discuss here. In other words, THIS IS NOT A POST ON ABORTION.

But the underlying question, which I DO want to discuss here, is what is the Catholic idea on progress?  It strikes me that it is complicated. Any ideas?

Andrew Sullivan

Daniel Larison:

One of the things that Lind’s preferred states all have in common is that they are expansive, bureaucratic, centralized states ruled by autocrats or unaccountable overseers, and they are capable of extracting far larger revenues out of their economies than their successors. Obviously, Lind finds most of these traits desirable, and he seems not terribly bothered by the autocracy. In the case of the UFP, one simply has a technocrat’s utopian post-political fantasy run riot. Indeed, the political organization of the Federation has always struck me as stunningly implausible and unrealistic even by the standards of science fiction. It was supposed to be a galactic alliance with a massive military whose primary purposes were exploration and peacekeeping, and which had overcome all social problems by dint of technological progress. If ever there were a vision to appeal to a certain type of romantic idealists with no grasp of the corrupting nature of power or the limits of human nature, this would have to be it.

Lind’s article is not very persuasive, not least since his treatment of the change from antiquity to the middle ages is seriously flawed. Lind writes:

But few would disagree that the Europe of Charlemagne was more backward in its mindset, at least at the elite level, than the Rome of Augustus or the Alexandria of the Ptolemies.

Nor are the great gains of decolonization and personal liberation in recent decades necessarily incompatible with an intellectual and cultural Dark Age. After all, the fall of the Roman empire led to the emergence of many new kingdoms, nations and city-states, and slavery withered away by the end of the Middle Ages in Europe.

Well, count me among the “few” that would disagree. For one thing, the “Europe of Charlemagne” was also the Europe of the Byzantines, and under both the Carolingians and the Macedonians later in the ninth century there was extensive cultivation of literary and artistic production that significantly undermines claims that this was an “intellectual and cultural Dark Age.” This was an era of substantial manuscript production, and one marked by the learning of Eriugena and Photios. The Carolingian period was actually one of the more significant moments of political reunification in Europe prior to the later middle ages, but it is true that Charlemagne and his successors did not have a large administrative state apparatus at their disposal. The Iconoclastic emperors in the east were hostile to religious images, but in many other respects they cultivated learning and drew on the mathematical and scientific thought that was flourishing at that time among the ‘Abbasids. Obviously, we are speaking of the elite, but it is the elites of different eras that Lind is comparing. The point is not to reverse the old prejudice against medieval Europe and direct it against classical antiquity, nor do we have to engage in Romantic idealization of medieval societies, but we should acknowledge that this approach to history that Lind offers here abuses those periods and cultures that do not flatter the assumptions or values of modern Westerners. For that matter, it distorts and misrepresents the periods and cultures moderns adopt as their precursors, because it causes them to value those periods and cultures because of how they seem to anticipate some aspect of modernity rather than on their own terms.

Ioz on Larison:

I understand that Gene Roddenberry’s retromod vision of the future had Kirk kissing Nichelle Nichols, but even before the stylish sixties gave way to the weird, hierarchical, technocratic dictatorship of The Next Generation, the United Federation of Planets played barely the part of a supernumerary. The governing organization always seemed to be Starfleet, whose motto . . . to boldly go . . . and shoot with lasers . . . Their missions of exploration always seemed to lead to armed conflict, and the bold, interracial, transspecies future had as a model of its money-free, egalitarian, merit-based society something more or less directly descended from the British Admiralty, circa Trafalgar.

Meanwhile, if we must read Star Wars as something other than someone talking that old hack and fraud Joe Campbell a leeeetle bit too seriously, then let me just remind you that the “advanced, scientific, galaxy-spanning organization” was an evil empire run by a cyborg monster and an evil wizard, and that in almost every visual detail its model was not the New goddamn Deal, but the Third fucking Reich.

Ross Douthat:

So here’s my question: What did Lind think of the prequels? Because in a sense, George Lucas addressed nearly all of Lind’s issues with the “Star Wars” universe in movies one through three. (I am bracketing the more creative interpretations of those films …) Queen Amidala of Naboo, Princess Leia’s mother, turned out to be an elected queen, who moved on to senatorial duties after serving out her term as monarch. (How a teenager managed to navigate Naboo’s version of the Iowa caucuses remains a mystery …) The once-mystical Force was given a scientific explanation, in the form of the “midichlorians,” the micro-organisms that clutter up the bloodstream of the Jedi and give them telekinetic powers as a side effect. And the lost Old Republic that the rebels fight to restore in the original films was revealed to be, well, “a sort of galactic League of Nations or UN,” with the Jedi Knights as its peacekeeping force and the lightsaber as the equivalent of the blue helmet.

For Lind, then, I can only assume that watching the prequels was an immensely gratifying experience. And for the rest of us, the knowledge that Lind’s prescription for “Star Wars” helped produce three of the most disappointing science-fiction blockbusters ever made should be reason enough to reject his prescription for America without a second thought.


Filed under Go Meta, Movies

This Really Annoys People. Yes It Does. Oh Yes, It Does.

Farhad Manjoo at Slate:

Last month, Gawker published a series of messages that WikiLeaks founder Julian Assange had once written to a 19-year-old girl he’d become infatuated with. Gawker called the e-mails “creepy,” “lovesick,” and “stalkery”; I’d add overwrought, self-important, and dorky. (“Our intimacy seems like the memory of a strange dream to me,” went a typical line.) Still, given all we’ve heard about Assange’s puffed-up personality, the substance of his e-mail was pretty unsurprising. What really surprised me was his typography.

Here’s a fellow who’s been using computers since at least the mid-1980s, a guy whose globetrotting tech-wizardry has come to symbolize all that’s revolutionary about the digital age. Yet when he sits down to type, Julian Assange reverts to an antiquated habit that would not have been out of place in the secretarial pools of the 1950s: He uses two spaces after every period. Which—for the record—is totally, completely, utterly, and inarguably wrong.

Oh, Assange is by no means alone. Two-spacers are everywhere, their ugly error crossing every social boundary of class, education, and taste. You’d expect, for instance, that anyone savvy enough to read Slate would know the proper rules of typing, but you’d be wrong; every third e-mail I get from readers includes the two-space error. (In editing letters for “Dear Farhad,” my occasional tech-advice column, I’ve removed enough extra spaces to fill my forthcoming volume of melancholy epic poetry, The Emptiness Within.) The public relations profession is similarly ignorant; I’ve received press releases and correspondence from the biggest companies in the world that are riddled with extra spaces. Some of my best friends are irredeemable two-spacers, too, and even my wife has been known to use an unnecessary extra space every now and then (though she points out that she does so only when writing to other two-spacers, just to make them happy).

What galls me about two-spacers isn’t just their numbers. It’s their certainty that they’re right. Over Thanksgiving dinner last year, I asked people what they considered to be the “correct” number of spaces between sentences. The diners included doctors, computer programmers, and other highly accomplished professionals. Everyone—everyone!—said it was proper to use two spaces. Some people admitted to slipping sometimes and using a single space—but when writing something formal, they were always careful to use two. Others explained they mostly used a single space but felt guilty for violating the two-space “rule.” Still others said they used two spaces all the time, and they were thrilled to be so proper. When I pointed out that they were doing it wrong—that, in fact, the correct way to end a sentence is with a period followed by a single, proud, beautiful space—the table balked. “Who says two spaces is wrong?” they wanted to know.

Typographers, that’s who. The people who study and design the typewritten word decided long ago that we should use one space, not two, between sentences. That convention was not arrived at casually. James Felici, author of The Complete Manual of Typography, points out that the early history of type is one of inconsistent spacing. Hundreds of years ago some typesetters would end sentences with a double space, others would use a single space, and a few renegades would use three or four spaces. Inconsistency reigned in all facets of written communication; there were few conventions regarding spelling, punctuation, character design, and ways to add emphasis to type. But as typesetting became more widespread, its practitioners began to adopt best practices. Felici writes that typesetters in Europe began to settle on a single space around the early 20th century. America followed soon after.

Tom Lee:

I’m sorry, but no. It’s a lousy polemic. Here’s its structure:

  1. SEO-friendly statement of controversy
  2. Presentation of opinion A. Assertion that people who hold it are rubes.
  3. Presentation of opinion B. Invocation of authority.
  4. History lesson! Discussion of old technology; no mention of enforcement of author’s preferred orthodoxy by newer technology (e.g. HTML rendering multiple spaces as one)
  5. Rumination on beauty. Grecian urns, etc.

For now let’s ignore the bullying nature of this argument (it should be obvious to anyone that those of us who believe in two spaces are a minority that’s relentlessly and mercilessly persecuted by the bloodthirsty masses, both through jeremiads like Manjoo’s and through the technological eradication of our ability to express our beliefs). Which of the points in the above argument are rhetorically meaningful?

Only point 3 really carries any weight with me. I’ll take Manjoo’s word that all typographers like a single space between sentences. I’m actually pretty sympathetic to arguments from authority, being the big-state-loving paternalist that I am. But, with apologies to friends and colleagues of mine who care passionately about this stuff, I lost my patience with the typographically-obsessed community when they started trying to get me to pay attention to which sans-serif fonts were being used anachronistically on Mad Men.

I love you guys, but you’re crazy. On questions of aesthetic preference there’s no particular reason that normal people should listen to a bunch of geeky obsessives who spend orders of magnitude more time on these issues than average. It’s like how you probably shouldn’t listen to me when I tell you not to use .doc files or that you might want to consider a digital audio player with Ogg Vorbis support. I strongly believe those things, but even I know they’re pointless and arbitrary for everyone who doesn’t consider “Save As…” an opportunity for political action.

Nor should we assume that just because typographers believe earnestly in the single space that their belief is held entirely in good faith. They’re drunk on the awesome power of their proportional fonts, and sure of the cosmic import of the minuscule kerning decisions that it is their lonely duty to make. Of course they don’t want lowly typists exercising their opinions about letter spacing. Those people aren’t qualified to have opinions!
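Lee’s point 4 mentions that newer technology already enforces the single-space orthodoxy: HTML collapses runs of whitespace into a single space when rendered. That claim is easy to verify. Here is a minimal sketch in Python mimicking that collapsing behavior (the regex is an illustration of the default rendering rule, not the full CSS white-space specification):

```python
import re

def collapse_whitespace(text: str) -> str:
    """Roughly mimic how a browser renders runs of whitespace: any
    sequence of spaces, tabs, or newlines becomes a single space."""
    return re.sub(r"\s+", " ", text).strip()

# A one-spacer and a two-spacer produce identical rendered output.
print(collapse_whitespace("One space. Next sentence."))
print(collapse_whitespace("Two spaces.  Next sentence."))
```

In other words, on the web the two-spacers’ extra keystroke is silently erased before anyone sees it, which is part of Lee’s complaint about “technological eradication.”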

Shani O. Hilton:

I thought Manjoo’s argument was weak, for many of the reasons Tom mentions, but that doesn’t change facts. Here’s a little-known law of graphic design:

The number of people wishing to fit a document onto the same or fewer number of pages as a previous edition of said document, despite the new draft being longer than the previous edition, is directly proportional to the number of people who turn in said document to their graphic designer with double spaces after every period.

Okay, maybe I made that up. But real talk: Double spaces are bad.

Megan McArdle:

Let me just add: if you’re spending time worrying over whether my emails contain one or two spaces, you need to ask them to let you out of the asylum more often so you can pursue a more interesting hobby.  I double space after sentences because I learned to type on a manual typewriter, and it’s not worth the effort to retrain myself.  Even if typographers groan every time they open one of my missives.

Nicholas Jackson at The Atlantic

Paul Waldman at Tapped:

As Manjoo explains, there are still teachers out there infecting students’ minds with the idea that they should put two spaces after a period. Why? Because that’s the way they learned. And I did too, when I took a typing class in 1985. But now we have computers, and fonts that use proportional spacing, which makes two spaces after a period look wrong. Wrong, wrong, wrong.

We’re never going to maintain our global dominance if people keep doing this. You think that 10-year-old kid in Shanghai is being taught to put two spaces after a period? No way.


Filed under Go Meta