
Mr. Sulzberger, Tear Down This Wall

Jeremy W. Peters at NYT:

The New York Times rolled out a plan on Thursday to begin charging the most frequent users of its Web site $15 for a four-week subscription in a bet that readers will pay for news they have grown accustomed to getting free.

Beginning March 28, visitors to NYTimes.com will be able to read 20 articles a month without paying, a limit that company executives said was intended to draw in subscription revenue from the most loyal readers while not driving away the casual visitors who make up a vast majority of the site’s traffic.

Once readers click on their 21st article, they will have the option of buying one of three digital news packages — $15 every four weeks for access to the Web site and a mobile phone app, $20 for Web access and an iPad app or $35 for an all-access plan.

All subscribers who receive the paper through home delivery will have free and unlimited access across all Times digital platforms except, for now, e-readers like the Amazon Kindle and the Barnes & Noble Nook. Subscribers to The International Herald Tribune, which is The Times’s global edition, will also have free digital access.

“A few years ago it was almost an article of faith that people would not pay for the content they accessed via the Web,” Arthur Sulzberger Jr., chairman of The New York Times Company, said in his annual State of The Times remarks, which were delivered to employees Thursday morning.

Felix Salmon:

Rather than take full advantage of their ability to change the numbers over time, the NYT seems to have decided they’re going to launch at the kind of levels they want to see over the long term. Which is a bit weird. Instead, the NYT has sent out an email to its “loyal readers” that they’ll get “a special offer to save on our new digital subscriptions” come March 28. This seems upside-down to me: it’s the loyal readers who are most likely to pay premium rates for digital subscriptions, while everybody else is going to need a special offer to chivvy them along.

This paywall is anything but simple, with dozens of different variables for consumers to try to understand. Start with the price: the website is free, so long as you read fewer than 20 items per month, and so are the apps, so long as you confine yourself to the “Top News” section. You can also read articles for free by going in through a side door. Following links from Twitter or Facebook or Reuters.com should never be a problem, unless and until you try to navigate away from the item that was linked to.

Beyond that, $15 per four-week period gives you access to the website and also its smartphone app, while $20 gives you access to the website and also its iPad app. But if you want to read the NYT on both your smartphone and your iPad, you’ll need to buy both digital subscriptions separately, and pay an eye-popping $35 every four weeks. That’s $455 a year.

The message being sent here is weird: that access to the website is worth nothing. Mathematically, if A+B=$15, A+C=$20, and A+B+C=$35, then A=$0.
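Salmon’s algebra checks out; a quick sketch (the variable names are mine) of the standalone prices implied by the three bundles:

```python
# Let web, phone, ipad be the implied standalone prices (per four weeks) of
# website access, the smartphone app, and the iPad app:
#   web + phone        = 15
#   web + ipad         = 20
#   web + phone + ipad = 35
phone = 35 - 20   # subtract the $20 bundle from the all-access plan
ipad = 35 - 15    # subtract the $15 bundle from the all-access plan
web = 15 - phone  # what's left over for the website itself

print(web, phone, ipad)  # -> 0 15 20: the website is implicitly priced at $0
print(35 * 13)           # -> 455: thirteen four-week periods, i.e. one year
```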

Andrew Sullivan:

We remain parasitic on the NYT and other news sites; and I should add I regard the NYT website as the best news site in the world; without it, we would be lost. But like most parasites, we also perform a service for our hosts. We direct readers to content we think matters. So we add to the NYT’s traffic and readership.

But what makes this exception even more interesting is that, if I read it correctly, it almost privileges links from blogs and social media over more direct access. Which makes it a gift to the blogosphere. Anyway, that’s my first take: and it’s one of great relief. We all want to keep the NYT in business (well, almost all of us). But we also don’t want to see it disappear behind some Great NewsCorp-Style Paywall. It looks to me as if they have gotten the balance just about right.

MG Siegler at Tech Crunch:

There are a lot of interesting angles to the news this morning about The New York Times’ new paywall. Top news will remain free, a set number of articles for all users will remain free, there will be different pricing tiers for different devices, NYT is fine with giving Apple a 30 percent cut, etc, etc. But to me, the most interesting aspect is only mentioned briefly about halfway down the NYT announcement article: all those who come to the New York Times via Facebook or Twitter will be allowed to read for free. There will be no limit to this.

Up until now, we’ve seen paywall enthusiasts like The Wall Street Journal offer such loopholes. But they’ve done so via Google. It’s a trick that most web-savvy news consumers know. Is a WSJ article behind a paywall? Just Google the title of it. Click on the resulting link and boom, free access to the entire thing. No questions asked. This new NYT model is taking that idea and flipping it.

The Google loophole will still be in play — but only for five articles a day. It’s not clear how they’re going to monitor this (cookies? logins?), but let’s assume for now that somehow they’ll be able to in an effective way. For most readers, the five article limit will likely be more than enough. But that’s not the important thing. What’s interesting is that the NYT appears to be saying two things. First, this action says that spreading virally on social networks like Twitter and Facebook is more important to them than the resulting traffic from Google. And second, this is a strategic bet that they likely believe will result in the most vocal people on the web being less pissed off.

Cory Doctorow at Boing Boing:

Here are some predictions about the #nytpaywall:

1. No one will be able to figure out how it works. Quick: How many links did you follow to the NYT last month? I’ll bet you a testicle* that you can’t remember. And even if you could remember, could you tell me what proportion of them originated as a social media or search-engine link?

2. Further to that, people frequently visit the NYT without meaning to, just by following a shortened link. Oftentimes, these links go to stories you’ve already read (after all, you’ve already found someone else’s description of the story interesting enough to warrant a click, so odds are high that a second or even a third ambiguous description of the same piece might attract your click), but which may or may not be “billed” to your 20-freebies limit for the month

3. And this means that lots of people are going to greet the NYT paywall with eye-rolling and frustration: You stupid piece of technology, what do you mean I’ve seen 20 stories this month? This is exactly the wrong frame of mind to be in when confronted with a signup page (the correct frame of mind to be in on that page is, Huh, wow, I got tons of value from the Times this month. Of course I’m going to sign up!)

4. Which means that lots of people will take countermeasures to beat the #nytpaywall. The easiest of these, of course, will be to turn off cookies so that the Times’s site has no way to know how many pages you’ve seen this month

5. Of course, the NYT might respond by planting secret permacookies, using Flash cookies, browser detection, third-party beacons, or secret ex-Soviet vat-grown remote-sensing psychics. At the very minimum, the FTC will probably be unamused to learn that the Grey Lady is actively exploiting browser vulnerabilities (or, as the federal Computer Fraud and Abuse statute puts it, “exceeding authorized access” on a remote system — which carries a 20-year prison sentence, incidentally)

6. Even if some miracle of regulatory capture and courtroom ninjary puts them beyond legal repercussions for this, the major browser vendors will eventually patch these vulnerabilities

7. And even if that doesn’t work, someone clever will release one or more of: a browser redirection service that pipes links to nytimes.com through auto-generated tweets, creating valid Twitter referrers to Times stories that aren’t blocked by the paywall; or write a browser extension that sets “referer=twitter.com/$VALID_TWEET_GUID”, or some other clever measure that has probably already been posted to the comments below

8. The Times isn’t stupid. They’ll build all kinds of countermeasures to detect and thwart cookie-blocking, referer spoofing, and suchlike. These countermeasures will either be designed to err on the side of caution (in which case they will be easy to circumvent) or to err on the side of strictness — in which case they will dump an increasing number of innocent civilians into the “You’re a freeloader, pay up now” page, which is no way to convert a reader to a customer
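The metering logic these predictions are poking at can be sketched in a few lines. Everything below — the function name, the thresholds, the naive suffix-matching on the referrer — is my illustration of the announced rules, not the Times’s actual code, which was never public:

```python
from urllib.parse import urlparse

# Announced policy, as reported: 20 free articles/month, unlimited social
# referrals, and a 5-article daily cap on Google referrals.
FREE_ARTICLES_PER_MONTH = 20
GOOGLE_DAILY_CAP = 5
SOCIAL_REFERRERS = ("twitter.com", "facebook.com")

def allow_article(articles_read_this_month, referrer="", google_referrals_today=0):
    """Decide whether to serve an article to a non-subscriber."""
    host = urlparse(referrer).netloc
    # Social links bypass the meter entirely (naive suffix match; a real
    # system would need stricter hostname checks).
    if any(host.endswith(s) for s in SOCIAL_REFERRERS):
        return True
    # Google referrals get their own daily cap instead of the monthly meter.
    if host.endswith("google.com"):
        return google_referrals_today < GOOGLE_DAILY_CAP
    # Everyone else counts against the 20-article monthly meter -- a count
    # kept client-side in a cookie, hence (per prediction 4) trivially reset.
    return articles_read_this_month < FREE_ARTICLES_PER_MONTH
```

Note that every branch depends on data the client controls (the Referer header, the cookie-backed counter), which is exactly the attack surface predictions 4 through 8 enumerate.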

Yes, I was going to hate this paywall no matter what the NYT did. News is a commodity: as a prolific linker, I have lots of choice about where I link to my news, and a site that makes my readers shout at me about a nondeterministic paywall that unpredictably swats them away isn’t going to get those links. Leave out the hard news and you’ve got opinion, and there’s no shortage of free opinion online. Some of it is pretty good (and some of what the Times publishes as opinion is pretty bad).

Peter Kafka at All Things Digital:

The Times will put up its paywall in 11 days, on March 28th. It promises to comply with Apple’s subscription terms by making “1-click purchase available in the App Store by June 30 to ensure that readers can continue to access Times apps on Apple devices.”

And as previously announced, this isn’t a formal paywall. Or, at least, it’s a porous one.

Anyone can use the Times’ Web site to read up to 20 articles a month for free. And if you’ve surpassed your monthly limit, you’ll still be able to read Times articles if you’ve been sent there from referring sites like Facebook, Twitter or anywhere else on the Web. The Times says it will place a five-article-per-day limit on Google referrals, however; it’s currently the only search engine with that limit, Murphy says.

To spell that out: If you want to game the Times’ paywall, just use Microsoft’s Bing. For now, at least.


Filed under Mainstream, New Media

Arianna Told Me To Write This Blog Post

Arianna Huffington at The Huffington Post:

I’ve used this space to make all sorts of important HuffPost announcements: new sections, new additions to the HuffPost team, new HuffPost features and new apps. But none of them can hold a candle to what we are announcing today.

When Kenny Lerer and I launched The Huffington Post on May 9, 2005, we would have been hard-pressed to imagine this moment. The Huffington Post has already been growing at a prodigious rate. But my New Year’s resolution for 2011 was to take HuffPost to the next level — not just incrementally, but exponentially. With the help of our CEO, Eric Hippeau, and our president and head of sales, Greg Coleman, we’d been able to make the site profitable. Now was the time to take leaps.

At the first meeting of our senior team this year, I laid out the five areas on which I wanted us to double down: major expansion of local sections; the launch of international Huffington Post sections (beginning with HuffPost Brazil); more emphasis on the growing importance of service and giving back in our lives; much more original video; and additional sections that would fill in some of the gaps in what we are offering our readers, including cars, music, games, and underserved minority communities.

Around the same time, I got an email from Tim Armstrong (AOL Chairman and CEO), saying he had something he wanted to discuss with me, and asking when we could meet. We arranged to have lunch at my home in LA later that week. The day before the lunch, Tim emailed and asked if it would be okay if he brought Artie Minson, AOL’s CFO, with him. I told him of course and asked if there was anything they didn’t eat. “I’ll eat anything but mushrooms,” he said.

The next day, he and Artie arrived, and, before the first course was served — with an energy and enthusiasm I’d soon come to know is his default operating position — Tim said he wanted to buy The Huffington Post and put all of AOL’s content under a newly formed Huffington Post Media Group, with me as its president and editor-in-chief.

I flashed back to November 10, 2010. That was the day that I heard Tim speak at the Quadrangle conference in New York. He was part of a panel on “Digital Darwinism,” along with Michael Eisner and Adobe CEO Shantanu Narayen.

At some point during the discussion, while Tim was talking about his plans for turning AOL around, he said that the challenge lay in the fact that AOL had off-the-charts brand awareness, and off-the-charts user trust and loyalty, but almost no brand identity. I was immediately struck by his clear-eyed assessment of his company’s strengths and weaknesses, and his willingness to be so up front about them.

As HuffPost grew, Kenny and I had both been obsessed with what professor Clayton Christensen has famously called “the innovator’s dilemma.” In his book of the same name, Christensen explains how even very successful companies, with very capable personnel, often fail because they tend to stick too closely to the strategies that made them successful in the first place, leaving them vulnerable to changing conditions and new realities. They miss major opportunities because they are unwilling to disrupt their own game.

After that November panel, Tim and I chatted briefly and arranged to see each other the next day. At that meeting, we talked not just about what our two companies were doing, but about the larger trends we saw happening online and in our world. I laid out my vision for the expansion of The Huffington Post, and he laid out his vision for AOL. We were practically finishing each other’s sentences.

Two months later, we were having lunch in LA and Tim was demonstrating that he got the Innovator’s Dilemma and was willing to disrupt the present to, if I may borrow a phrase, “win the future.” (I guess that makes this AOL’s — and HuffPost’s — Sputnik Moment!)

There were many more meetings, back-and-forth emails, and phone calls about what our merger would mean for the two companies. Things moved very quickly. A term sheet was produced, due diligence began, and on Super Bowl Sunday the deal was signed. In fact, it was actually signed at the Super Bowl, where Tim was hosting a group of wounded vets from the Screamin’ Eagles. It was my first Super Bowl — an incredibly exciting backdrop that mirrored my excitement about the merger and the future ahead.

Jack Shafer at Slate:

I underestimated Arianna Huffington when she launched her Huffington Post in May 2005. I didn’t trash the site the way Nikki Finke did, though. Finke called Huffington the “Madonna of the mediapolitic world [who] has undergone one reinvention too many,” and slammed her site as a “humongously pre-hyped celebrity blog” that represented the “sort of failure that is simply unsurvivable.” And those were among Finke’s nicer comments.

Instead of critiquing Huffington’s debut copy, I speculated as to whether she was up to the job of “impresario.” In the scale of things, my write-up is more embarrassing today, now that Huffington has sold the Post to AOL for $315 million, than is Finke’s pissy take. Huffington has proved herself a first-rate entrepreneur, incubator of talent, and media visionary.

Felix Salmon:

My feeling, then, is that this deal is a good one for both sides. AOL gets something it desperately needs: a voice and a clear editorial vision. It’s smart, and bold, to put Arianna in charge of all AOL’s editorial content, since she is one of the precious few people who has managed to create a mass-market general-interest online publication which isn’t bland and which has an instantly identifiable personality. That’s a rare skill and one which AOL desperately needs to apply to its broad yet inchoate suite of websites.

As for HuffPo, it gets lots of money, great tech content from Engadget and TechCrunch, hugely valuable video-production abilities, a local infrastructure in Patch, lots of money, a public stock-market listing with which to make fill-in acquisitions and incentivize employees with options, a massive leg up in terms of reaching the older and more conservative Web 1.0 audience and did I mention the lots of money? Last year at SXSW I was talking about how ambitious New York entrepreneurs in the dot-com space have often done very well for themselves in the tech space, but have signally failed to engineer massive exits in the content space. With this sale, Jonah Peretti changes all that; his minority stake in HuffPo is probably worth more than the amount of money Jason Calacanis got when he sold Weblogs Inc to AOL.

And then, of course, there’s Arianna, who is now officially the Empress of the Internet with both power and her own self-made dynastic wealth. She’s already started raiding big names from mainstream media, like Howard Fineman and Tim O’Brien; expect that trend to accelerate now that she’s on a much firmer financial footing.

Paul Carr at TechCrunch:

We really have to stop being scooped by rivals on news affecting our own company.

Tonight, courtesy of a press release that our parent company sent to everyone but us, we learn that AOL has acquired the Huffington Post for $315 million. More interestingly, Arianna Huffington has been made Editor In Chief of all AOL content, including TechCrunch.

Now, no-one here has been more skeptical than me of AOL’s content strategy. I was reasonably scathing about that whole “tech town” bullshit and I was quick to opinion-smack Tim Armstrong in the face over his promise that “90% of AOL content will be SEO optimized” by March. Hell I’ve stood on stage – twice – on TC’s dime and described our overlords as “the place where start-ups come to die”.

And yet and yet, for once I find myself applauding Armstrong – and AOL as a whole – for pulling off a double whammy: a brilliant strategic acquisition at a logical price. As AOL’s resident inside-pissing-insider, I can’t tell you how frustrating that is. I can’t even bust out a Bebo joke.

An important note before I go on: I have no idea how any of this will affect TechCrunch. So far AOL has kept true to its promise not to interfere with our editorial and there’s no reason to suppose that will change under Huffington. That said, it would be idiotic to think that our parents’ content strategy – particularly the SEO stuff – won’t have annoying trickle-down consequences for all of us in the long term.

As I wrote the other week, I hate SEO. It’s bad for journalism as it disincentivises reporters from breaking new stories, and rewards them for rehashing existing ones. And it’s bad for everything else because, well, it’s garbage. But when discussing the SEO phenomenon privately, I’ve always cited the Huffington Post as the exception that proves the rule.

Arianna Huffington’s genius is to churn out enough SEO crap to bring in the traffic and then to use the resulting advertising revenue – and her personal influence – to employ top class reporters and commentators to drag the quality average back up. And somehow it works. In the past six months journostars like Howard Fineman, Timothy L. O’Brien and Peter Goodman have all been added to the HuffPo’s swelling masthead, and rather than watering down the site’s political voice, it has stayed true to its core beliefs. Such is the benefit of being bank-rolled by a rich liberal who doesn’t give a shit.

Ann Althouse:

What difference does it make? AOL as a brand meant something to me in the 1990s, but not now. Who cares whether AOL retains a semblance of political neutrality? In any case, mainstream media always feels pretty liberal, so why would anyone really notice? Now, that quote is from the NYT, so… think about it. The NYT would like to be the big news site that looks neutral (but satisfies liberals). HuffPo is the raging competition, which needs to be put in its place.

Alexis Madrigal at the Atlantic

Erick Schonfeld at TechCrunch

Kevin Drum:

Last night I saw a tweet saying that AOL was going to buy the Huffington Post for $31.5 million. Yowza, I thought. That’s a pretty rich valuation. Maybe 20x forward earnings? Who knows?

But no! AOL actually bought HuffPo for $315 million. I mentally put in a decimal place where there wasn’t one. I don’t even know what to think about this. It sounds completely crazy to me. The odds of this being a good deal for AOL stockholders seem astronomical.

Still, maybe I’m the one who’s crazy. After all, I haven’t paid a lot of attention to either HuffPo or AOL lately. I’m a huge skeptic of synergy arguments of all kinds, but maybe Arianna is right when she says that in this deal, 1+1=11.

Peter Kafka at Media Memo:

So maybe AOL + HuffPo won’t equal 11. And maybe 10x Huffington Post’s reported 2010 revenue is a very pre-Lehman multiple. But the broad strokes here make sense to me:

AOL is pushing its workers very hard to make more content it can sell. HuffPo is a content-making machine:

Huffington Post still has the reputation as a left-leaning political site written by Arianna Huffington’s celebrity pals. In reality, it is most concerned with attracting eyeballs any way it can. Sometimes it’s with well-regarded investigative journalism, and much more often it’s via very aggressive, very clever aggregation. And sometimes it’s by simply paying very, very close attention to what Google wants, which leads to stories like “What Time Does The Super Bowl Start?”

However they’ve done it, it’s worked–much more efficiently than AOL, which is headed in that direction as well. AOL reaches about 112 million people in the U.S. every month with a staff of 5,000. The Huffington Post, which employed about 200 people prior to the deal, gets to about 26 million.*

AOL can start selling this stuff immediately:

HuffPo reportedly generated around $30 million in revenue last year, but that was done using a relatively small staff that sales chief Greg Coleman had just started building. AOL’s much bigger sales group, which has just about finished its lengthy reorg, should be able to boost that performance immediately.

AOL can afford it:

Tim Armstrong’s company ended 2010 with $725 million in cash, much of which it generated by selling off old assets. This seems like a relatively easy check to write and one that shouldn’t involve a lot of overlapping staff–AOL figures it will save $20 million annually in cost overlaps, but that it will spend about $20 million this year on restructuring charges. HuffPo is about four percent of AOL’s size, and several of its top executives are already stepping aside. (This is the second time in two years that sales boss Greg Coleman has been moved out of a job by Tim Armstrong.) The biggest risk here will be in the way that Huffington, who is now editor in chief for all of AOL’s edit staff, gets along with her new employees. On the other hand, morale is low enough at many AOL sites that it will be hard to make things worse.

AOL Gets a Really Big Brand:

There’s some downside risk to attaching Arianna Huffington’s name to a big, mainstream media brand, as her politics and/or persona might scare off some readers and/or advertisers. But two years after Armstrong arrived from Google, AOL still doesn’t have a definable identity, other than “the Web site your parents might still pay for even though there’s no reason to do so.” Being known as “the guys who own Huffington Post” is infinitely better than that.

HuffPo’s “pro” list is much shorter, but only because there’s not much to think about for them: Huffington, co-founder Kenneth Lerer and their backers get a nice return on the five years and $37 million they put into the company. And those who stay on get to leverage the benefits of a much larger acquirer–access to more eyeballs and more advertisers. Easy enough to understand.

Dan Lyons at The Daily Beast:

No doubt Hippeau and Lerer and Huffington were drinking champagne last night, but the truth is, this deal is not a victory for either side. It’s a slow-motion train wreck and will end in disaster.

Listen to Nick Denton, who runs Gawker, which now becomes the biggest independent Web-based news outlet. “I’m disappointed in the Huffington Post. I thought Arianna Huffington and Kenny Lerer were reinventing news, rather than simply flipping to a flailing conglomerate,” he told me.

Denton insists he has no intention of ever selling Gawker, and he seems not-so-secretly pleased to see his opponents cashing out: “AOL has gathered so many of our rivals — Huffington Post, Engadget, TechCrunch — in one place. The question: Is this a fearsome Internet conglomerate or simply a roach motel for once lively websites?”

One big problem with the deal is that Arianna Huffington now runs editorial for AOL properties, which include tech sites Engadget and TechCrunch. Those sites are both accustomed to being free-wheeling, fiercely independent and fiercely competitive—so competitive, in fact, that recently they’ve been battling with each other.

Michael Arrington, who runs TechCrunch and just sold it to AOL a few months ago, is an abrasive, big-ego, sometimes obnoxious guy. He’s a friend of mine, so I mean this in the best possible way. But I can’t imagine him working for Arianna.

The other, bigger problem is AOL itself. AOL touts itself as a media company, but as Ken Auletta reported in The New Yorker recently, most of what AOL publishes is junk, and 80 percent of its profits come from a rather seedy little business—charging subscription fees from longtime users who don’t realize that they no longer need to pay for AOL service, and could be getting it free.

The other problem is that AOL’s chief executive, Tim Armstrong, is a sales guy. He ran sales at Google before he came to AOL in 2009. Nothing wrong with sales guys, except when they start telling people how to do journalism. Sales guys deal in numbers. But journalism is about words. Sales guys live in a world where everything can be measured and analyzed. Their version of journalism is to focus on things like “keyword density” and search-engine optimization.

Journalists live in a world of story-telling, and where the value of a story, its power to resonate, is something they know by instinct. Some people have better instincts than others. Some people can improve their instincts over time. The other part of storytelling is not the material itself but how you present it. Some can spin a better tale out of the same material than others.

But no great storyteller has ever been someone who started out by thinking about traffic numbers and search engine keywords.


Filed under New Media

So Does It Come Out Every Day?

The Daily:

New York, NY, February 2, 2011 – Today Rupert Murdoch, Chairman and Chief Executive Officer of News Corporation, unveiled The Daily — the industry’s first national daily news publication created from the ground up for iPad.

“New times demand new journalism,” said Mr. Murdoch. “So we built The Daily completely from scratch — on the most innovative device to come about in my time — the iPad.”

“The magic of great newspapers — and great blogs — lies in their serendipity and surprise, and the touch of a good editor,” continued Mr. Murdoch. “We’re going to bring that magic to The Daily — to inform people, to make them think, to help them engage in the great issues of the day. And as we continue to improve and evolve, we are going to use the best in new technology to push the boundaries of reporting.”

The Daily’s unique mix of text, photography, audio, video, information graphics, touch interactivity and real-time data and social feeds provides its editors with the ability to decide not only which stories are most important — but also the best format to deliver these stories to their readers.

John Hudson at The Atlantic with a round-up

Erick Schonfeld at TechCrunch:

A new edition will come out every day, with updates throughout the day. It will feature a carousel navigation that looks like Cover Flow, and include video and 360-degree photographs.

Since there are no trucks and no printing costs, The Daily will cost 14 cents a day or about $1 a week. The first two weeks are free, thanks to a sponsorship by Verizon. You will be able to download it live at noon ET.

Murdoch also revealed that the total cost to get the Daily up and running—the technology, the staff, everything—has been $30 million, and that operating costs are half a million dollars a week.

I asked Murdoch why he thinks it is better to charge a subscription versus gaining a larger audience via free downloads and selling that larger audience to advertisers, who are lining up anyway because their ads look so much better in an iPad app. “I think they will pay much less per thousand if it was free,” says Murdoch. “We feel this is better for advertisers and will draw a better class of advertisers at a better rate.”

Jesus Diaz at Gizmodo:

Of course, he seems really adamant about his project. His letter is full of Cupertinian hyperbole: “this pioneering digital venture, fully championed by Steve Jobs and the rest of his team at Apple, establishes an entirely new category of delivery and consumption.” An entire new category. It must be really magical. This fair and balanced quote, however, makes me think The Daily may be just another glorified reader with lots of video thrown in: “I’m convinced that what they’ve created is the most immersive and unique experience available – one that will resonate with our audiences everywhere and change the way news is viewed.”

Peter Kafka at Media Memo:

The Daily’s formal debut is in a few hours, at which point we’ll have no shortage of pro/con opinions about News Corp.’s new iPad newspaper. But until then, here are the reasons the Daily won’t work, followed by the reasons it will. They’re both from the same guy–Stifel Nicolaus analyst Jordan Rohan. From his note published yesterday:

CONS:

1. Consumer Acceptance Could Take Time: Nobody really knows the future of the iPad daily, and the official launch party is not “where the rubber meets the road” in terms of understanding consumer acceptance of such a new concept.

2. Hype or Reality?: Hype does not necessarily translate into market share, revenue, or cash flow.

3. Control: Apple tends to control its environment so tightly that there may be clashes down the road with apps offered by Yahoo!, Google, Facebook, AOL, Amazon, and a host of other Internet companies. This could reduce overall profit potential for iPad publishers.

4. Understanding the revenue model will be key. Online ad networks and other intermediaries could be left on the outside, looking in, if the iPad remains a premium offering with high CPMs. The subscription model is somewhat irrelevant unless it scales to support a vibrant advertising environment. We will have to wait and see on that key point.

PROS:

1. Product Differentiation: News Corp could marshal the resources of its newspaper, cable television, studio, and Internet divisions to differentiate the product from most other companies.

2. Apple is a powerful ally. The recent track record of product innovation and commercialization at Apple is unmatched. If Apple is willing to throw its weight behind this initiative, along with News Corp, then the chances of success are high.

3. Playing Offense: If News Corp can make an iPad daily work, then other media companies will begin to play offense as well. And that is generally a good thing for innovation, and ultimately for advertisers and marketers alike.

4. Makes More Sense than Wired for iPad: Mid last year, we attended a pre-launch event for Wired magazine’s iPad initiative, which Condé Nast marketed at a surprising $5 per copy. The product was beautiful, but results were mixed at best. And it was a monthly, not a daily, which implied that the frequency of visitation was much lower.

Rohan, by the way, is ultimately bullish on the Daily, and he was that way before he got a look at the thing at Rupert Murdoch’s apartment last night. Now he’s very, very bullish, but he’s been embargoed from talking about it until noon today.

Darrell Etherington at GigaOm:

Unlike many existing print newspaper and magazine conversion apps, The Daily seems to feature a lot of clickable and interactive elements. Web links will bring up pages in a built-in browser, and Twitter feeds are accessible from within the app. There’s also an in-app text and audio commenting system for greater reader interaction. The app will also be able to pull in breaking news using Twitter and other sources, so that it stays fresh throughout the day without undergoing the kind of massively frequent overhaul you see on blogs. It’ll be interesting to see how The Daily strikes this balance.

No back-issues will exist at launch, and users instead will have to save articles for later from within the app or retrieve them on the web via HTML. Plans for improved access to older content are in the works, but won’t be included at launch.

At launch today, The Daily will be available only to customers shopping in the U.S. store, and will be free for the first two weeks. According to a leaked official memo published by Gizmodo (which was completely accurate regarding other details), News Corp. is planning to bring The Daily to international markets (and other tablets) in the coming months.

Apple VP of Internet Services Eddy Cue announced the inclusion of new in-app recurring subscription billing with “one click,” but didn’t offer any further details. Cue noted that an upcoming Apple announcement (“soon” was the only timeline hinted at) would detail this new feature further, including implementation plans among other publishers.

Colby Hall at Mediaite:

There is no question that the partnership between Apple and News Corp. is a big story worth covering, as it received a lot of deserved attention months ago when it was announced. And yes, Rupert Murdoch is arguably the single most powerful media mogul (best evidenced by his place on the Power Grid); his enthusiasm for, and embrace of, a new media platform (and the $30 million poured into its development) is a compelling and relevant story.

But the story unfolding in Egypt right now could not be more compelling, since it appears that the American ally (with huge strategic influence on the U.S. economy) is on the brink of complete and total destabilization. Ironically, the Murdoch-led press conference was introduced by Fox News’ Neil Cavuto, an individual who has repeatedly reported on the relevance of the Egyptian uprising to the price of oil. The decision to go with The Daily press event over the revolution in Egypt seems odd at best.

Obviously, other news networks continue to air short, fluff pieces in between their Egypt coverage, and if Fox had relegated this to such a segment, clearly disclosing the relationship, then they’d be much less open to criticism. But this was neither short nor fluff.

In many ways this feels similar to Sunday night’s programming decision at MSNBC to air reruns of their Lockup series, while Fox News and CNN covered Egypt live. As we reported earlier, MSNBC was rewarded by getting the highest ratings of the night!

Clearly this event was planned well in advance of the upheaval in Egypt, and when two giant corporations like Apple and News Corp. partner, it is big news (particularly with regard to the future of media and news). But Fox News’ decision to forgo real news coverage of Egypt in favor of promoting a new commercial information platform (from which it hopes to profit) seems, at best, a perfectly ironic example of the state of media today.


Filed under Mainstream, Technology

Shall I Compare Thee To A Snake, A Gorilla, A Jungle, Bananas, Sex…

Uri Friedman at The Atlantic with a round-up.

Paul Kedrosky:

Over the weekend I tried to buy a new dishwasher. Being the fine net-friendly fellow that I am, I began Google-ing for information. And Google-ing. And Google-ing. As I tweeted frustratedly at the end of the failed exercise, “To a first approximation, the entire web is spam when it comes to appliance reviews”.

This is, of course, merely a personal example of the drive-by damage done by keyword-driven content — material created to be consumed like info-krill by Google’s algorithms. Find some popular keywords that lead to traffic and transactions, wrap some anodyne and regularly-changing content around the keywords so Google doesn’t kick you out of search results, and watch the dollars roll in as Google steers to you life-support systems connected to wallets, i.e., idiot humans.

Google has become a snake that too readily consumes its own keyword tail. Identify some words that show up in profitable searches — from appliances, to mesothelioma suits, to kayak lessons — churn out content cheaply and regularly, and you’re done. On the web, no-one knows you’re a content-grinder.

Charles Arthur at The Guardian:

The reason why this has happened is obvious: Google is the 900-pound gorilla of search, with around 90% of the market (excluding China and Russia), and there’s an entire industry which has grown up specifically around tickling the gorilla to make it happy and enrich the ticklers. I’ve not come across anyone who describes their job as “Bing results optimisation”, nor who puts that at the top of their business CV. Well, I’m sure there are people inside Microsoft whose job title is exactly that. But not outside it.

There are two lines of thought on what happens next.

1) Google comes back from the Christmas break newly determined to fix those damned scraping sites that don’t originate content, because it says in its own webmaster guidelines that “Google will take action against domains that try to rank more highly by just showing scraped or other auto-generated pages that don’t add any value to users.”

The only value those scrapers add, in fact, is to Google, because they display tons of AdSense ads. (Well, you can make a fair bet that they aren’t Bing’s equivalent.)

Wait – the scrapers that dominate the first search page, the place from which 89% of clicks come (only 11% of clicks come from the last 990 results out of the first thousand, or at least did in 2006, a number that has probably only shifted down since then), all benefit Google financially, even while it sees market share improvements? That’s not quite the disincentive one might have hoped would make Google act.

2) People start not using Google, because its search is damn well broken and becoming more broken for stuff you care about by the day. This could happen. The question is whether it would be visible enough – that is, whether enough people would do it – that it would show up on Google’s radar and be made a priority.

Over at Hacker News, the suggestions in the comments echo the idea that Google’s search really isn’t cutting the mustard any more (“vertical search” is the new watchword). Which means that really, Google does need to implement method (1) above. It might not notice if a few geeks abandon it – but once the idea really gets hold (as it will through the links they offer and comments they drop) that Google’s search is broken, then the rout begins.

I haven’t been able to get a comment from Google on this, though I’m sure it would run something along the lines of “Google makes every effort to make its search results the best and takes seriously the issues raised here.”

Update: Google responded to this article: “Google works hard to preserve the quality of our index and we’re continuing to make improvements to this. Sites that abuse our quality guidelines or prove to be spam are removed from our index as fast as possible”. (For clarification, I didn’t initially contact Google as it was a public holiday when I wrote the original article. Matt Cutts did not respond to Twitter contact as he is on holiday, Google says.)

It would be crazy not to. The question is whether it really can make a difference.

Vivek Wadhwa at Tech Crunch:

This semester, my students at the School of Information at UC-Berkeley researched the VC system from the perspective of company founders. We prepared a detailed survey; randomly selected 500 companies from a venture database; and set out to contact the founders. Thanks to Reid Hoffman, we were able to get premium access to LinkedIn—which was very helpful and provided a wealth of information. But some of the founders didn’t have LinkedIn accounts, and others didn’t respond to our LinkedIn “inmails”. So I instructed my students to use Google searches to research each founder’s work history, by year, and to track him or her down in that way.

But it turns out that you can’t easily do such searches in Google any more. Google has become a jungle: a tropical paradise for spammers and marketers. Almost every search takes you to websites that want you to click on links that make them money, or to sponsored sites that make Google money. There’s no way to do a meaningful chronological search.

We ended up using instead a web-search tool called Blekko. It’s a new technology and is far from perfect; but it is innovative and fills the vacuum of competition with Google (and Bing).

Blekko was founded in 2007 by Rich Skrenta, Tom Annau, Mike Markson, and a bunch of former Google and Yahoo engineers. Previously, Skrenta had built Topix and what became Netscape’s Open Directory Project. For Blekko, his team has created a new distributed computing platform to crawl the web and create search indices. Blekko is backed by notable angels, including Ron Conway, Marc Andreessen, Jeff Clavier, and Mike Maples. It has received a total of $24 million in venture funding, including $14 million from U.S. Venture Partners and CMEA Capital.

In addition to providing regular search capabilities like Google’s, Blekko allows you to define what it calls “slashtags” and filter the information you retrieve according to your own criteria. Slashtags are mostly human-curated sets of websites built around a specific topic, such as health, finance, sports, tech, and colleges. So if you are looking for information about swine flu, you can add “/health” to your query and search only the top 70 or so relevant health sites rather than tens of thousands of spam sites. Blekko crowdsources the editorial judgment for what should and should not be in a slashtag, as Wikipedia does. One Blekko user created a slashtag for 2,100 college websites, so anyone can do a targeted search for all the schools offering courses in molecular biology, for example. Most searches are like this—they can be restricted to a few thousand relevant sites. The results become much more relevant and trustworthy when you can filter out all the garbage.

The feature that I’ve found most useful is the ability to order search results. If you are doing searches by date, as my students were, Blekko allows you to add the slashtag “/date” to the end of your query and retrieve information in chronological order. Google does provide an option to search within a date range, but those are the dates when a page was indexed rather than created, which makes the results practically useless. Blekko makes an effort to index each page by the date on which it was actually created (by analyzing other information embedded in its HTML). So if I want to search for articles that mention my name, I can do a regular search; sort the results chronologically; limit them to tech blog sites, or to any blog sites for a particular year; and perhaps find any references related to the subject of economics. Try doing any of this in Google or Bing.
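As a rough sketch of how slashtag filtering and “/date” ordering might work conceptually (a toy illustration with invented sites, invented data, and a hypothetical `search` helper, not Blekko’s actual index or API):

```python
from datetime import date

# Hypothetical mini-index; each entry records the page's *creation* date
# (as if parsed from its HTML), not the date it was crawled.
INDEX = [
    {"url": "https://techblog.example/wadhwa-on-vc", "site": "techblog.example",
     "created": date(2010, 12, 20), "text": "Vivek Wadhwa on the VC system"},
    {"url": "https://spamfarm.example/best-vc-deals", "site": "spamfarm.example",
     "created": date(2010, 12, 28), "text": "best vc deals click here"},
    {"url": "https://econblog.example/wadhwa-economics", "site": "econblog.example",
     "created": date(2009, 5, 2), "text": "Wadhwa on economics"},
]

# Human-curated slashtags: each maps a topic to an allow-list of sites.
SLASHTAGS = {
    "tech": {"techblog.example"},
    "blogs": {"techblog.example", "econblog.example"},
}

def search(query, slashtags=(), date_sort=False):
    """Keyword match, restricted to curated sites; optionally newest-first
    by creation date (the "/date" idea)."""
    results = [e for e in INDEX if query.lower() in e["text"].lower()]
    for tag in slashtags:
        results = [e for e in results if e["site"] in SLASHTAGS[tag]]
    if date_sort:
        results.sort(key=lambda e: e["created"], reverse=True)
    return [e["url"] for e in results]

# "wadhwa /blogs /date": curated blog sites only, newest first;
# the spam farm never appears because no slashtag includes it.
print(search("wadhwa", slashtags=["blogs"], date_sort=True))
```

The point of the curation step is that spam never competes at all: ranking happens only within the human-vetted allow-list, and sorting uses the page’s own creation date rather than a crawl timestamp.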

Anil Dash:

Noticing a pattern here?

Paul Kedrosky, Dishwashers, and How Google Eats Its Own Tail:

Google has become a snake that too readily consumes its own keyword tail. Identify some words that show up in profitable searches — from appliances, to mesothelioma suits, to kayak lessons — churn out content cheaply and regularly, and you’re done. On the web, no-one knows you’re a content-grinder.

The result, however, is awful. Pages and pages of Google results that are just, for practical purposes, advertisements in the loose guise of articles, original or re-purposed. It hearkens back to the dark days of 1999, before Google arrived, when search had become largely useless, with results completely overwhelmed by spam and info-clutter.

Alan Patrick, On the increasing uselessness of Google:

The lead up to the Christmas and New Year holidays required researching a number of consumer goods to buy, which of course meant using Google to search for them and ratings reviews thereof. But this year it really hit home just how badly Google’s systems have been spammed, as typically anything on Page 1 of the search results was some form of SEO spam – most typically a site that doesn’t actually sell you anything, just points to other sites (often doing the same thing) while slipping you some Ads (no doubt sold as “relevant”).

Google is like a monoculture, and thus parasites have a major impact once they have adapted to it – especially if Google has “lost the war”. If search were more heterogeneous, spamsites would find it more costly to scam every site. That is a very interesting argument against Google’s level of market dominance.

And finally, Jeff Atwood, Trouble in the House of Google:

Throughout my investigation I had nagging doubts that we were seeing serious cracks in the algorithmic search foundations of the house that Google built. But I was afraid to write an article about it for fear I’d be claimed an incompetent kook. I wasn’t comfortable sharing that opinion widely, because we might be doing something obviously wrong. Which we tend to do frequently and often. Gravity can’t be wrong. We’re just clumsy … right?

I can’t help noticing that we’re not the only site to have serious problems with Google search results in the last few months. In fact, the drum beat of deteriorating Google search quality has been practically deafening of late.

From there, Jeff links to several more examples, including the ones I mentioned above. As Alan alludes to in his post, the threat here is that Google has become a monoculture, a threat I’ve written about many times.

Felix Salmon:

It turns out that the banana we all know and love — the Cavendish — is actually the second type of banana grown in enormous quantities and exported across Europe and North America. The first was the Gros Michel, which was wiped out by Tropical Race One; you might be saddened to hear that “to those who knew the Gros Michel the flavor of the Cavendish was lamentably bland.” Indeed, Chiquita was so sure that Americans would never switch to the Cavendish that they stuck with the Gros Michel for far too long, and lost dominance of the industry to Dole.

In both cases, the fact that the same species of banana is grown and eaten everywhere constitutes a serious tail risk, even if today’s desperate attempts to genetically modify a disease-resistant Cavendish bear fruit:

A new Cavendish banana still didn’t seem like a panacea. The cultivar may dominate the world’s banana export market, but, it turns out, eighty-seven per cent of bananas are eaten locally. In Africa and Asia, villagers grow such heterogeneous mixes in their back yards that no one disease can imperil them. Tropical Race Four, scientists now theorize, has existed in the soil for thousands of years. Banana companies needed only to enter Asia, as they did twenty years ago, and plant uniform fields of Cavendish in order to unleash the blight. A disease-resistant Cavendish would still mean a commercial monoculture, and who’s to say that one day Tropical Race Five won’t show up?

This is exactly what I was talking about a year ago, in my post about Dan Barber, world hunger, and locavorism, when I talked about how monocultures are naturally prone to disastrous outbreaks of disease, and how a much more heterogeneous system of eating a variety of locally-grown foods is much more robust and equally capable of feeding the planet.

[…]

The problems with monoculture aren’t purely agricultural, either. Anil Dash has a post up today about the decline of Google search quality, and diagnosing the problem as being that “Google has become a monoculture”; Alan Patrick quotes a commenter at Hacker News as saying that if search were more heterogeneous, spamsites would find it more costly to scam every site.

I’m not completely convinced that seeing large numbers of SEO sites atop search results for consumer goods is entirely a function of the fact that Google is a monoculture. My guess is that in fact what we’re seeing is simply the result of enormous numbers of SEO sites, all using slightly different methods of trying to game the Google algorithm. Even if only a small percentage of those SEO sites succeed, and even if they only succeed briefly, the result is still a first page of Google results dominated by SEO spam — a lose-lose proposition for everybody, but one which wouldn’t be solved by having heterogeneous algorithms: they would all simply have different SEO sites atop their various search-result pages.

But maybe if Google wasn’t a monoculture, there wouldn’t be quite as many SEO sites all trying to hit the jackpot of, however briefly, landing atop the Google search results. In general, monoculture is a bad and brittle thing — and that goes for search as much as it goes for bananas.
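The monoculture argument running through these posts, that one dominant ranking function is cheaper to game than many different ones, can be sketched with a toy model (every weight, page, and ranking function below is invented purely for illustration, not drawn from any real engine):

```python
# Toy model: a page's score is a weighted sum of three signals
# (keyword density, inbound links, content quality). A spammer tunes a
# page to the signals one dominant ranker rewards.

def make_ranker(weights):
    return lambda page: sum(w * s for w, s in zip(weights, page))

honest_page = (0.3, 0.5, 0.9)  # modest SEO, real content
spam_page   = (1.0, 0.5, 0.0)  # heavy keyword/link gaming, no content

# Monoculture: five engines sharing one ranking function the spammer knows.
mono = [make_ranker((0.5, 0.5, 0.0))] * 5

# Heterogeneous market: five engines weighting the signals differently.
hetero = [make_ranker(w) for w in
          [(0.9, 0.1, 0.3), (0.1, 0.8, 0.4), (0.2, 0.2, 0.9),
           (0.6, 0.3, 0.5), (0.1, 0.1, 1.0)]]

def spam_wins(engines):
    # Number of engines that rank the spam page above the honest one.
    return sum(r(spam_page) > r(honest_page) for r in engines)

print("monoculture:  ", spam_wins(mono), "of 5 engines fooled")
print("heterogeneous:", spam_wins(hetero), "of 5 engines fooled")
```

Under these made-up weights the exploit transfers to every identical engine but to only one of the five diverse ones, which is the Hacker News commenter’s point: spam tuned to a monoculture is cheap, while scamming every different ranker is costly. Salmon’s rejoinder still stands, though: each diverse engine could simply attract its own tuned spam.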

Brad DeLong

Paul Krugman:

Brad DeLong takes us to two articles on trouble with Google: basically, scammers and spammers are doing their best to game the search engine, and in the process making it less useful to the rest of us. And people are turning to other search engines that are less affected, precisely because they’re less pervasive and the scammers and spammers haven’t adapted to them.

This makes me think of sex.

If you follow evolutionary theory, you know that one big question is why sexual reproduction evolved — and why it persists, given the substantial costs involved. Why doesn’t nature just engage in cloning?

And the most persuasive answer, as I understand it, is defense against parasites. If each generation of an organism looks exactly like the last, parasites can steadily evolve to bypass the organism’s defenses — which is why yes, we’ll have no bananas once the fungus spreads to cloned plantations around the world. But scrambling the genes each generation makes the parasites’ job harder.

So the trouble with Google is that it’s a huge target, to which human parasites — scammers and spammers — are adapting.

I’m not quite sure what search-engine sex would involve. But Google apparently needs some.

Matthew Elshaw:

And that’s not all, there are a large number of other posts which share the same thoughts on Google’s declining search quality.

While the major problems with Google’s search quality appear to be the rise of content farms and review sites, some posts also mention a number of other grey hat SEO tactics like link buying and doorway domains that are still working for some sites.

With the number of posts on this topic, I don’t think it will be long before a Google representative steps in to clear the air. In the mean time, what do you think about Google’s search results? Have you seen a decline in quality in recent months?


Filed under Technology

Get The Net!

Sara Jerome at The Hill:

After nearly a decade of battle, the Federal Communications Commission approved contentious net-neutrality regulations on Tuesday over the strong objections of two Republican commissioners.

The vote marks the first time the agency has created formal rules for Internet lines, fulfilling an Obama campaign promise to prevent phone and cable companies from exerting too much control over the Internet.

“As we stand here now, the freedom and openness of the Internet are unprotected … That will change once we vote to approve this strong and balanced order,” FCC Chairman Julius Genachowski said at a commission meeting on Tuesday.

Tuesday’s vote closes a long chapter for Genachowski, after he sped out of the gates last year promising strong net-neutrality protections.

He quickly drew the wrath of the telecommunications industry and the skepticism of both parties in Congress, while frustrating consumer groups as his promises became mired in delay.

Ryan Kim at Gigaom:

The Open Internet Coalition, which includes Google, Skype and other Internet companies, said the order addresses some important issues in protecting an open Internet and providing some stable rules for the Internet ecosystem. But Markham Erickson, executive director of the OIC, said the order still does not go far enough in protecting the wireless Internet, which has great potential for consumers, innovators and the economy.

“The Commission should move to apply the same rules of the road to the entire Internet moving forward,” Erickson said in a statement. “We will continue to monitor the progress under this rule and work to ensure the FCC fulfills its responsibility to protect consumers’, small businesses’ and nonprofits’ ability to fully access and enjoy the benefits of the open Internet.”

Christopher Libertelli, Skype’s senior director of government & regulatory affairs said the Internet communications company is generally pleased with the trade-offs in the FCC’s rules.

“On balance, this decision advances the goal of keeping the Internet an open and unencumbered medium for Skype users. Specifically, we support the Commission’s decision that it will not tolerate wireless carriers who arbitrarily block Skype on mobile devices. This decision protects a consumer’s entitlement to use Skype on their mobile devices and we look forward to delivering further innovation in this area.”

Free Press, a media advocacy group, called the new rules a squandered opportunity that was heavily influenced by the Internet service providers the FCC should be regulating. Rather than protect an open Internet, the new order will enable discrimination for the first time, said Free Press Managing Director Craig Aaron.

“These rules don’t do enough to stop the phone and cable companies from dividing the Internet into fast and slow lanes, and they fail to protect wireless users from discrimination. No longer can you get to the same Internet via your mobile device as you can via your laptop. The rules pave the way for AT&T to block your access to third-party applications and to require you to use its own preferred applications.”

Mike Wendy, director of MediaFreedom, a market-oriented media organization said the new rules were unwarranted regulation of the Internet, the result of an over-reaching agency. He said the order could undermine U.S. competitiveness if not overturned in the courts or by Congress.

“If not overturned… these new regulations will harm the roll out of Internet infrastructure and services. Moreover, they will take America backwards at a time when our economy needs every advantage it can get to provide jobs for Americans, and compete globally,” Wendy said in a blog post.

Alexia Tsotsis at Tech Crunch:

What was actually voted on today has still yet to be published, but according to reports it lays out two different frameworks for fixed broadband and mobile broadband traffic. In both cases carriers like Comcast or Verizon will need to provide transparency to customers and will be prohibited from blocking competing services such as Google Voice or Skype.

The discrepancy between the way the two services are handled, and the precise meaning of “reasonable network management practices,” is what has the opposition in a huff. Initial reports describe the regulations as explicitly forbidding providers from accepting payment for unreasonable traffic prioritization in the case of fixed broadband, while offering no such protections in the case of mobile broadband.

If today’s vote has succeeded in anything, it is in creating debate as to whether or not the FCC has ultimate authority to regulate Internet practices. Republicans have already started to make noise about blocking the regulations when a more Republican Congress takes over in January. McDowell has also hinted at potential blocks from the courts: “the F.C.C. has provocatively chartered a collision course with the legislative branch.”

This is not without precedent: A federal appeals court ruling against the FCC in April quashed the FCC’s authority as it attempted to enforce net neutrality principles against Comcast for discriminating against file sharing.

Nilay Patel at Engadget

Peter Suderman at Reason:

Genachowski’s remarks portrayed the rules as a moderate middle ground between the extremes. It was a decision driven not by ideology but the desire to “protect basic Internet values.” If it’s a middle ground, it’s a legally dubious one. Earlier this year, a federal court ruled that the FCC had no Congressionally granted authority to regulate network management. Congress hasn’t updated the agency’s authority over the Net since then, but the FCC is now saying that, well, it has the authority anyway. Genachowski’s team has come up with a different legal justification, and they’re betting that this time around they can convince a judge to buy it.

Still, Genachowski’s portrayal of the order may be half right: The FCC’s move on net neutrality is not really about ideology. It’s about authority: He’s not so much protecting values as expanding the FCC’s regulatory reach. According to Genachowski’s summary remarks, the new rules call for a prohibition on “unreasonable discrimination” by Internet Service Providers—with the FCC’s regulators, natch, in charge of determining what counts as unreasonable. In theory, this avoids the pitfalls that come with strict rules. But in practice, it gives the FCC the power to unilaterally and arbitrarily decide which network management innovations and practices are acceptable—and which ones aren’t.

It’s the tech-sector bureaucrat’s equivalent of declaring, Judge Dredd style, “I am the law!” Indeed, Genachowski has said before—and reiterated today—that the rules will finally give the FCC the authority to play “cop on the beat” for the Internet.

The comparison may not be quite as comforting as he seems to think. But it is telling: Genachowski may not be eager to tell the public exactly what the Internet’s new rules of the road are, but he’s mighty eager to have his agency enforce them.

Scarecrow at Firedoglake:

I’ll leave it to Tim Karr and others to describe the technical features and sell-outs that have allowed the Western World’s Worst internet/broadband structure to become slower, more expensive and more discriminatory than services in other countries. Senator Al Franken gave an excellent speech, worth watching, on the full range of policy issues.

It may help to have an analogous framework on how to think about what corporate capture of the internet and broadband service means, not just in terms of speed and coverage but in terms of content and pricing. It’s not just that our service is slower and we face monopoly pricing, it’s that a tiny handful of corporations are seizing control of what we’ll be allowed to watch and read.

Suppose that President Eisenhower had proposed we build an interstate highway system, but we’d allow only three or four large corporations to carve up and own all the main interconnections, determine the tolls and decide who got to drive on them during which hours. The corporations could also decide where the on/off ramps were, which communities they did or didn’t serve, where the routes went, depending on which provided better tax breaks.

And suppose these same companies owned a couple of auto companies, and they could decide whether cars and trucks made by their affiliate companies got better access, more lanes, higher speeds and lower tolls than cars/trucks sold by competitors.

Then suppose the Justice Department and the FTC did not think it their job to enforce the anti-trust laws of the United States, while the federal highway regulators did not believe they should have rules requiring open access, fair pricing, and non-discrimination.

Welcome to the forthcoming US policy on broadband/internet access.


Filed under New Media, Technology

There Are No Happy Endings On Craigslist

David Murphy at PC World:

Craigslist was expected to have earned an estimated $36 million from advertising associated with its Adult Services section in 2010—at least, that was the case when we first reported the projections from Advanced Interactive Media in late April of this year.

You can now expect that number to drop significantly, as Craigslist has removed its Adult Services section for U.S. visitors. The move surely comes as a relief to the various entities that have been petitioning for Craigslist to shut down the section—including human rights groups and more than 17 attorneys general from states across the nation.

There’s no indication that Craigslist has removed its Adult Services section for good, however. Although links to the section are now eliminated when accessing the main Craigslist page from an IP address based in the United States, one can still pull up the page from other countries. There’s been no comment from any Craigslist spokespeople whatsoever—officially or otherwise—related to the matter.

Chris Matyszczyk at Cnet:

The section was originally entitled Erotic Services. Its name was changed to reflect a new discipline, as, under pressure from attorneys general, Craigslist declared it would manually screen every ad in its newly named Adult Services section.

It is arguable whether the content of this new section truly changed. Some would say it was adult business as usual.


Recently, Craigslist founder Craig Newmark gave a troubling if spontaneous interview to CNN, in which he seemed unable to answer questions about whether the site was facilitating child prostitution. Then, instead of answering the specific charges, Craigslist CEO Jim Buckmaster took to the company’s blog to assail the CNN reporter’s methods.

Evan Hansen at Wired:

Craigslist has made numerous changes to its sex listings over the years to accommodate critics, changing its sex listings label from “erotic services” to “adult services,” imposing rules about the types of ads that can appear, and manually filtering ads using attorneys. But it has also fiercely defended its overall practices as ethical, and criticized censorship as a useless and hypocritical dodge.

When Craigslist was hit with a lawsuit by South Carolina Attorney General Henry McMaster in 2009, it struck back with a preemptive lawsuit of its own and won. In a blog post last month, Craigslist CEO Jim Buckmaster explained the company’s filtering policies in detail, pointing out its lawyers had rejected some 700,000 inappropriate ads to date, and suggested its methods could offer a model for the entire industry. He has also used the company’s blog to blast critics, most recently an “ambush” CNN video interview of Craigslist founder Craig Newmark.

Craigslist has a point: Given that other sites on the web (and in print) serve the same types of ads without the same level of scrutiny, it seems politicians are making the pioneering, 15-year-old service an opportunistic scapegoat. Internet services may accelerate and exacerbate some social problems like prostitution, but they rarely cause them. The root of these issues — and their solutions — lies in the realm of public policy, not web sites and ham-handed web-site filtering.

Frances Martel at Mediaite

Michael Arrington at TechCrunch:

Craigslist has fought back using little more than their blog and logic. And they’re right. Having prostitution up front and regulated, as Craigslist does, means less crime is associated with it. It’s not like prostitution, sometimes called the world’s oldest profession, was invented on the site.

The fact that eBay and others do exactly the same thing, but without human review and moderation, doesn’t seem to matter. Craigslist Sex is what scares the general population, and it’s what the press and the politicians will continue to use to get their hits and votes.

So the Craigslist Adult Section was removed. Is the world now a safer place?

Update: This only appears to affect U.S. sites, so if you’re looking for a happy ending in Saskatoon or the West Bank, have at it.

Mistermix:

After a few months of getting shit from AGs looking to make a name for themselves, Craigslist has replaced its adult services ads with a “Censored” bar.

Until they gave up, Craigslist was the only big site hosting adult ads that made a good-faith effort to keep exploitation out of their site. eBay owned a site that also posted erotic ads, made no effort to police it, and they simply blocked access from the US when the site was criticized.

Perhaps we’ll have an honest conversation about ending the prohibition of prostitution in a few more years, but this episode shows that we’re nowhere near ready to have it now.


Filed under Families, Technology

Every Few Years, We’ve Got To Declare Something “Dead.” It’s In The Constitution Or Something.

John Hudson at The Atlantic with the round-up

Chris Anderson at Wired:

You wake up and check your email on your bedside iPad — that’s one app. During breakfast you browse Facebook, Twitter, and The New York Times — three more apps. On the way to the office, you listen to a podcast on your smartphone. Another app. At work, you scroll through RSS feeds in a reader and have Skype and IM conversations. More apps. At the end of the day, you come home, make dinner while listening to Pandora, play some games on Xbox Live, and watch a movie on Netflix’s streaming service.

You’ve spent the day on the Internet — but not on the Web. And you are not alone.

This is not a trivial distinction. Over the past few years, one of the most important shifts in the digital world has been the move from the wide-open Web to semiclosed platforms that use the Internet for transport but not the browser for display. It’s driven primarily by the rise of the iPhone model of mobile computing, and it’s a world Google can’t crawl, one where HTML doesn’t rule. And it’s the world that consumers are increasingly choosing, not because they’re rejecting the idea of the Web but because these dedicated platforms often just work better or fit better into their lives (the screen comes to them, they don’t have to go to the screen). The fact that it’s easier for companies to make money on these platforms only cements the trend. Producers and consumers agree: The Web is not the culmination of the digital revolution.

A decade ago, the ascent of the Web browser as the center of the computing world appeared inevitable. It seemed just a matter of time before the Web replaced PC application software and reduced operating systems to a “poorly debugged set of device drivers,” as Netscape cofounder Marc Andreessen famously said. First Java, then Flash, then Ajax, then HTML5 — increasingly interactive online code — promised to put all apps in the cloud and replace the desktop with the webtop. Open, free, and out of control.

But there has always been an alternative path, one that saw the Web as a worthy tool but not the whole toolkit. In 1997, Wired published a now-infamous “Push!” cover story, which suggested that it was time to “kiss your browser goodbye.” The argument then was that “push” technologies such as PointCast and Microsoft’s Active Desktop would create a “radical future of media beyond the Web.”

“Sure, we’ll always have Web pages. We still have postcards and telegrams, don’t we? But the center of interactive media — increasingly, the center of gravity of all media — is moving to a post-HTML environment,” we promised nearly a decade and half ago. The examples of the time were a bit silly — a “3-D furry-muckers VR space” and “headlines sent to a pager” — but the point was altogether prescient: a glimpse of the machine-to-machine future that would be less about browsing and more about getting.

Michael Wolff at Wired:

An amusing development in the past year or so — if you regard post-Soviet finance as amusing — is that Russian investor Yuri Milner has, bit by bit, amassed one of the most valuable stakes on the Internet: He’s got 10 percent of Facebook. He’s done this by undercutting traditional American VCs — the Kleiners and the Sequoias who would, in days past, insist on a special status in return for their early investment. Milner not only offers better terms than VC firms, he sees the world differently. The traditional VC has a portfolio of Web sites, expecting a few of them to be successes — a good metaphor for the Web itself, broad not deep, dependent on the connections between sites rather than any one, autonomous property. In an entirely different strategic model, the Russian is concentrating his bet on a unique power bloc. Not only is Facebook more than just another Web site, Milner says, but with 500 million users it’s “the largest Web site there has ever been, so large that it is not a Web site at all.”

According to Compete, a Web analytics company, the top 10 Web sites accounted for 31 percent of US pageviews in 2001, 40 percent in 2006, and about 75 percent in 2010. “Big sucks the traffic out of small,” Milner says. “In theory you can have a few very successful individuals controlling hundreds of millions of people. You can become big fast, and that favors the domination of strong people.”

Milner sounds more like a traditional media mogul than a Web entrepreneur. But that’s exactly the point. If we’re moving away from the open Web, it’s at least in part because of the rising dominance of businesspeople more inclined to think in the all-or-nothing terms of traditional media than in the come-one-come-all collectivist utopianism of the Web. This is not just natural maturation but in many ways the result of a competing idea — one that rejects the Web’s ethic, technology, and business models. The control the Web took from the vertically integrated, top-down media world can, with a little rethinking of the nature and the use of the Internet, be taken back.

This development — a familiar historical march, both feudal and corporate, in which the less powerful are sapped of their reason for being by the better resourced, organized, and efficient — is perhaps the rudest shock possible to the leveled, porous, low-barrier-to-entry ethos of the Internet Age. After all, this is a battle that seemed fought and won — not just toppling newspapers and music labels but also AOL and Prodigy and anyone who built a business on the idea that a curated experience would beat out the flexibility and freedom of the Web.

Matt Buchanan at Gizmodo:

Chris Anderson’s new Big Idea—that the open web is giving way to a mere transport system for closed or semiclosed platforms like Facebook or iPhone apps from the App Store—is not very new. In its current iPhone-y, app-y incarnation, it’s at least a couple of years old. Wired even participates in the very phenomenon it bemoans, with its very fancy iPad app. (Because it has to: “The assumption had been that once the market matured, big companies would be able to reverse the hollowing-out trend of analog dollars turning into digital pennies. Sadly that hasn’t been the case for most on the Web, and by the looks of it there’s no light at the end of that tunnel.”) And the general idea itself goes back even further—Wired proclaimed the browser was dead in 1997, as he points out.

It’s true that the open, free-for-all web is besieged, but in a lot of ways Anderson doesn’t mention, like the potential neutering of net neutrality principles or the ongoing bandwidth crimp that could hamper innovative-but-data-intensive services—and, in turn, push users toward the kind of boxed services (cable VOD or ISP preferred content) that has Anderson so nerve-wracked. Like Comcast giving preferred access to NBC’s content by not counting it toward your monthly data allowance (since Comcast owns half of NBC now), or Verizon speeding up YouTube over Vimeo. You can look at it as a hardware problem vs. a software problem—and if the hardware is screwed, so is the software.

Erick Schonfeld at TechCrunch:

These shifts happen in waves. First the browser took over everything, then developers wanted more options and moved to apps (desktop and mobile), but the browser will eventually absorb those features, and so the leapfrogging continues. The ubiquity of the browser overcomes most of its technical deficiencies. Even in mobile, people will become overwhelmed by apps and the browser will make a comeback.

Rob Beschizza at Boing Boing:

Wired uses this graph to illustrate Chris Anderson and Michael Wolff’s claim that the world wide web is “dead.”

[Chart from Wired: proportion of total U.S. internet traffic by type, 1990–2010]

Their feature, The Web is Dead. Long Live the Internet, is live at Wired’s own website.

Without commenting on the article’s argument, I nonetheless found this graph immediately suspect, because it doesn’t account for the increase in internet traffic over the same period. The use of proportion of the total as the vertical axis instead of the actual total is an interesting editorial choice.

You can probably guess that total use increases so rapidly that the web is not declining at all. Perhaps you have something like this in mind:

[Chart: an imagined version of the graph showing web traffic declining against a growing total]

In fact, between 1995 and 2006, the total amount of web traffic went from about 10 terabytes a month to 1,000,000 terabytes (or 1 exabyte). According to Cisco, the same source Wired used for its projections, total internet traffic rose then from about 1 exabyte to 7 exabytes between 2005 and 2010.

So with actual total traffic as the vertical axis, the graph would look more like this.

[Chart: the same data plotted with actual total traffic as the vertical axis, showing web traffic growing steeply]

Clearly on its last legs!

Matthew Ingram at Gigaom:

As with some of his other popular writings, Anderson seems to be coming to this realization rather late in the game, and has resorted to a sensationalized headline to grab some attention. We at GigaOM (and plenty of others who cover the web and technology space) have been writing and talking about the rise of the app economy — and particularly the rise of mobile apps thanks to the iPhone, as well as the iPad and Google’s Android platform — for more than two years now. As Om has pointed out on a number of occasions, the success of Apple’s iPhone and application store has accelerated the evolution of the web from a free-for-all to a selection of specific apps for specific needs.

Om’s favorite comparison is to the real world of home appliances: we don’t just have a single all-purpose appliance — instead, we have toasters and coffee-makers and can-openers and other devices that perform specific tasks. So, too, we now have applications for maps, applications for photos, applications for reading books, and apps for video and location-based “check ins” and dozens of other things. That doesn’t mean the web is dead; it means that the web, and the way we use it, is evolving. Instead of wandering around on the web looking for interesting websites by using services such as Yahoo or AOL, we’re using task-specific devices in a sense.

Anderson is right in a technical sense when he says that the web is “just one of many applications that exist on the Internet, which uses the IP and TCP protocols to move packets around.” But he also gets it wrong when he conflates the demise of the web browser with the demise of the web itself. Plenty of applications are using web technologies such as HTTP and REST, just as web browsers do. In a sense, they’re like mini-browsers for discrete applications, and although it’s almost a footnote in the Wired piece, HTML5 has the potential to allow developers to create (as some already have) websites that look and feel and function exactly like apps do. (For more on that, read our recent GigaOM Pro piece on the potential of HTML5.) Where does that fit in the “web is dead” paradigm?

It’s also worth noting (as others have as well) that the chart Wired uses with its story is misleading, or at least the way it’s being portrayed is misleading. (It also has the wrong dates, according to TechCrunch.) It shows the amount of total U.S. Internet traffic that different types of content have accounted for over the last decade (as calculated by Cisco). At the far right-hand side of the graph, video is seen as making up a large proportion of that traffic, while something called “the web” makes up a much smaller proportion than it did in 1995. But this does little to prove Anderson’s thesis, since the bulk of video is still viewed using websites such as YouTube and Hulu — and the fact that we have a lot more video traffic than we used to isn’t exactly a revelation.

Choire Sicha at The Awl:

Between 2000 and 2010, Americans with Internet access went from 124 million to 230 million.

(The world at large, by the way, went from 393 million Internet users to 1.5 billion, but let’s keep the focus on America, right Wired? Because we’re so much more interesting and also we buy iPads.)

Rob Beschizza made a related point extremely well. He notes: “According to Cisco, the same source Wired used for its projections, total internet traffic rose then from about 1 exabyte to 7 exabytes between 2005 and 2010.”

So, just in terms of basic Internet-using population in any event, as the “web use” “declined” by half over the last ten years as a percentage of use accorded to Wired, the real world activity presumably, at the same time, “stayed constant due to the doubling of the Internet-user” in the U.S.

Except use of the web blew up far more than that.

There’s a number of other questions I have about these numbers, which are almost the only numbers in the piece, apart from a claim by Morgan Stanley that in five years, more people will use the Internet over mobile devices than PCs.

For instance: doesn’t this chart measure data usage as traffic? Would that perhaps be why the “video” section is so swollen?

Alexis Madrigal at The Atlantic:

The problem is Anderson’s assumption about the way technology works. Serious technology scholars long ago discarded the idea that tech was just a series of increasingly awesomer things that successively displace each other. Australian historian Carroll Pursell, in reviewing Imperial College London professor David Edgerton’s The Shock of the Old, summarized the academic thinking nicely:

An obsession with ‘innovation’ leads to a tidy timeline of progress, focusing on iconic machines, but an investigation of ‘technology in use’ reveals that some ‘things’ appear, disappear, and reappear…

Edgerton has the same flair for the flashy stat that Anderson does. For example, to illustrate the point that newer and older technologies happily coexist, he notes that the Germans used more horses in World War II than the British did in World War I. More prosaically, some of the electricity for your latest gadget was probably made in a power plant that’s decades old. Many ways to bind pieces of paper — staplers, binders, paper clips, etc — remain in common usage (“The Paperclip Is Dead!”). World War I pilots used to keep homing pigeons tucked inside their cockpits as communication tools (see above). People piloting drones and helicopters fight wars against people who use machetes and forty-year old Soviet machine guns; all these tools can kill effectively, and they all exist right now together.

But that’s not how Anderson presents technology in this article. Instead, technologies rise up and destroy each other. And there’s nothing you or I can do to change the course of these wars. This is the nature of technology and capitalism, and there is not much room for individual decisionmaking or social influence in the algorithm.

Ryan Tate at Gawker:

Where did this argument first appear? Funny you should ask!

  • Irony 1: Wired released its cover story package first to the Web, on Wired.com. You won’t find it in Wired‘s iPad edition, and it’s not out in print yet. The death of the web might be the “inevitable course of capitalism,” but it apparently pays better to deliver that news via a dying medium.
  • Irony 2: Revenue is up at Wired‘s profitable website this year, despite a fairly severe reduction in staff last year. Yet Anderson, who has no control over Wired.com, writes that most Web publishers haven’t been able to “reverse the hollowing-out trend of analog dollars turning into digital pennies… and by the looks of it there’s no light at the end of that tunnel.” That tunnel being the one Wired, itself, is not in, apparently.
  • Irony 3: At the same time, circulation — and thus revenue, almost surely — are down for Wired‘s iPad edition, which was approaching (and possibly even surpassing) 100,000 copies for the debut issue but has since fallen off — to less than a fourth of what it was, one source claims. However large or small the decline, it could certainly be corrected; dropping off from a big bang launch is common enough in print and online media alike. But Wired’s iPad tumble does raise the possibility that Anderson is speaking as much from his hopes as from his analysis when he writes, “We are choosing a new form of Quality of Service: custom applications that just work.” The iPad team belongs to Anderson, after all (unlike, again, the web team).
  • Irony 4: Isn’t this the guy who wrote a book called Free and noted, “You know this freaky land of free as the Web. A decade and a half into the great online experiment, the last debates over free versus pay online are ending”? Eh, maybe not so much; Anderson today writes, “Much as we love freedom and choice, we also love things that just work, reliably and seamlessly. And if we have to pay for what we love, well, that increasingly seems OK.”

To his credit, Anderson also runs a feature in which publishers Tim O’Reilly and John Battelle get the opportunity to basically tell the editor he’s nuts. (Battelle: “Splashing ‘The Death of the Web’ on the cover might be, well, overstating the case just a wee bit.”) In the online package, Wired.com editor Evan Hansen does likewise (“the web is far too powerful to be replaced by an alternative that gives away so much of what developers and readers have come to love and expect”).

Like any provocative editor, in other words, Anderson has people talking. (See also this take from Rob Beschizza at BoingBoing and from blogging pioneer Dave Winer; TechMeme has more reaction.) Now we get to sit back and watch as the author/consultant/editor tries to explain why nearly the entire conversation about the Death of the Web is happening on the Seemingly Quite Alive Web. That should be, at the very least, entertaining.


Filed under New Media, Technology