Tag Archives: Scientific American

And Even In This, We Find The Simpsons Reference

David E. Sanger and Matthew L. Wald at NYT:

As the scale of Japan’s nuclear crisis begins to come to light, experts in Japan and the United States say the country is now facing a cascade of accumulating problems that suggest that radioactive releases of steam from the crippled plants could go on for weeks or even months.

The emergency flooding of stricken reactors with seawater and the resulting steam releases are a desperate step intended to avoid a much bigger problem: a full meltdown of the nuclear cores in reactors at the Fukushima Daiichi Nuclear Power Station. On Monday, an explosion blew the roof off the second reactor, not damaging the core, officials said, but presumably leaking more radiation.

Later Monday, the government said cooling systems at a third reactor had failed. The Kyodo news agency reported that the damaged fuel rods at the third reactor had been temporarily exposed, increasing the risk of overheating. Sea water was being channeled into the reactor to cover the rods, Kyodo reported.

So far, Japanese officials have said the melting of the nuclear cores in the two plants is assumed to be “partial,” and the amount of radioactivity measured outside the plants, though twice the level Japan considers safe, has been relatively modest.

But Pentagon officials reported Sunday that helicopters flying 60 miles from the plant picked up small amounts of radioactive particulates — still being analyzed, but presumed to include cesium-137 and iodine-131 — suggesting widening environmental contamination.


MUCH ADO ABOUT NOT MUCH: 7th Fleet repositions ships after contamination detected. “For perspective, the maximum potential radiation dose received by any ship’s force personnel aboard the ship when it passed through the area was less than the radiation exposure received from about one month of exposure to natural background radiation from sources such as rocks, soil, and the sun.”

Ed Morrissey:

Still, if that’s the dose received 100 miles away after wind dispersal and dissipation, it’s small wonder that the Japanese are evacuating the area near the plant.  No nation has the history of radiation poisoning that Japan does, and one has to believe that this danger will loom the largest among the people even after the tsunami damage that killed thousands of people.  The government will face a great deal of scrutiny for years to come for its actions in these few days, and they appear to understand that.

David Kopel:

That’s the title of a post on the Morgsatlarge, reprinting a letter from Dr. Josef Oehmen of MIT. According to his web page, his main research interest is “risk management in the value chain, with a special focus on lean product development.” Although he’s a business professor and not a nuclear scientist, his father worked in the German nuclear power industry, and the post provides a detailed and persuasive (at least to me) explanation of how the endangered Japanese nuclear power plants work, and why their multiple backup systems ensure that there will be neither an explosion nor a catastrophic release of radiation. The American cable TV channels, by the way, seem to be taking a much more sober approach than they did yesterday, when Wolf Blitzer was irresponsibly raising fear of “another Chernobyl.”

John Sullivan at ProPublica:

As engineers in Japan struggle to bring quake-damaged reactors under control [1], attention is turning to U.S. nuclear plants and their ability to withstand natural disasters.

Rep. Ed Markey, a Massachusetts Democrat who has spent years pushing the Nuclear Regulatory Commission toward stricter enforcement of its safety rules, has called for a reassessment. Several U.S. reactors lie on or near fault lines, and Markey wants to beef up standards for new and existing plants.

“This disaster serves to highlight both the fragility of nuclear power plants and the potential consequences associated with a radiological release caused by earthquake related damage,” Markey wrote NRC Chairman Gregory Jaczko in a March 11 letter [2].

Specifically, Markey raised questions about a reactor design the NRC is reviewing for new plants that has been criticized for seismic vulnerability. The NRC has yet to make a call on the AP1000 reactor [3], which is manufactured by Westinghouse. But according to Markey, a senior NRC engineer has said the reactor’s concrete shield building could shatter “like a glass cup” under heavy stress.

The New York Times reported last week [4] that the NRC has reviewed the concerns raised by the engineer, John Ma, and concluded that the design is sufficient without the upgrades Ma recommended. Westinghouse maintains that the reactor is safe [5].

Boiling water reactors [6], like the ones hit by the Japanese earthquake, are built like nested matryoshka [7] dolls.

The inner doll, which looks like a gigantic cocktail shaker and holds the radioactive uranium, is the heavy steel reactor vessel. It sits inside a concrete and steel dome called the containment. The reactor vessel is the primary defense against disaster — as long as the radiation stays inside, everything is fine.

The worry is that a disaster could either damage the vessel itself or, more likely, damage the equipment used to control the uranium. If operators cannot circulate water through the vessel to cool the uranium, it could overheat and burn into radioactive slag — a meltdown.

Steve Mirsky at Scientific American

Maggie Koerth-Baker at Boing Boing:

This morning, I got an email from a BoingBoing reader, who is one of the many people worried about the damaged nuclear reactors at Fukushima, Japan. In one sentence, he managed to get right to the heart of a big problem lurking behind the headlines today: “The extent of my knowledge on nuclear power plants is pretty much limited to what I’ve seen on The Simpsons.”

For the vast majority of people, nuclear power is a black box technology. Radioactive stuff goes in. Electricity (and nuclear waste) comes out. Somewhere in there, we’re aware that explosions and meltdowns can happen. Ninety-nine percent of the time, that set of information is enough to get by on. But, then, an emergency like this happens and, suddenly, keeping up-to-date on the news feels like you’ve walked in on the middle of a movie. Nobody pauses to catch you up on all the stuff you missed.

As I write this, it’s still not clear how bad, or how big, the problems at the Fukushima Daiichi power plant will be. I don’t know enough to speculate on that. I’m not sure anyone does. But I can give you a clearer picture of what’s inside the black box. That way, whatever happens at Fukushima, you’ll understand why it’s happening, and what it means.

Leave a comment

Filed under Energy, Foreign Affairs

Phytoplankton Numbers Are Phalling

Lauren Morello at Scientific American:

The microscopic plants that form the foundation of the ocean’s food web are declining, reports a study published July 29 in Nature.

The tiny organisms, known as phytoplankton, also gobble up carbon dioxide to produce half the world’s oxygen output—equaling that of trees and plants on land.

But their numbers have dwindled since the dawn of the 20th century, with unknown consequences for ocean ecosystems and the planet’s carbon cycle.

Researchers at Canada’s Dalhousie University say the global population of phytoplankton has fallen about 40 percent since 1950. That translates to an annual drop of about 1 percent of the average plankton population between 1899 and 2008.

The scientists believe that rising sea surface temperatures are to blame.
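Morello’s two figures are easy to sanity-check against each other. A minimal back-of-the-envelope sketch (my own illustration, not a calculation from the study):

```python
# Does a steady ~1% annual decline compound to the reported ~40% drop since 1950?
def remaining_fraction(annual_decline, years):
    """Fraction of a population left after `years` of compound annual decline."""
    return (1 - annual_decline) ** years

years = 2008 - 1950          # the span behind the "40 percent since 1950" figure
left = remaining_fraction(0.01, years)
print(f"After {years} years at 1%/yr, {left:.0%} remains (a {1 - left:.0%} decline)")
```

A 1 percent compound decline over 58 years leaves about 56 percent of the population, a drop of roughly 44 percent, so the “about 40 percent since 1950” and “about 1 percent per year” figures are mutually consistent.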

Ed Yong at Discover:

Graduate student Daniel Boyce focused on some of the ocean’s smallest but most important denizens – the phytoplankton. These tiny creatures are the basis of marine food webs, the foundations upon which these watery ecosystems are built. They produce around half of the Earth’s organic matter and much of its oxygen. And they are disappearing. With a set of data that stretches back 100 years, Boyce found that phytoplankton numbers have fallen by around 1% per year over the last century as the oceans have become warmer, and if anything, their decline is getting faster.  Our blue planet is becoming less green with every year.

Meanwhile, post-doc Derek Tittensor has taken a broader view, looking at the worldwide distributions of over 11,500 seagoing species in 13 groups, from mangroves and seagrasses, to sharks, squids, and corals. His super-census reveals three general trends – coastal species are concentrated around the western Pacific, while ocean-going ones are mostly found at temperate latitudes, in two wide bands on either side of the equator. And the only thing that affected the distribution of all of these groups was temperature.

Together, the results from the two studies hammer home a familiar message – warmer oceans will be very different places. Rising sea temperatures could “rearrange the global distribution of life in the ocean” and destabilise their food webs at their very root. None of this knowledge was easily won – it’s the result of decades of monitoring and data collection, resulting in millions of measurements.

Boyce’s study, for example, really began in 1865, when an Italian priest and astronomer called Father Pietro Angelo Secchi invented a device for measuring water clarity. His “Secchi disk” is fantastically simple – it’s a black-and-white circle that is lowered until the observer can’t see it any more. This depth reveals how transparent the water is, which is directly related to how much phytoplankton it contains. This simple method has been used since 1899. Boyce combined it with measurements of the pigment chlorophyll taken from research vessels, and satellite data from the last decade.

Boyce’s data revealed a very disturbing trend. Phytoplankton numbers have fallen across the world over the last century, particularly towards the poles and in the open oceans. The decline has accelerated in some places, and total numbers have fallen by around 40% since the 1950s. Only in a few places have phytoplankton populations risen. These include parts of the Indian Ocean and some coastal areas where industrial run-off fertilises the water, producing choking blooms of plankton.

On a yearly basis, the rise and fall of the phytoplankton depends on big climate events like the El Niño Southern Oscillation. But in the long term, nothing predicted the numbers of phytoplankton better than the surface temperature of the seas. Phytoplankton need sunlight to grow, so they’re constrained to the upper layers of the ocean and depend on nutrients welling up from below. But warmer waters are less likely to mix in this way, which starves the phytoplankton and limits their growth.
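The Secchi-disk principle Yong describes can be sketched in a few lines. The constant 1.7 below is the classic Poole–Atkins approximation relating Secchi depth to the diffuse light-attenuation coefficient; Boyce’s actual calibration against chlorophyll measurements is more involved, so treat this as an illustration only:

```python
# Clearer water -> the disk stays visible deeper -> lower attenuation -> less phytoplankton.
def attenuation_from_secchi(secchi_depth_m):
    """Rough diffuse light-attenuation coefficient (per metre) from Secchi depth,
    using the Poole-Atkins approximation k ~ 1.7 / Z_SD."""
    return 1.7 / secchi_depth_m

for depth_m in (5.0, 15.0, 40.0):
    print(f"Secchi depth {depth_m:>4} m  ->  k ~ {attenuation_from_secchi(depth_m):.3f} per metre")
```

The attenuation coefficient, in turn, tracks how much light-absorbing material (largely phytoplankton pigment) the water column holds, which is what lets a black-and-white disk on a rope stand in for a plankton census.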

Michael O’Hare:

What makes human life worth living? Content, obviously: news, art, music, conversation – social intercourse in all media.  What makes it possible?  Food and drink, broadly defined: fresh water and all the plant and animal products we eat and use.

This morning I came upon a paper in Nature whose abstract is as follows (emphasis added):

In the oceans, ubiquitous microscopic phototrophs (phytoplankton) account for approximately half the production of organic matter on Earth. Analyses of satellite-derived phytoplankton concentration (available since 1979) have suggested decadal-scale fluctuations linked to climate forcing, but the length of this record is insufficient to resolve longer-term trends. Here we combine available ocean transparency measurements and in situ chlorophyll observations to estimate the time dependence of phytoplankton biomass at local, regional and global scales since 1899. We observe declines in eight out of ten ocean regions, and estimate a global rate of decline of ~1% of the global median per year. Our analyses further reveal interannual to decadal phytoplankton fluctuations superimposed on long-term trends. These fluctuations are strongly correlated with basin-scale climate indices, whereas long-term declining trends are related to increasing sea surface temperatures. We conclude that global phytoplankton concentration has declined over the past century; this decline will need to be considered in future studies of marine ecosystems, geochemical cycling, ocean circulation and fisheries. (paywall)

This finding – and I’m trying hard not to hyperventilate here – is not too far down the scary scale from discovering a small inbound asteroid. This is the whole ocean we’re talking about: the earth’s production of organic material is going down half a percent per year.  Oddly, I did not come upon it in the New York Times, which seems not to have run the story at all.  The Washington Post, I found only after I searched, did run the AP story somewhere way below whatever passes for the fold in a web edition, but I didn’t see it there either.  I found it, through a Brazilian accumulator, here.

How can this be? Well, the world’s production of traditional news (not newsworthy events, but writing about them) is down along with the plankton (and the menu items at your favorite seafood restaurant… remember when you could have haddock for dinner?).  Every grownup, quality-conscious outlet is putting out less stuff every day, in fewer column-inches on smaller pages (or in more vacuous hours on TV padded out with ephemera that a small crew in a truck can get some meaningless video of).  The new, lean, pathetic Times just didn’t have room for this one (or salary to pay an editor to stay on top of stuff), a story I can make a case was the most important news of the week. Why O Globo, like the São Paulo paper, put it on page one is not clear, but muito obrigado (many thanks), Sra. da Silva!  I guess I can stay informed if I go to six web pages in four languages every day, but who has time, and why is that better than the way things were before the content markets fell apart?  And how long will even that strategy work?

We can’t live without the ocean, every time we look at climate change it’s worse than we thought, and we can’t get back from the precipice, or even know how close it is, without news.

We are so f____ed.

Kevin Drum:

So, anyway, as temperatures rise the plankton die. As plankton die, they suck up less carbon dioxide, thus warming the earth further. Which causes more plankton to die. Rinse and repeat. Oh, and along the way, all the fish die too.

Or maybe not. But this sure seems like a risk that we should all be taking a whole lot more seriously than we are. Unfortunately, conservatives are busy pretending that misbehavior at East Anglia means that global warming is a hoax, the Chinese are too busy catching up with the Americans to take any of this seriously, and you and I are convinced that we can’t possibly afford a C-note increase in our electric bills as the price of taking action. As a result, maybe the oceans will die. Sorry about that, kids, but fixing it would have cost 2% of GDP and we decided you’d rather have that than have an ocean. You can thank us later.

Megan McArdle:

The die-off of most of the phytoplankton would be a huge catastrophe.  However, here are some reasons that we shouldn’t succumb to outright panic quite yet:

1.  It’s one paper.  I am not casting aspersions on the authors or their methodology, but the whole idea of science is that even the smartest people can be wrong.  As with other attempts to reconstruct past climate, they’re using a series of proxies for past events that have much weaker accuracy than the direct measurements we’re now using.  That doesn’t mean they’re wrong, but it does leave them more open to interpretation.

2.  All the carbon we’re burning used to be in the atmosphere.  Yet the planet supported life.  Indeed, the oil we’re burning comes from the compressed, decayed bodies of . . . phytoplankton.  This suggests that some number of phytoplankton should be able to survive high concentrations of the stuff.

3.  There are positive feedback effects, but also negative ones.  One of the things that drives me batty about environmentalists and journalists writing about climate change is the insistence that every single side effect will be negative. This is not really very likely, unless you think that every place on earth just happens to be at the very awesomest climate equilibrium possible as of 9:17 am this morning, or that global warming is some sort of malevolent god capable only of destruction.

Mind you, this is not an argument for letting it happen; I’m not a fan of tampering with large, complex systems that I don’t really understand, which is why I tend not to support much direct government intervention in the economy–and why I do, nonetheless, support a hefty carbon tax.

But there’s a certain tendency to ignore mitigating offsets, such as the fact that higher carbon concentrations make terrestrial plants grow more lushly, sucking up some of that extra carbon dioxide in the atmosphere.  At least, as long as we don’t turn them into biofuels, that is.  There’s also a tendency to ignore mitigation rather than reduction, on the grounds that emissions reduction is “easier”.  Well, I suppose it is easier if you assume away the political problems.  But no matter how hard I assume, I keep waking up in a world where we’ve made no meaningful progress on emissions reductions.  At this point, I’ve got more faith in America’s engineering talent than in her ability to conquer fierce political resistance to reductions at home and abroad.

Brad Plumer at TNR on McArdle:

She’s partly right. Not every side effect will be negative. Just this week, The New York Times ran a piece about how marmots will thrive in a hotter world. So, three cheers for marmots. But the bad news tends to far outweigh the good. As the IPCC concluded in 2007, “Costs and benefits of climate change for industry, settlement and society will vary widely by location and scale. In the aggregate, however, net effects will tend to be more negative the larger the change in climate.” No one’s ignoring the upsides. They’re just focused on the larger downsides. For instance, McArdle suggests that more CO2 in the air will boost plant growth, which in turn will help suck more carbon out of the air and ameliorate things somewhat. It might surprise her to learn that scientists are perfectly well aware of that fact. But recent modeling suggests that this effect will likely be offset by other plant-related factors—like changes in evaporation—and the net result will likely be more warming, not less.

One main point to note here is that, on the whole, global warming will be neutral for this round little rock adrift in the ether that we like to call Earth. You could even say this is an exciting time for Mother Nature. Big changes are afoot. Some species will thrive and many others will die. Evolution will proceed apace. There will still be some forms of life around even if the planet heats up by 5°C or 10°C. As McArdle rightly notes, there have been periods in the past, millions of years ago, when carbon concentrations in the atmosphere were even higher than today, and, to quote Jurassic Park, life found a way.

The problem here is for one very particular life form: people. As I wrote in this TNR piece on planetary boundaries, we big-brained hominids have enjoyed a relatively stable climate for the past 10,000 years—a geological period dubbed the Holocene. Sea levels have been kept in check. Temperatures have fluctuated around a narrow band. And that relative predictability has enabled us to stay rooted in one location, to set up farms and cities, to plan for the future. We’ve adapted very well to the planet we have, and we’ve grown quite used to it. Most of our infrastructure has been built under the impression that the planet will basically look the same tomorrow as it did yesterday. That means that wrenching shifts in our ecosystem run the risk of being extremely painful—in the same way a big disruption to our financial system was extremely painful.

The second problem is that we just don’t know what’s in store. By belching up millions of tons of greenhouse gases into the atmosphere, we’re running a massive science experiment on the planet, one that can’t really be reversed. Maybe this phytoplankton stuff is just a blip. Or maybe it’s part of an ominous trend that’s going to rearrange the face of the oceans as we know it—oceans we’ve come to rely on for our survival. That doesn’t strike me as a gamble worth taking.

Leave a comment

Filed under Environment, Science

So Not Only Are There Definitely Aliens, But They Are Stealing Our TVs

Chris McKay:

Recent results from the Cassini mission suggest that hydrogen and acetylene are depleted at the surface of Titan. Both results are still preliminary and the hydrogen loss in particular is the result of a computer calculation, and not a direct measurement. However the findings are interesting for astrobiology. Heather Smith and I, in a paper published 5 years ago (McKay and Smith, 2005), suggested that methane-based (rather than water-based) life on Titan – i.e., organisms called methanogens – could consume hydrogen, acetylene, and ethane. The key conclusion of that paper (last line of the abstract) was “The results of the recent Huygens probe could indicate the presence of such life by anomalous depletions of acetylene and ethane as well as hydrogen at the surface.”

Now there seems to be evidence for all three of these on Titan. Clark et al. (2010, in press in JGR) are reporting depletions of acetylene at the surface. And it has been long appreciated that there is not as much ethane as expected on the surface of Titan. And now Strobel (2010, in press in Icarus) predicts a strong flux of hydrogen into the surface.

This is still a long way from “evidence of life”. However, it is extremely interesting.

Andrew Moseman at Discover:

If there were life on the Saturnian moon of Titan, the thinking goes, it would have to inhabit pools of methane or ethane at a cool -300 degrees Fahrenheit, and without the aid of water. While scientists don’t know just what that life would look like, they can predict what effects such tiny microbes would have on Titan’s atmosphere. That’s why researchers from the Cassini mission are excited now: They’ve found signatures that match those expectations. It’s far from proof of life on Titan, but it leaves the door wide open to the possibility.

In 2005, NASA’s Chris McKay put forth a possible scenario for life there: Critters could breathe the hydrogen gas that’s abundant on Titan, and consume a hydrocarbon called acetylene for energy. The first of two studies out recently, published in the journal Icarus, found that something—maybe life, but maybe something else—is using up the hydrogen that descends from Titan’s atmosphere to its surface:

“It’s as if you have a hose and you’re squirting hydrogen onto the ground, but it’s disappearing,” says Darrell Strobel, a Cassini interdisciplinary scientist based at Johns Hopkins University in Baltimore, Md., who authored a paper published in the journal Icarus [Popular Science].

Erring on the side of caution, the scientists suggest that life is but one explanation for this chemical oddity. Perhaps some unknown mineral on Titan acts as a catalyst to speed up the reaction of hydrogen and carbon to form methane, and that’s what accounts for the vanishing hydrogen. (Normally, the two wouldn’t combine fast enough under the cold conditions on Titan to account for the anomaly.) That would be pretty cool, though not as much of a jolt as Titanic life.

Nancy Atkinson at Universe Today:

Two papers released last week detailing oddities found on Titan have blown the top off the ‘jumping to conclusions’ meter, and following media reports of NASA finding alien life on Saturn’s hazy moon, scientists are now trying to put a little reality back into the news. “Everyone: Calm down!” said Cassini imaging team leader Carolyn Porco on Twitter over the weekend. “It is by NO means certain that microbes are eating hydrogen on Titan. Non-bio explanations are still possible.” Porco also put out a statement on Monday saying such reports were “the unfortunate result of a knee-jerk rush to sensationalize an exciting but rather complex, nuanced and emotionally-charged issue.”

Astrobiologist Chris McKay told Universe Today that life on Titan is “certainly the most exciting, but it’s not the simplest explanation for all the data we’re seeing.”

McKay suggests everyone needs to take the Occam’s Razor approach, where the simplest theory that fits the facts of a problem is the one that should be selected.

The two papers suggest that hydrogen and acetylene are being depleted at the surface of Titan. The first paper, by Darrell Strobel, shows hydrogen molecules flowing down through Titan’s atmosphere and disappearing at the surface. There is a disparity: the hydrogen densities imply a downward flow of about 10,000 trillion trillion hydrogen molecules per second, yet none of that hydrogen shows up at the surface.

“It’s as if you have a hose and you’re squirting hydrogen onto the ground, but it’s disappearing,” Strobel said. “I didn’t expect this result, because molecular hydrogen is extremely chemically inert in the atmosphere, very light and buoyant. It should ‘float’ to the top of the atmosphere and escape.”

The other paper (link not yet available) led by Roger Clark, a Cassini team scientist, maps hydrocarbons on Titan’s surface and finds a surprising lack of acetylene. Models of Titan’s upper atmosphere suggest a high level of acetylene in Titan’s lakes, as high as 1 percent by volume. But this study, using the Visual and Infrared Mapping Spectrometer (VIMS) aboard Cassini, found very little acetylene on Titan’s surface.

Of course, one explanation for both discoveries is that something on Titan is consuming the hydrogen and acetylene.

Even though both findings are important, McKay feels the crux of any possible life on Titan hinges on verifying Strobel’s discovery about the lack of hydrogen.

“To me, the whole thing hovers on this determination of whether this flux of hydrogen is real,” McKay said via phone. “The acetylene has been missing and the ethane has been missing, but that certainly doesn’t generate a lot of excitement, because how much is supposed to be there depends on how much is being made. There are a lot of uncertainties.”
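For a sense of scale, the “10,000 trillion trillion hydrogen molecules per second” figure can be put into everyday units. A rough conversion using rounded physical constants (my illustration, not a number from either paper):

```python
# 10,000 trillion trillion = 1e4 * 1e12 * 1e12 = 1e28 molecules of H2 per second
H2_FLUX_PER_S = 1e4 * 1e12 * 1e12
H2_MOLECULE_KG = 2.016 * 1.6605e-27   # molar mass 2.016 u times one atomic mass unit in kg

kg_per_s = H2_FLUX_PER_S * H2_MOLECULE_KG
print(f"~{kg_per_s:.0f} kg of molecular hydrogen per second")  # roughly 33 kg/s
```

Hydrogen is so light that this enormous-sounding molecular count works out to only a few tens of kilograms per second vanishing somewhere at the surface of the whole moon.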

Phil Plait at Discover:

Titan is a monster, the second biggest moon in the solar system at 5150 km (3200 miles) in diameter. If it weren’t orbiting Saturn, it would probably be considered a planet in its own right: it’s bigger than Mercury and Pluto. It has a thick atmosphere, made up of nitrogen, methane, and other molecules. It’s very cold, but it’s known that lakes, probably of liquid methane, exist on the surface.

Five years ago, McKay and other scientists pointed out that if methane-based life existed on Titan, it might be detectable through a surface depletion of ethane, hydrogen, and acetylene. New observations show that this is the case; there are lower amounts of these substances than the chemistry of Titan would indicate.

As McKay points out, “This is still a long way from ‘evidence of life’. However, it is extremely interesting.”

Those are the basics. Go read McKay’s article for details. The point he makes is that the results are preliminary, may yet turn out to be wrong, if they’re right may have non-biological explanations, and we should not conclude biology is involved until we get a lot more evidence.

As far as the media goes, headlines get eyeballs and sell advertisements, of course. But in cases where the news is like this, news outlets should be particularly careful how they phrase things. They know how the public will react to certain phrases, and the phrase “evidence of life” is substantially less accurate and more likely to incite chatter than “evidence for possible life” — and the Telegraph’s technically accurate but seriously misleading “evidence ‘that alien life exists on Saturn’s moon’” is just asking for trouble.

The point is, when it comes to media outlets and big news like this, the phrase going through your head should be a variant of an old one, updated for this modern age:

“Don’t trust, and verify”.

John Matson at Scientific American

Maggie Koerth-Baker at Boing Boing:

This is the kind of research that easily sets hearts aflutter and space nerds to making high-pitched happy squealing sounds, so let’s knock out one basic thing right off the bat: Nobody has discovered alien life. We have not found E.T. This is only a test of the emergency high-pitched happy squealing system.

That said, it probably wouldn’t be remiss to clap your hands delightedly, like a little girl. As I said, nobody has found alien life, but they did find the sort of evidence that might suggest alien life is down there on the surface of Titan, waiting to be found. It’s a little like walking up to a house and finding the front door open, and, inside, a T.V. stand that’s missing a T.V. It’s reasonable to assume the house might have been burglarized, but there are also other plausible explanations and you don’t have enough evidence to know one way or the other.

Rod Dreher

Leave a comment

Filed under Science

Go Patent Yourself!

The Economist:

Since the decoding of the human genome, biotechnology companies have claimed that by matching a person’s genetic make-up with specialised treatments, they can tailor drugs to maximise benefits and minimise side effects. Alas, researchers have discovered that the link between a given person’s genetic make-up and specific diseases is much more complex than they had hoped. The tantalising vision remains out of reach.

A rare exception has been the success that Myriad Genetics, an American firm, has had with two genes called BRCA1 and BRCA2. Certain versions of these genes, it has been shown, are associated with a high risk of breast and ovarian cancer. The University of Utah has patented the genes and licenses them to Myriad. The firm uses that exclusivity to create expensive genetic tests for cancer risk which only it offers for sale (the patents and licensing conditions are different outside the United States).

The BRCA patents have long frustrated medical researchers, cancer lobbyists and legal activists. They claim that the firm’s grip on the two genes unlawfully stifles both innovation and basic science. Given the history of patent rulings in America, that has been a fringe argument—until now.

On March 29th the United States District Court for the Southern District of New York made a ruling that, taken at face value, turns America’s approach to the patent protection of genes on its head. A coalition led by the American Civil Liberties Union (ACLU) had challenged the very basis of Myriad’s patents. The nub of the case was this question: “Are isolated human genes and the comparison of their sequences patentable things?”

Until now, the answer had been “Yes”. But Robert Sweet, the presiding judge, disagreed, at least as far as the BRCA genes are concerned. After weighing up Myriad’s arguments, he ruled: “It is concluded that DNA’s existence in an ‘isolated’ form alters neither this fundamental quality of DNA as it exists in the body nor the information it encodes. Therefore, the patents at issue directed to ‘isolated DNA’ containing sequences found in nature are unsustainable as a matter of law and are deemed unpatentable subject matter.” Mr Sweet reasoned that DNA represents the physical embodiment of biological information, and that such biological information is a natural phenomenon.

Genome Web:

The ACLU’s and PUBPAT’s lawsuit against Myriad Genetics and the University of Utah Research Foundation, which hold the patents on the BRCA genes, as well as the U.S. Patent and Trademark Office (USPTO), charged that the challenged patents are illegal and restrict both scientific research and patients’ access to medical care, and that patents on human genes violate the First Amendment and patent law because genes are “products of nature.”

The specific patents that the ACLU had challenged are on the BRCA1 and BRCA2 genes. Mutations along the BRCA1 and 2 genes are responsible for most cases of hereditary breast and ovarian cancers. The patents granted to Myriad give the company the exclusive right to perform diagnostic tests on the BRCA1 and BRCA2 genes.

William L. Warren, partner at Sutherland Asbill & Brennan, believes this is a “poor decision that may have negative short-term implications for financing in the biotechnology sector, and hence the development of new diagnostics and therapeutics, until it is overturned by the U.S. Court of Appeals for the Federal Circuit in the next one to two years. Certainly, the sequencing of genes and disease-associated mutations for use in developing diagnostic probes and assays provides useful nonnaturally occurring subject matter that should qualify for patentability under the statute.

“While native genes in the body are originally products of nature, isolating portions of the DNA in order to perform a diagnosis transforms the DNA structurally and functionally into patentable subject matter,” he continues. “The isolated DNA has been markedly changed to become a useful product, even though it carries some of the same information as the native gene.

“Whether through the progress of scientific knowledge and techniques the isolation of such DNA fragments becomes routine or obvious is a separate question, which was not at issue in this case.”

Megan Carpentier at The Washington Independent

Ronald Bailey at Reason:

GenomeWeb quotes ACLU attorney Chris Hansen as saying:

“Today’s ruling is a victory for the free flow of ideas in scientific research. The human genome, like the structure of blood, air or water, was discovered, not created. There is an endless amount of information on genes that begs for further discovery, and gene patents put up unacceptable barriers to the free exchange of ideas.”

Hansen is making the argument that gene patents have created an anti-commons that is impeding important research. But is that so? I looked into the issue three years ago and could find little empirical support for the …

… concern that the over-proliferation of patents, instead of encouraging innovation, is stifling it. This argument achieved prominence in an influential 1998 article published in Science by two University of Michigan law professors, Michael A. Heller and Rebecca S. Eisenberg. Heller and Eisenberg worried that the privatization of biomedical research “promises to spur private investment but risks creating a tragedy of the anticommons through a proliferation of fragmented and overlapping intellectual property rights.”

By “anticommons,” they meant a situation in which the existence of a large number of intellectual property rights applicable to a single good or service unduly retards or even prevents its provision. The blockage to innovation would occur because of high transaction costs, the conflicting goals of various intellectual property owners, and cognitive biases in which owners overvalue their own patents, undervalue others’ patents, and reject reasonable offers.

As evidence for a biomedical anticommons, analysts regularly cite the high profile case of “probably the most hated diagnostics company,” Myriad Genetics.

As evidence against the existence of a research anti-commons, I cited a number of studies by the National Academy of Sciences and I further noted that …

… in 2006, Nature Biotechnology published a review (free registration required) of the academic literature on the existence of a research anticommons. The review concluded that “among academic biomedical researchers in the United States, only one percent report having had to delay a project and none having abandoned a project as a result of others’ patents, suggesting that neither anticommons nor restrictions on access were seriously limiting academic research.” Worryingly, the review noted there was evidence that secrecy was growing among academic researchers. However, patent issues do not seem to be fueling this secrecy. One study suggested that increased academic research secrecy arises chiefly from concerns about securing scientific priority (scientific competition) and the high cost and effort involved in sharing scientific materials and data.

In 2007, the American Association for the Advancement of Science (AAAS) released a report, International Intellectual Property Experiences: A Report of Four Countries, which surveyed thousands of scientists in the U.S., Germany, the U.K. and Japan to assess their experiences in acquiring, using, or creating intellectual property. The AAAS study found “very little evidence of an ‘anticommons problem.'” As Stephen Hansen, the director of the AAAS study, noted in a press release, “All four studies suggest that intellectual property rights had little negative impact on the practice of science.”

Perhaps there is newer and better evidence for a research anti-commons. I will look into it again and report back.

Daniel McCarthy at The American Conservative:

Biotech businesses and their scientists say the decision will stifle research, destroy incentives for product development, and grow government by leaving federally supported universities as the only institutions willing to undertake further genetic studies. None of this rings true. No doubt holding legal monopoly over a part of a human being is more lucrative for any firm than having to compete with other companies in developing biotechnology, but it is not necessarily best for patients. Other industries do just fine in terms of innovation, and much better in terms of cost control, without being able to patent their consumers.

I think this paragraph from the New York Times’ story gets at the nub of the matter:

[The company] sells a test costing more than $3,000 that looks for mutations in the two genes to determine if a woman is at a high risk of getting breast cancer and ovarian cancer. Plaintiffs in the case had said Myriad’s monopoly on the test, conferred by the gene patents, kept prices high and prevented women from getting a confirmatory test from another laboratory.

Considering the amounts of money at stake in the principle, we’ll be hearing much more about this in months to come.

Josh Rosenau at Science Blogs:

This does not invalidate patents on organisms with modified genes or genomes, nor does it invalidate the act of modifying a gene in order to insert it into an organism. This does not, by my reading, set up Monsanto’s genetically modified Roundup Ready crops to lose patent protection, though it may free up competitors to develop similar genes, and may give farmers an easier way to protect themselves against a claim when Monsanto asserts patent violations because of cross-pollination.

The court was asked to consider the chilling effect on research produced by patents for naturally occurring genes. Fortunately, the decision seems to have avoided that line of argument, as it opens a massive can of worms. In general, I’m inclined to oppose patents and copyright laws that restrict research, artistic development, medical care, or other humanitarian services. On the other hand, I don’t think that’s a call judges ought to be making. I’d rather see the laws themselves fixed when such chilling effects are seen. This judge’s ruling fired a shot across the bow of lawmakers about the abuses of genetic patents, and one hopes lawmakers will listen.

Given the sweeping victory on a summary judgment motion, the ACLU is understandably elated. “We are extremely gratified by this groundbreaking decision,” said Sandra Park, staff attorney with the ACLU Women’s Rights Project. “This is the beginning of the end to patents that restrict women’s access to their own genetic information and interfere with their medical care.” We can hope so. The appeals are inevitable, and are headed toward a notably pro-corporate and anti-woman Supreme Court, so there’s no guarantee that this ruling will hold up, but it’s a good first step.

As John Ball, executive vice president of the American Society for Clinical Pathology put it: “It’s good for patients and patient care, it’s good for science and scientists. It really opens up things.”

Katherine Harmon at Scientific American

Ashby Jones at WSJ Law Blog:

Peter Meldrum, Myriad’s chief executive, said the company will appeal. “I don’t believe that the final outcome of this litigation will have a material impact on Myriad’s operations,” he said. “We have 23 patents relating to BRCA genes, and this litigation only involves seven of those 23 patents.”


Filed under Health Care, Science, The Constitution

Criminals Everywhere Break Out The Purell

Smriti Rao at Discover:

If you thought that fingerprints or DNA fragments were the only bits of forensic evidence that could pin you to a scene of a crime, then think again. Researchers at the University of Colorado, Boulder have found preliminary evidence suggesting that you can be identified from the unique mix of bacteria that lives on you.

Each person, they say, is a teeming petri dish of bacteria, but the composition varies from person to person. Every place a person goes and each thing he touches is smudged with his unique “microbial fingerprint.” The bacterial mixes are so specific to individuals that researchers found that they could pair up individual computer keyboards with their owners–just by matching the bacteria found on the keyboard to the bacteria found on the person’s fingertips. Describing their findings in the journal Proceedings of the National Academy of Sciences, the scientists write that if this bacterial fingerprint technique is refined, it could one day help in forensic investigations.

Alla Katsnelson at MIT Technology Review:

The approach, when developed more fully, could potentially provide information where existing forensic techniques fall short, says Martin Blaser, a professor of medicine and microbiology at New York University. “When you just swab the skin, you get at least 100 times more microbial DNA than human DNA,” he says, so less material could give investigators a stronger signal.

Blaser also notes that fingerprints can’t be accurately read if they’re smudged, while a smudged print could still contain enough microbes to analyze. “The microbiome is us–it’s just another form of fingerprint, just like genomic DNA is us,” says Blaser, who wrote an accompanying commentary to the study, both published in The Proceedings of the National Academy of Sciences.

So far, though, a lot of questions remain about how accurate the technique can become. “We did these studies as a proof of concept,” says Fierer. “Now we need to do the hard work.”

For one thing, it is unclear whether an individual’s microbial signature could be retrieved if another person has touched the object being sampled. Another open question is just how stable an individual’s microbiome truly is. Antibiotics, for example, change an individual’s bacterial profile, although nobody knows for how long. A key step to developing the needed level of confidence, says Fierer, will be to expand the database of microbial communities found on individuals’ hands. Being able to compare a profile to a large number of other profiles will provide a baseline for extracting the truly individual elements.

However, says Relman, “the very reason that makes it more complex gives it all kinds of value that DNA will never have.” For example, he says, a person’s microbiota can reveal not just his or her identity–it can also give clues as to what that individual tends to eat, for example, or where he or she works or lives, so researchers could determine how the types of microbes carried on the body are dependent on such factors.

“I think this is the beginning of the process,” Relman says. “But we are going to need a lot more sources of the variation before we can still see the individual shining through.”

Ed Yong at Science Blogs:

Fierer swabbed the keys of three computer keyboards as well as the fingers of their owners. By sequencing the DNA of the bacteria he found, he showed that the communities on keys and fingertips are a close match. They could be used to tell the three people apart, even though two of them share the same office.

All of these samples were taken an hour or so after the owners had last typed on their keys, but Fierer found that leftover bacteria can stand longer tests of time. After swabbing the skin of two volunteers, he found that two weeks later, the bacteria were still informative and the communities were unchanged, even if the swabs were kept in open containers at room temperature.

So far, that’s not particularly impressive. Three is a vanishingly small number if we’re seriously talking about applications in forensics. So for his next trick, Fierer sequenced bacteria from nine computer mice. He compared them to sequences from bacteria on the mouse owners’ hands and to a database of similar sequences from 270 people who had never touched the devices. In every case, the mouse microbes were significantly more similar to the communities on their owners’ hands than to those from the other 278 people.
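The comparison Yong describes, assigning each mouse to the most similar hand community, is essentially nearest-neighbour classification over community profiles. Here is a minimal sketch of that idea, with made-up taxa abundances and a simple Bray-Curtis dissimilarity; the study itself worked from 16S rRNA sequence data with more sophisticated, phylogeny-aware distance metrics, so treat this only as an illustration of the matching logic:

```python
# Toy sketch of community matching (not the study's actual pipeline):
# represent each sample as a vector of relative abundances for a few
# bacterial taxa, then assign the "mouse" swab to the closest "hand".

def bray_curtis(a, b):
    """Bray-Curtis dissimilarity between two abundance vectors (0 = identical)."""
    num = sum(abs(x - y) for x, y in zip(a, b))
    den = sum(x + y for x, y in zip(a, b))
    return num / den if den else 0.0

# Hypothetical abundance profiles (columns are arbitrary taxa).
hands = {
    "owner_a": [0.50, 0.30, 0.15, 0.05],
    "owner_b": [0.10, 0.20, 0.40, 0.30],
    "owner_c": [0.25, 0.25, 0.25, 0.25],
}
mouse = [0.45, 0.35, 0.12, 0.08]  # community swabbed from one mouse

# Nearest neighbour: the hand whose community is least dissimilar.
best_match = min(hands, key=lambda k: bray_curtis(mouse, hands[k]))
print(best_match)  # -> owner_a
```

The forensic caveats quoted below still apply: "closest" is a relative judgment, not the categorical "match" the word implies in a courtroom.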

Even after this promising result, Fierer is very measured about the prospects for this technique. Like other widely used forensic tools, identification by skin bacteria would need to undergo a lot of development and refinement before it could actually be used in investigations. This is just a ‘proof-of-principle’, an inkling that this technique is worth more research.

Would it work on a wide variety of surfaces, on objects that aren’t touched as often as keyboards or mice, or on those that are touched by different parts of the skin? Could certain bacteria give away their owner’s identity more than others? Would a database of hand bacteria be useful? And how would you deal with objects that had been touched by several people?

If this work pans out, Fierer suggests that skin bacteria could be used to provide independent confirmation for other lines of evidence such as DNA or fingerprint analyses. After all, the bacteria in and on your body outnumber your cells by a factor of 10, and their genes outnumber yours by a factor of 100. It’s possible that the genes of these residents might be more useful in establishing our identities than our own genes.

But Foran is much more sceptical. He says that Fierer uses the word “match” throughout his paper but the word has a very specific meaning in forensics. “[A match] means that there are absolutely no incongruities between the data produced from a questioned sample and those from a known sample,” he explains. That’s very different to saying that a bacterial sample is more similar to those from person A than person B. “Perhaps it can be said it seems more likely person A touched an object than did person B, however we definitely cannot say person A did touch the object.”

Karen Hopkin at Scientific American

Clay Dillow at Popular Science:

As it turns out, even the most obsessive-compulsive among us carry about 150 species of bacteria around on our hands, and those bacteria in turn carry a genome unique to that person. Those bacteria could potentially become a damning forensic tool at crime scenes, allowing investigators to gather DNA information unique to a perpetrator even without recovering any of that person’s actual DNA.

But aspiring villains need not worry about being bacterially identified anytime soon. As is, the process is only 70-90 percent accurate, a margin of error too wide for even the most kangaroo courts. There are still a lot of questions to be answered as well: if more than one person has touched a piece of evidence, will the microbial profile be compromised? Is the microbiome stable enough to be used as an identifier (since, for instance, taking antibiotics can alter one’s bacterial profile)? Can criminals intentionally alter their bacterial profiles to throw investigators off the trail?

Lindsay Beyerstein

Matthew Yglesias:

I certainly hope more testing is done, because this seems like a technology that comes fully loaded with the possibility of serious mathematical errors. For example, suppose I find a dead body, a murder weapon, and on the weapon traces of bacterial DNA. Then I pluck a random person off the streets of Washington DC and the bacteria on his finger matches according to a test that’s 90 percent reliable.

A lot of people are going to read that as indicating that there’s a 90 percent chance that the random person is a murderer.

In reality, there are about 600,000 people in Washington DC and if you submitted them all to a 90 percent accurate bacterial DNA analysis, you’d wind up with 60,000 false positives. And since there’s only one gunman, the odds that your random person is the killer aren’t 9 in 10, they’re 1 in 60,000—the guy is almost certainly innocent. And improving the test to 99 percent accuracy doesn’t help much. But people make this mistake all the time, whether it comes to “data mining” to find terrorists or giving mammograms to young women with no risk factors. Even tests that are fairly reliable in a statistical sense need to be used very carefully.
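Yglesias's base-rate arithmetic is easy to verify. A quick sketch using his numbers (600,000 residents, one gunman, and a test read as having a 10 percent false-positive rate with, generously, no false negatives):

```python
# Base-rate check on the scenario above.
population = 600_000
guilty = 1
false_positive_rate = 0.10

# Expected number of innocent people the test would flag:
false_positives = (population - guilty) * false_positive_rate

# Probability that a randomly selected positive is actually the gunman:
p_guilty_given_match = guilty / (guilty + false_positives)

print(round(false_positives))   # -> 60000 innocent matches, roughly
print(p_guilty_given_match)     # roughly 1 in 60,000
```

Even at 99 percent accuracy the same arithmetic leaves about 6,000 false positives per true positive, which is the point: a test's accuracy means little without the prior.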


Filed under Crime, Science

Corot-7b, A Friendly Place For You And Me

Rocky Planet

Image: European Southern Observatory via the AP

Eliza Strickland at Discover Magazine:

Astronomers have conclusive evidence that a planet spotted in a star system 500 light years away is rocky and solid, just like Earth. Scientists have long figured that if life begins on a planet, it needs a solid surface to rest on, so finding one elsewhere is a big deal. “We basically live on a rock ourselves,” said co-discoverer Artie Hatzes…. “It’s as close to something like the Earth that we’ve found so far. It’s just a little too close to its sun” [AP].

Paul Sutherland in Scientific American:

It is less than twice the diameter of our own planet and has a similar density.

But there the resemblance ends. Corot-7b lies so close to its own sun that its surface must be like a vision of hell. Temperatures soar above 2,000 degrees on its day side and sink to minus 200 degrees on the night side.

It means the surface could be covered with molten lava or boiling oceans and it certainly could not hold any form of life as we know it.

Corot-7b is 23 times closer to its parent star than the inner planet Mercury is to our own sun, and it zips around it at 750,000 km per hour, making its year – the time it takes to complete one orbit – just over 20 hours long.

Didier Queloz, leader of the European team that made the observations from the European Southern Observatory in Chile, said: “This is science at its thrilling and amazing best. We did everything we could to learn what the object discovered by the CoRoT satellite looks like and we found a unique system.”
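The quoted figures can be checked in a few lines. Assuming Mercury's mean orbital radius of about 57.9 million km, an orbit 23 times smaller traversed at 750,000 km/h works out to a "year" of roughly 21 hours, consistent with CoRoT-7b's published orbital period of about 20.4 hours:

```python
import math

# Consistency check on the quoted figures. Assumed value: Mercury's
# mean orbital radius, ~57.9 million km.
mercury_orbit_km = 57.9e6
corot7b_orbit_km = mercury_orbit_km / 23   # "23 times closer"
speed_km_per_h = 750_000.0                 # quoted orbital speed

# Treat the orbit as circular: period = circumference / speed.
circumference_km = 2 * math.pi * corot7b_orbit_km
period_hours = circumference_km / speed_km_per_h

print(round(period_hours, 1))  # ~21 hours, i.e. a "year" under one day
```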

Ron Cowen at Science News:

“This is truly the first rocky world beyond the solar system, and we know there’s more to come,” comments theorist Sara Seager of MIT. “This is a day we’ve been waiting for, for a long time.” The new find, along with about a dozen other known heavyweight versions of Earth, may help astronomers understand how terrestrial planets form around other stars and how common they are. Although planet hunters ultimately hope to find Earthlike planets in life-friendly orbits, for now scientists are happy to settle for discovering even uninhabitable analogs of Earth.
In February, Queloz’s team announced it had found the planet — the smallest extrasolar planet yet known, with a diameter of about 1.8 times the diameter of Earth. The scientists were able to pin down the size of the planet because the orb periodically passes in front of its parent star as seen from Earth, blocking a tiny amount of starlight. These passages, or transits, were recorded by the COROT satellite (SN: 2/28/09, p. 9).

But at that time, the scientists had only a rough estimate of the mass of the planet, ranging between five and 11 times the mass of Earth. Since then, the team has more accurately measured the tug of the tiny planet on its parent star using the HARPS spectrograph in La Silla, Chile. The team now finds that the planet has a mass about five times that of Earth.

The new mass measurement, in combination with the diameter, reveals that the planet has an average density of about 5.6 grams per cubic centimeter, almost identical to that of Earth.

“This most likely means that it has to be a rocky planet,” comments Alan Boss of the Carnegie Institution for Science in Washington, D.C. “This is a big deal.”
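The density figure follows directly from the mass and radius. A back-of-the-envelope check, using the quoted five Earth masses and a radius of about 1.7 Earth radii (the refined published value; the excerpts round the diameter up to "about 1.8 times"):

```python
import math

# Assumed reference values for Earth.
EARTH_MASS_KG = 5.972e24
EARTH_RADIUS_M = 6.371e6

mass = 5.0 * EARTH_MASS_KG      # ~5 Earth masses, as quoted
radius = 1.7 * EARTH_RADIUS_M   # ~1.7 Earth radii (refined value)

# Density = mass / volume of a sphere, converted to g/cm^3.
volume = (4 / 3) * math.pi * radius**3
density_g_cm3 = (mass / volume) / 1000.0

print(round(density_g_cm3, 1))  # -> 5.6, matching the quoted figure
```

For comparison, Earth's mean density is about 5.5 g/cm³, which is why the result points so strongly to a rocky composition.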

Clara Moskowitz at LiveScience:

Finding a rocky planet with an Earth-like density takes us one step closer to discovering another planet similar to our own. A twin-Earth beyond the solar system could offer the best chance of finding life elsewhere in the universe, scientists say.

Although CoRoT-7b’s lack of liquid water means it’s unlikely to host life, the planet’s discovery is still a promising sign. CoRoT and NASA’s Kepler space observatory are both up there as you read this, seeking such a discovery.

“We are searching for any kind of exoplanets,” Moutou said. “We’re trying not to be biased by our own system, but of course we would be very interested to find a planet where life could develop. This one is not habitable, but some future planets of this kind could allow life to develop. This is our long-time goal, to find an analog to Earth.”

The research team, led by Didier Queloz of the Geneva Observatory in Switzerland, described the results in a paper to be published in the Oct. 22 issue of the journal Astronomy and Astrophysics.

Hadley Leggett at Wired News


Filed under Science

See Spot Run, See Spot Fetch, See Spot On Jupiter, That Dog Must Be Dead

A couple of weeks ago, Australian amateur astronomer Anthony Wesley found what’s pictured above: a new spot on Jupiter.

His report:

When I came back to the scope at about 12:40am I noticed a dark spot rotating into view in Jupiter’s south polar region and started to get curious. When first seen close to the limb (and in poor conditions) it was only a vaguely dark spot, which I thought was likely to be just a normal dark polar storm. However, as it rotated further into view and the conditions improved, I suddenly realised that it wasn’t just dark, it was black in all channels, meaning it was truly a black spot.

My next thought was that it must be either a dark moon (like Callisto) or a moon shadow, but it was in the wrong place and the wrong size. Also I’d noticed it was moving too slowly to be a moon or shadow. As far as I could see it was rotating in sync with a nearby white oval storm that I was very familiar with – this could only mean that the black feature was at the cloud level and not a projected shadow from a moon. I started to get excited.

It took another 15 minutes to really believe that I was seeing something new – I’d imaged that exact region only 2 days earlier and checking back to that image showed no sign of any anomalous black spot.

Now I was caught between a rock and a hard place – I wanted to keep imaging but also I was aware of the importance of alerting others to this possible new event. Could it actually be an impact mark on Jupiter? I had no real idea, and the odds on that happening were so small as to be laughable, but I was really struggling to see any other possibility given the location of the mark. If it really was an impact mark then I had to start telling people, and quickly. In the end I imaged for another 30 minutes only because the conditions were slowly improving and each capture was giving a slightly better image than the last.

Eventually I stopped imaging and went up to the house to start emailing people, with this image above processed as quick and dirty as possible just to have something to show.

More images will come along from me and many other people in the next few days.

Scientific American:

He continued to photograph the planet, then returned to his house to e-mail others about what he finally concluded had to be a scar from a recent impact. Leigh Fletcher, a postdoctoral astronomer at NASA’s Jet Propulsion Laboratory in Pasadena, Calif., happened to be part of a team with observing time on the NASA Infrared Telescope atop Mauna Kea in Hawaii when he and his colleagues got word of Wesley’s unusual sighting. What is more, Fletcher’s group was already going to train the three-meter telescope on Jupiter to observe its storms, working remotely from Pasadena.

“You can imagine the scene: We’re all extremely excited, crammed around the computer screen to see those first images from the telescope facility,” Fletcher says. “And there it was: an extremely bright feature on the southern hemisphere of Jupiter.” It looked just like a medium-size impact from Shoemaker-Levy 9, Fletcher says, confirming Wesley’s assessment.

Paul Kalas, an astronomer at the University of California, Berkeley, also had observing time booked on Mauna Kea, at the 10-meter Keck 2 telescope, when he and his colleagues read about Wesley’s discovery on the blog of Kalas’s Berkeley colleague Franck Marchis. Kalas and his team were using Keck to look for the exoplanet Fomalhaut b, some 25 light-years away. (Fomalhaut b was one of the first extrasolar planets whose orbit was confirmed with photographic evidence in November.)

Kalas and his colleagues took about 90 minutes away from their Fomalhaut observations to check on Wesley’s purported find. What they saw was an unmistakable spot glowing brightly in the infrared where something had punched through Jupiter’s outer layers. “We agreed with the amateur astronomer that this was an impact event,” Kalas says.

Lonnie Morgan at Wired interviews Wesley:

GD: How does it feel to be the first to see the Jupiter event? What were your first “out loud” words?

It still feels unbelievable. In hindsight the part of this that I feel happiest about was the fast reporting to alert others. The observing part came about naturally, I was just in the right place at the right time. But, the reporting required action on my part and a bit of a gamble that I was doing the right thing. At the time I wasn’t sure if I was just going to end up looking stupid but I was willing to take the chance.

First words out loud? Almost certainly not printable! I remember looking at this dark spot in the live feed from the camera and slowly coming around from puzzlement to disbelief and then to tremendous excitement, all in the space of about 15 minutes.

Universe Today:

“It’s significant that in each of the last 3 years amateurs have made the initial discoveries of new features in the Jovian atmosphere, the colour change of the previously white Oval BA to red in 2007 by Chris Go of the Philippines, the formation of another (smaller) red spot last year by myself, and then this event in 2009. In all cases the amateur work was followed up with imagery from Hubble and other major telescopes.”

This new impact occurred exactly 15 years after the first impacts by the comet Shoemaker-Levy 9, and as the celebrations of the Apollo 11 moon landing are taking place.

Glenn Orton, a scientist at JPL, and his team of astronomers kicked into gear early Monday morning and haven’t stopped tracking the planet. They are downloading data now and are working to get additional observing time on this and other telescopes.

“We were extremely lucky to be seeing Jupiter at exactly the right time, the right hour, the right side of Jupiter to witness the event. We couldn’t have planned it better,” he said.

Andrew Moseman in Popular Mechanics:

So far, researchers have narrowed down the direction and size of the impactor, but one big question remains: Just what kind of object was it? To answer this question, the researchers again will turn to Shoemaker-Levy 9, says Glenn Orton, a senior research scientist at JPL. The 1994 impacts left traces of hydrogen cyanide, carbon monoxide and other chemicals in Jupiter’s atmosphere. Now Orton is using spectroscopic analysis, which determines the composition of Jupiter’s atmosphere by the light it gives off, to see if the 2009 impactor left a similar signature, which would indicate it was probably a comet as well. Orton says the best explanation is a comet or other icy body–nearly all the objects roaming that area of the solar system are icy–and finding oxygen traces would just about cinch the case. But making these observations through our own oxygen-rich atmosphere is a difficult task, he says, so the answer may take some time.

While questions remain, astronomers are carrying on their sleepless race to record all the information they can from Jupiter’s new scar before it’s too late. “We have no idea how long it will last,” de Pater says. Orton says he’s run himself and his grad students ragged as he guided observations in Chile by phone from his California office until 1 am and then started again at 7 am, when Jupiter’s scar became visible to the telescopes in Hawaii. “I’m exhausted,” he says. Others are joining the party too. Late last week, the Hubble Space Telescope’s brand-new camera, which space shuttle astronauts installed in May, snapped the sharpest visible-light photo yet taken of the impact site. It’ll take weeks and months to unravel all the data, Gurwell says, but for now astronomers’ priority is to gather all the evidence they possibly can. “We’ll shoot first and ask questions later,” he says.

Sky and Telescope Magazine:

And on the night of August 3-4, Jupiter will cover a 6th-magnitude star — a once-in-a-lifetime occurrence for most locations on Earth. The event occurs roughly from 22:53 Universal Time on August 3rd to 1:00 UT August 4th, varying slightly depending on your location. It happens at prime time for stargazers in Europe and Africa, and is also readily visible in the Middle East and Brazil.

[…] But by amazing good luck, 45 Cap will be masquerading as a fifth moon during a particularly eventful period for Jupiter’s Galilean moons. So weather permitting, every telescope owner on Earth will have a chance to see many fascinating events during the days before and after the occultation.

All of these events should be visible through small telescopes if the atmosphere is very steady, but extra aperture and high magnification will improve the views greatly. Try our Javascript utility or our PDF table for details of the moons’ interactions with Jupiter. And download a PDF article from the July 2009 issue of Sky & Telescope to see the details of the “mutual events” between the moons.

Aug. 1-2, 10:19 p.m. to 1:48 a.m. PDT, 1:19 to 4:48 a.m. EDT: The first sequence is visible across the Americas. It starts with just three moons visible (not counting 45 Cap), because Ganymede is behind Jupiter. During this period Europa and its shadow pass over Jupiter, Ganymede reappears, and Io disappears.

Aug. 2-3, 9:49 p.m. to 12:25 a.m. PDT, 12:49 to 3:25 a.m. EDT: Again visible across the Americas, a normal transit of Io and its shadow across Jupiter — fairly common but spectacular nonetheless.

Aug. 2, 15:55 to 16:32 UT: Best in eastern Asia and Australia. Io casts its shadow on Europa from 15:55 to 16:02 UT, dimming it 50%. Then Io passes in front of Europa from 16:26 to 16:36 UT, blocking 74% of its surface. Meanwhile, 45 Cap is just 30″ to their south.

Aug. 3-4, 22:53 to 4:39 UT, Jupiter rise to 12:39 a.m. EDT: This is the big event. The occultation of 45 Cap is best observed from Europe and Africa, but as you can see in the diagram above, Europa disappears into Jupiter’s shadow while the star is hidden, and Io disappears shortly after the star reappears. By 4:39 UT (12:39 a.m. EDT or 9:39 PDT), all three have reappeared from behind Jupiter, but they’re still spectacularly close to Jupiter and each other.

Aug. 4, 21:47 to 22:57 UT: In most of Europe and Africa, and much of Asia, Ganymede casts its shadow on Europa, diminishing its light 94%, from 21:47 to 21:59 UT. Then Ganymede clips the edge of Europa from 22:48 to 22:57 UT.

Aug. 4-5, 23:18 to 1:51 UT: In Europe and Africa, a normal transit of Io and its shadow.

Aug. 5, 14:38 to 23:05 UT: A very long sequence of events, best viewed in the Middle East and eastern Africa, but with parts visible across Eurasia and Oceania. A transit of Ganymede and its shadow overlapping a transit of Europa and its shadow, with Io disappearing and reappearing toward the end of the sequence.
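For readers converting the listed Universal Times to other clocks, a small helper using Python's standard zoneinfo database (Python 3.9+), with the occultation's 22:53 UT start on August 3rd as the example:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # standard library in Python 3.9+

# One of the listed event times: the occultation begins 22:53 UT, Aug 3.
start_ut = datetime(2009, 8, 3, 22, 53, tzinfo=timezone.utc)

# Convert to North American zones (EDT is UTC-4, PDT is UTC-7 in August).
for zone in ("America/New_York", "America/Los_Angeles"):
    local = start_ut.astimezone(ZoneInfo(zone))
    print(zone, local.strftime("%H:%M on %b %d"))
```

The East Coast result, 18:53 EDT, explains why the listing bills the occultation as prime time for Europe and Africa: Jupiter has not yet risen for most American observers at that hour.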


Filed under Science