Monday, August 30, 2010
Seems like an easy one to answer: an asteroid around six miles wide slammed into the Yucatan Peninsula. Continent-wide firestorms, planet-enshrouding dust cloud, massive plant death, toxic ozone, carbon monoxide poisoning ... and that's it: one resounding mass extinction all wrapped up in a pretty, hellish package and explained by a big hole in southeastern Mexico, right?
Explore the asteroid scenario by watching "Last Day of the Dinosaurs" on Sunday at 9 p.m. ET/PT on the Discovery Channel.
Well, the more scientists look, the more complicated the answer becomes. For starters, there were a series of truly enormous volcanic eruptions in what is now western India around the same time. Collectively, the Deccan Traps spewed enough noxious gas that some say it was the cause of the extinction.
Then there's a weird crater-looking structure right next door to the Deccan Traps. If that turns out to be from an asteroid impact, it would be the largest crater found on Earth. Ever. And just this week, a study in the journal Geology reported there may have been yet another impact, in the Ukraine.
For those keeping track at home, that's three possible asteroid impacts and one long-lived supervolcano all clustered around roughly the same moment in geologic history.
On its own, the newly discovered Boltysh crater in central Ukraine isn't much to write home about -- measuring just 24 kilometers (15 miles) in diameter, it isn't enough to ruin dinosaurs' day throughout Europe, let alone around the globe.
What it does do is make the case that Earth was hit by an asteroid shower around 65 million years ago, rather than by a single space rock. On average, an impact big enough to leave a crater the size of Boltysh should occur only once every million years or so. But according to David Jolley of King's College in Aberdeen, U.K., and a team of researchers, Boltysh slammed into the planet less than 5,000 years before the giant Chicxulub impact in Mexico.
The odds of the two rocks being part of a binary system are small -- if they were, they should have hit simultaneously. But the impacts are still suspiciously close together in time, suggesting that perhaps some great collision in the solar system sent a scatter-shot of space rocks headed our way.
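The "suspiciously close" claim can be put in rough numbers. A minimal back-of-envelope sketch, assuming Boltysh-sized impacts arrive independently as a Poisson process at the one-per-million-year rate quoted above:

```python
import math

# Back-of-envelope check on the "asteroid shower" argument: if
# Boltysh-sized impacts arrive independently at roughly one per
# million years, how likely is a second, unrelated impact within
# 5,000 years of Chicxulub?
rate = 1 / 1_000_000   # impacts per year
window = 5_000         # years separating Boltysh and Chicxulub

p = 1 - math.exp(-rate * window)  # Poisson: P(at least one impact)
print(f"chance of a coincidental second impact: {p:.2%}")
```

At roughly half a percent, the coincidence is unlikely but far from impossible, which is why a common cause -- a single disrupted parent body scattering debris -- is an appealing alternative.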
Meanwhile, we still have to contend with the Shiva structure, a 500 km-wide gouge in the planet off India that could be the scar left by an asteroid several times bigger than the one that caused the Chicxulub crater.
But some crucial evidence is still missing. For one thing, such a large impact should have thrown out huge quantities of superhot ejecta. Near the Mexican crater, the pile of melted material is several feet thick. But nothing like that has been found in India.
Then there are the Deccan Traps. For several hundred thousand years, western India was home to volcanic eruptions far larger than anything that has occurred in human history. These epic floods of molten rock are thought to have spanned 200,000 square miles (the size of California, New Mexico, Arizona, and Colorado combined), and in some places they are close to two miles thick.
One theory suggests the Deccan lavas spewed immense amounts of poisonous sulfur dioxide gas into the atmosphere. The gas would have had the dual effect of choking air-breathing animals and preventing sunlight from reaching Earth's surface. Any dinosaurs that didn't succumb directly to the gas would surely have perished in the long winter that followed.
It's even been suggested that the Chicxulub impact caused the Deccan Traps to erupt, by way of a huge earthquake that rippled through the planet.
Such ideas may sound a little ludicrous, but that doesn't mean they're wrong. Life is, generally speaking, very resilient, and dinosaurs were no exception. It would have taken a huge cataclysm -- maybe even several in quick succession -- to end their over 150-million-year reign on Earth.
* Amateur rocket builders plan to test their human space launch vehicle as early as this week from the Baltic Sea.
* A crash dummy will ride aboard the rocket during the test flights.
* The group figures they can develop the system for about the price of a family car ($64,000).
Copenhagen Suborbitals' suborbital rocket may have its first test flight this week.
Kristian von Bengtson and Peter Madsen of Denmark don't have a death wish, or even a mid-life crisis. Yet they're the first to admit that their efforts to put themselves in space on home-built rockets certainly raise the question.
"This project might be daring or extreme but we're never going to be foolish. We're not going to say something like, 'This might work, let's try it,' but obviously we set our own standards," said von Bengtson, 36, an architect who specializes in human spacecraft design.
"We're not going to kill ourselves," he told Discovery News.
Von Bengtson has worked with U.S. government space contractors before, an experience he enjoyed but one that left him unfulfilled. "At NASA, you work on interesting projects, but they're not used for 20 or 30 years, or they may get canceled," he said.
Working on his own launch system was mostly a dream until early 2008 when he met Madsen, a fellow space enthusiast and rocket expert who shared the dream. They formed a nonprofit organization called Copenhagen Suborbitals and, with corporate donations and volunteer labor, started designing and building their own human space launch vehicle.
A major milestone is set for as early as this week when the men launch for the first time their suborbital rocket, a solid-propellant, liquid oxidizer affair called HEAT-1X Tycho Brahe (named after a 16th century Danish discoverer of a supernova).
Von Bengtson says he won't be disappointed if the rocket fails. "There's a good chance of that," he said. "Basically we're just going to go out there and push the button and build a new rocket -- no matter what happens."
Additional test flights will follow over the next three to 10 years, von Bengtson says, before he and Madsen, 39, take turns strapping themselves inside the one-person capsule and blasting off for a suborbital ride to space.
A crash dummy will be the occupant for the rocket's debut flight, which will take place from a platform in the Baltic Sea. (In testament to the duo's technical expertise, they also built the mini-submarine that will haul the platform out into the ocean.)
The main advantage of launching at sea, says von Bengtson, is the lack of government regulations.
"It's very difficult for us who are building rockets to find places to launch them. If you go into international waters, you only have to cooperate with those few authorities that are left," he said.
Those regulatory loopholes don't apply to companies and groups operating from the United States, added John Gedmark, executive director of the Washington, D.C.-based Commercial Spaceflight Federation, an advocacy group.
"The agreement between nations is that nations are fully responsible, fully liable for any and all damages for a rocket launching under their flag, no matter where they're launched from anywhere in the world," Gedmark told Discovery News.
"Obviously, the requirement that you have to meet in terms of safety and collateral damage to the uninvolved public is a lot easier to meet if you're out in the middle of nowhere," he said.
The project apparently has the blessing of the Danish government, which is lending Copenhagen Suborbitals a National Guard ship and crew to try to retrieve their rocket and capsule after the flight, according to von Bengtson.
The exact launch date will depend on the weather, which is notoriously fickle this time of year. Von Bengtson and Madsen plan to remotely launch the rocket from aboard a ship about two miles away.
While the ultimate goal of the project is to fly themselves, and eventually others, into space, von Bengtson said he's happy just to be working on a rapid-development human space flight program. "Being able to do this every day is what I want," he said. "It's more of a process rather than an actual goal."
Freshly transformed from a tadpole, a young Microhyla nepenthicola frog faces off with Abraham Lincoln on a U.S. penny.
An adult male of the new species is about the size of a pea. Their size makes them hard to spot, but fortunately for scientists, these mini-frogs have a loud croak.
"You often get tiny frogs making quite a noise," said Robin Moore, a herpetologist who was not involved in the discovery.
Moore is heading a Conservation International project to rediscover a hundred species of "lost" amphibians that have been declared extinct within the past decade.
Das, the co-discoverer of the new Bornean micro-frog, will join Moore in Indonesia in September to search for the Sambas stream toad, last seen in the 1950s.
There may not be quite as many bombs falling from the sky. But don’t let that fool you. The United States has dramatically escalated its air war over Afghanistan.
Spy plane flights have nearly tripled in the past year; supply drops, too. There are even more planes buzzing over the heads of troops caught in firefights, according to statistics provided to Danger Room by the Air Force.
The increased numbers show how the American military has retooled its most potent technological advantage — dominance of the skies — for the Afghanistan campaign. But so far, at least, the boost in air power doesn’t seem to have shifted the war’s momentum back to the American-led coalition.
An influx of Reaper drones and executive-jets-turned-spy-planes allowed U.S. forces to fly 9,700 surveillance sorties over Afghanistan in the first seven months of 2010. Last year, American planes conducted 3,645 of the flights during a similar period.
The United States may not have reconnaissance flights “blotting out the sun,” as one senior defense official predicted. But there are many more than before — mostly providing overhead footage of the battlefield to troops on the ground. In addition, more than 30 million pounds of gear was airdropped from January through July 2010 — compared to 11 million through July 2009.
Also, 398,000 people were transported into, out of and inside the Afghan theater. In the first seven months of 2009, that number was 212,000.
It wasn’t long ago that Defense Secretary Robert Gates was in an all-but-open war with the U.S. Air Force, when the service didn’t seem to be moving fast enough to meet commanders’ needs in Iraq and Afghanistan. The Air Force had fewer than a dozen unmanned air patrols over the war zones in 2007. Today, there are more than 40. The battles between Gates and the air generals have largely subsided.
“Today, unlike the contests of the past, our joint forces go into combat with more information about the threat they face, provided in near real-time. And they get that information … from air and space,” e-mails retired Lt. Gen. David Deptula, who stepped down this month as the Air Force’s intelligence chief. “Today, unlike the past, our joint task forces are able to operate with much smaller numbers, across great distances and inhospitable terrain because they can be sustained over the long-haul … by air.”
When Gen. Stanley McChrystal imposed strict new guidelines on airstrikes, the number of attacks from the sky immediately dropped by half. Many pilots weren't sure exactly why they were flying. Some troops complained that they couldn't fight the Taliban effectively.
But during the last few months of McChrystal’s tenure, those airstrike numbers had stabilized, and began to move ahead of their mid-2009 lows. In June and July of 2010, the Air Force flew 5,500 “close air support” sorties — missions over ground troops locked in active combat. On 900 of those flights, the planes fired weapons. The previous year, those figures were 4,600 and 809, respectively.
The unanswered question, of course, is whether all this extra air power will have much of an effect. Right now, NATO has more troops going into more places and encountering more resistance than at any point in the war.
Violence is way up. And it’s not clear if additional eyes in the sky or warplanes buzzing overhead will alter that lethal equation.
Large changes in the sun's energy output may drive unexpectedly dramatic fluctuations in Earth's outer atmosphere.
Results of a new study link a recent, temporary shrinking of a high atmospheric layer with a sharp drop in the sun's ultraviolet radiation levels.
The research, led by scientists at the National Center for Atmospheric Research (NCAR) in Boulder, Colo., and the University of Colorado at Boulder (CU), indicates that the sun's magnetic cycle, which produces differing numbers of sunspots over an approximately 11-year cycle, may vary more than previously thought.
The results, published in the American Geophysical Union journal Geophysical Research Letters, are funded by NASA and by the National Science Foundation (NSF), NCAR's sponsor.
"This research makes a compelling case for the need to study the coupled sun-Earth system," says Farzad Kamalabadi, program director in NSF's Division of Atmospheric and Geospace Sciences, "and to illustrate the importance of solar influences on our terrestrial environment with both fundamental scientific implications and societal consequences."
The findings may have implications for orbiting satellites, as well as for the International Space Station.
"Our work demonstrates that the solar cycle not only varies on the typical 11-year time scale, but also can vary from one solar minimum to another," says lead author Stanley Solomon, a scientist at NCAR's High Altitude Observatory. "All solar minima are not equal."
The fact that the layer in the upper atmosphere known as the thermosphere is shrunken and less dense means that satellites can more easily maintain their orbits.
But it also indicates that space debris and other objects that pose hazards may persist longer in the thermosphere.
"With lower thermospheric density, our satellites will have a longer life in orbit," says CU professor Thomas Woods, a co-author.
"This is good news for those satellites that are actually operating, but it is also bad because of the thousands of non-operating objects remaining in space that could potentially have collisions with our working satellites."
The sun's energy output declined to unusually low levels from 2007 to 2009, a particularly prolonged solar minimum during which there were virtually no sunspots or solar storms.
During that same period of low solar activity, Earth's thermosphere shrank more than at any time in the 43-year era of space exploration.
The thermosphere, which ranges in altitude from about 55 to more than 300 miles (90 to 500 kilometers), is a rarified layer of gas at the edge of space where the sun's radiation first makes contact with Earth's atmosphere.
It typically cools and becomes less dense during low solar activity.
But the magnitude of the density change during the recent solar minimum was about 30 percent greater than low solar activity alone would have been expected to produce.
The study team used computer modeling to analyze two possible factors implicated in the mystery of the shrinking thermosphere.
They simulated both the impacts of solar output and the role of carbon dioxide, a potent greenhouse gas that, according to past estimates, is reducing the density of the outer atmosphere by about 2 percent to 5 percent per decade.
Their work built on several recent studies.
Earlier this year, a team of scientists from the Naval Research Laboratory and George Mason University, measuring changes in satellite drag, estimated that the density of the thermosphere declined in 2007-09 to about 30 percent less than during the previous solar minimum in 1996.
Other studies by scientists at the University of Southern California and CU, using measurements from sub-orbital rocket flights and space-based instruments, have estimated that levels of extreme-ultraviolet radiation (a class of photons with extremely short wavelengths) dropped about 15 percent during the same period.
However, scientists remained uncertain whether the decline in extreme-ultraviolet radiation would be sufficient to have such a dramatic impact on the thermosphere, even when combined with the effects of carbon dioxide.
To answer this question, Solomon and his colleagues turned to an NCAR computer tool, known as the Thermosphere-Ionosphere-Electrodynamics General Circulation Model.
They used the model to simulate how the sun's output during 1996 and 2008 would affect the temperature and density of the thermosphere.
They also created two simulations of thermospheric conditions in 2008-one with a level that approximated actual carbon dioxide emissions and one with a fixed, lower level.
The results showed the thermosphere cooling in 2008 by 41 kelvins (about 74 degrees Fahrenheit) compared to 1996, with just 2 kelvins of that attributable to the carbon dioxide increase.
The results also showed the thermosphere's density decreasing by 31 percent, with just 3 percent attributable to carbon dioxide, and closely approximated the 30 percent reduction in density indicated by measurements of satellite drag.
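The attribution behind those figures amounts to simple arithmetic, assuming the solar and carbon dioxide contributions add independently (a simplification of what the full simulation actually does):

```python
# Partitioning the modeled 2008-vs-1996 thermosphere changes into solar
# and carbon dioxide contributions, using the figures quoted above and
# assuming the two effects simply add.
total_cooling_K, co2_cooling_K = 41, 2
total_density_drop, co2_density_drop = 0.31, 0.03

solar_cooling_K = total_cooling_K - co2_cooling_K
solar_density_drop = total_density_drop - co2_density_drop

print(f"solar share of cooling: {solar_cooling_K / total_cooling_K:.0%}")              # 95%
print(f"solar share of density drop: {solar_density_drop / total_density_drop:.0%}")   # 90%
```

By either measure, the quiet sun accounts for roughly nine-tenths of the change, which is why the authors point to low extreme-ultraviolet output rather than greenhouse gases as the primary cause.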
"It is now clear that the record low temperature and density were primarily caused by unusually low levels of solar radiation at the extreme-ultraviolet level," Solomon says.
Woods says the research indicates that the sun could be going through a period of relatively low activity, similar to periods in the early 19th and 20th centuries.
This could mean that solar output may remain at a low level for the near future.
"If it is indeed similar to certain patterns in the past, then we expect to have low solar cycles for the next 10 to 30 years," Woods says.
In a bid to unlock longstanding mysteries of the Sun, including the impacts on Earth of its 11-year cycle, an international team of scientists has successfully probed a distant star. By monitoring the star's sound waves, the team has observed a magnetic cycle analogous to the Sun's solar cycle.
The study, conducted by scientists at the National Center for Atmospheric Research (NCAR) and colleagues in France and Spain, is being published in Science.
The scientists studied a star known as HD49933, which is located 100 light years from Earth in the constellation Monoceros, the Unicorn, just east of Orion. The team examined the star's acoustic fluctuations, using a technique called "stellar seismology." They detected the signature of "starspots," areas of intense magnetic activity on the surface that are similar to sunspots. While scientists have previously observed these magnetic cycles in other stars, this was the first time they have discovered such a cycle using stellar seismology.
"Essentially, the star is ringing like a bell," says NCAR scientist Travis Metcalfe, a co-author of the new study. "As it moves through its starspot cycle, the tone and volume of the ringing changes in a very specific pattern, moving to higher tones with lower volume at the peak of its magnetic cycle."
"We've discovered a magnetic activity cycle in this star, similar to what we see with the Sun," says co-author and NCAR scientist Savita Mathur. "This technique of listening to the stars will allow us to examine potentially hundreds of stars."
The team hopes to assess the potential for other stars in our galaxy to host planets, including some perhaps capable of sustaining life.
"Understanding the activity of stars harboring planets is necessary because magnetic conditions on the star's surface could influence the habitable zone, where life could develop," says CEA-Saclay scientist Rafael Garcia, the study's lead author.
Studying many stars with stellar seismology could help scientists better understand how magnetic activity cycles can differ from star to star, as well as the processes behind such cycles. The work could especially shed light on the magnetic processes that go on within the Sun, furthering our understanding of its influence on Earth's climate. It may also lead to better predictions of the solar cycle and resulting geomagnetic storms that can cause major disruption to power grids and communication networks.
In addition to NCAR, the team's scientists are from France's Center for Nuclear Studies of Saclay (CEA-Saclay), Paris/Meudon Observatory (OPM), the University of Toulouse, and Spain's Institute of Astrophysics of the Canaries (IAC). The research was funded by the National Science Foundation, which is NCAR's sponsor, the CEA, the French Stellar Physics National Research Plan, and the Spanish National Research Plan.
The scientists examined 187 days of data captured by the international Convection Rotation and Planetary Transits (CoRoT) space mission.
Launched on December 27, 2006, CoRoT was developed and is operated by the French National Center for Space Studies (CNES) with contributions from Austria, Belgium, Brazil, Germany, Spain, and the European Space Agency. CoRoT is equipped with a 27-centimeter (11-inch) diameter telescope and a 4-CCD (charge-coupled device) camera sensitive to tiny variations in the light intensity from stars.
The study authors found that HD49933 is much bigger and hotter than the Sun, and its magnetic cycle is much shorter. Whereas past surveys of stars have found cycles similar to the 11-year cycle of the Sun, this star has a cycle of less than a year.
This short cycle is important to scientists because it may enable them to observe an entire cycle more quickly, thereby gleaning more information about magnetic patterns than if they could only observe part of a longer cycle.
The scientists plan to expand their observations by using other stars observed by CoRoT as well as data from NASA's Kepler mission, launched in March 2009. Kepler is searching for Earth-sized planets, and the mission will provide continuous data over three to five years from hundreds of stars that could be hosting planets.
"If it turns out that a short magnetic cycle is common in stars, then we will potentially observe a large number of full cycles during Kepler's mission," says Metcalfe. "The more stars and complete magnetic cycles we have to observe, the more we can place the Sun into context and explore the impacts of magnetic activity on possible planets hosted by these stars."
The team has spent the past six months exploring the structure and dynamics of HD49933 and characterizing its size. They will next verify their observations using ground-based telescopes to confirm the magnetic activity of the star. When the star reemerges from behind the Sun in September, they hope to measure the full length of the cycle. The CoRoT mission was designed to collect up to 150 days of continuous data at a time, which was not enough to determine the exact length of the star's cycle.
A Stanford mechanical engineer is using the biology of a gecko's sticky foot to create a robot that climbs. In the same way the small reptile can scale a wall of slick glass, the Stickybot can climb smooth surfaces with feet modeled on the intricate design of gecko toes.
Mark Cutkosky, the lead designer of the Stickybot, a professor of mechanical engineering and co-director of the Center for Design Research, has been collaborating with scientists around the nation for the last five years to build climbing robots.
After designing a robot that could conquer rough vertical surfaces such as brick walls and concrete, Cutkosky moved on to smooth surfaces such as glass and metal. He turned to the gecko for ideas.
"Unless you use suction cups, which are kind of slow and inefficient, the other solution out there is to use dry adhesion, which is the technique the gecko uses," Cutkosky said.
Wonders of the gecko toe
The toe of a gecko's foot contains hundreds of flap-like ridges called lamellae. On each ridge are millions of hairs called setae, each 10 times thinner than a human hair. Under a microscope, you can see that each hair divides into smaller strands called spatulae, making it look like a bundle of split ends. These split ends are so tiny (a few hundred nanometers) that they interact with the molecules of the climbing surface.
The interaction between the molecules of gecko toe hair and the wall is a molecular attraction called van der Waals force. A gecko can hang and support its whole weight on one toe by placing it on the glass and then pulling it back. The toes only stick when pulled in one direction -- they act as a kind of one-way adhesive, Cutkosky said.
"It's very different from Scotch tape or duct tape, where, if you press it on, you then have to peel it off. You can lightly brush a directional adhesive against the surface and then pull in a certain direction, and it sticks itself. But if you pull in a different direction, it comes right off without any effort," he said.
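A toy way to capture the one-way behavior Cutkosky describes is to make the available adhesion depend on the direction of the pull. This sketch uses an assumed cosine force law and made-up units, not measured gecko or Stickybot data:

```python
import math

# Toy model of directional ("one-way") adhesion: full grip when loaded
# along the pad's preferred direction, tapering to nothing when pulled
# backward. The cosine law and the units are illustrative assumptions.
def adhesion_force(pull_angle_deg, f_max=1.0):
    """Available adhesion for a pull at the given angle (degrees)
    from the preferred loading direction."""
    along = math.cos(math.radians(pull_angle_deg))
    return f_max * max(0.0, along)  # no resistance pulled the wrong way

for angle in (0, 60, 90, 180):
    print(f"pull at {angle:3d} deg -> adhesion {adhesion_force(angle):.2f}")
```

The key property is the asymmetry: reversing the pull costs nothing, which is what makes attachment and detachment nearly effortless for a climbing robot.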
Robots with gecko feet
One-way adhesive is important for climbing because it requires little effort to attach and detach a robot's foot.
"Other adhesives are sort of like walking around with chewing gum on your feet: You have to press it into the surface and then you have to work to pull it off. But with directional adhesion, it's almost like you can sort of hook and unhook yourself from the surface," Cutkosky said.
After the breakthrough insight that direction matters, Cutkosky and his team began asking how to build artificial materials for robots that create the same effect. They came up with a rubber-like material with tiny polymer hairs made from a micro-scale mold.
The designers attach a layer of adhesive cut to the shape of Stickybot's four feet, which are about the size of a child's hand. As it steadily moves up the wall, the robot peels and sticks its feet to the surface with ease, resembling a mechanical lizard.
The newest versions of the adhesive, developed in 2009, have a two-layer system, similar to the gecko's lamellae and setae. The "hairs" are even smaller than the ones on the first version -- about 20 micrometers wide, which is five times thinner than a human hair. These versions support higher loads and allow Stickybot to climb surfaces such as wood paneling, painted metal and glass.
The material is strong and reusable, and leaves behind no residue or damage. Robots that scale vertical walls could be useful for accessing dangerous or hard to reach places.
The team's new project involves scaling up the material for humans. A technology called Z-Man, which would allow humans to climb with gecko adhesive, is in the works.
Cutkosky and his team are also working on a Stickybot successor: one that turns in the middle of a climb. Because the adhesive only sticks in one direction, turning requires rotating the foot.
"The new Stickybot that we're working on right now has rotating ankles, which is also what geckos have," he said.
"Next time you see a gecko upside down or walking down a wall head first, look carefully at the back feet, they'll be turned around backward. They have to be; otherwise they'll fall."
Cutkosky has collaborated with scientists from Lewis & Clark College, the University of California-Berkeley, the University of Pennsylvania, Carnegie Mellon University and a robot-building company called Boston Dynamics. His project is funded by the National Science Foundation and the Defense Advanced Research Projects Agency.
New View of Tectonic Plates: Computer Modeling of Earth's Mantle Flow, Plate Motions, and Fault Zones
ScienceDaily (Aug. 30, 2010) — Computational scientists and geophysicists at the University of Texas at Austin and the California Institute of Technology (Caltech) have developed new computer algorithms that for the first time allow for the simultaneous modeling of Earth's mantle flow, large-scale tectonic plate motions, and the behavior of individual fault zones, to produce an unprecedented view of plate tectonics and the forces that drive it.
A paper describing the whole-earth model and its underlying algorithms will be published in the August 27 issue of the journal Science and also featured on the cover.
The work "illustrates the interplay between making important advances in science and pushing the envelope of computational science," says Michael Gurnis, the John E. and Hazel S. Smits Professor of Geophysics, director of the Caltech Seismological Laboratory, and a coauthor of the Science paper.
To create the new model, computational scientists at Texas's Institute for Computational Engineering and Sciences (ICES) -- a team that included Omar Ghattas, the John A. and Katherine G. Jackson Chair in Computational Geosciences and professor of geological sciences and mechanical engineering, and research associates Georg Stadler and Carsten Burstedde -- pushed the envelope of a computational technique known as Adaptive Mesh Refinement (AMR).
Partial differential equations such as those describing mantle flow are solved by subdividing the region of interest (such as the mantle) into a computational grid. Ordinarily, the resolution is kept the same throughout the grid. However, many problems feature small-scale dynamics that are found only in limited regions. "AMR methods adaptively create finer resolution only where it's needed," explains Ghattas. "This leads to huge reductions in the number of grid points, making possible simulations that were previously out of reach."
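The core AMR idea Ghattas describes can be sketched in a few lines: start coarse and subdivide only the cells where the field varies sharply. This 1D toy, with an invented refinement criterion, is only an illustration; the team's actual parallel algorithms are far more involved:

```python
import math

# Toy 1D adaptive mesh refinement: start from a coarse grid and
# subdivide only the cells where the field changes rapidly, instead
# of refining everywhere.
def refine(cells, f, threshold, max_levels=5):
    for _ in range(max_levels):
        new_cells = []
        refined_any = False
        for left, right in cells:
            if abs(f(right) - f(left)) > threshold:  # steep cell: split it
                mid = 0.5 * (left + right)
                new_cells += [(left, mid), (mid, right)]
                refined_any = True
            else:
                new_cells.append((left, right))
        cells = new_cells
        if not refined_any:
            break
    return cells

# A field with a sharp feature near x = 0.5, loosely standing in for a
# narrow fault zone embedded in a smooth large-scale flow
sharp = lambda x: math.tanh(50 * (x - 0.5))

coarse = [(i / 10, (i + 1) / 10) for i in range(10)]
mesh = refine(coarse, sharp, threshold=0.1)
print(f"{len(coarse)} coarse cells -> {len(mesh)} adaptive cells")
# Fine cells cluster around x = 0.5; the rest of the domain stays coarse
```

The payoff is that resolution is spent only where the physics demands it, which is how a kilometer-scale fault zone can coexist in one simulation with a planet-sized mantle.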
"The complexity of managing adaptivity among thousands of processors, however, has meant that current AMR algorithms have not scaled well on modern petascale supercomputers," he adds. Petascale computers are capable of one million billion operations per second. To overcome this long-standing problem, the group developed new algorithms that, Burstedde says, "allows for adaptivity in a way that scales to the hundreds of thousands of processor cores of the largest supercomputers available today."
With the new algorithms, the scientists were able to simulate global mantle flow and how it manifests as plate tectonics and the motion of individual faults. According to Stadler, the AMR algorithms reduced the size of the simulations by a factor of 5,000, permitting them to fit on fewer than 10,000 processors and run overnight on the Ranger supercomputer at the National Science Foundation (NSF)-supported Texas Advanced Computing Center.
A key to the model was the incorporation of data on a multitude of scales. "Many natural processes display a multitude of phenomena on a wide range of scales, from small to large," Gurnis explains. For example, at the largest scale -- that of the whole earth -- the movement of the surface tectonic plates is a manifestation of a giant heat engine, driven by the convection of the mantle below. The boundaries between the plates, however, are composed of many hundreds to thousands of individual faults, which together constitute active fault zones. "The individual fault zones play a critical role in how the whole planet works," he says, "and if you can't simulate the fault zones, you can't simulate plate movement" -- and, in turn, you can't simulate the dynamics of the whole planet.
In the new model, the researchers were able to resolve the largest fault zones, creating a mesh with a resolution of about one kilometer near the plate boundaries. Included in the simulation were seismological data as well as data pertaining to the temperature of the rocks, their density, and their viscosity -- or how strong or weak the rocks are, which affects how easily they deform. That deformation is nonlinear -- with simple changes producing unexpected and complex effects.
"Normally, when you hit a baseball with a bat, the properties of the bat don't change -- it won't turn to Silly Putty. In the earth, the properties do change, which creates an exciting computational problem," says Gurnis. "If the system is too nonlinear, the earth becomes too mushy; if it's not nonlinear enough, plates won't move. We need to hit the 'sweet spot.'"
After crunching through the data for 100,000 hours of processing time per run, the model returned an estimate of the motion of both large tectonic plates and smaller microplates -- including their speed and direction. The results were remarkably close to observed plate movements.
In fact, the investigators discovered that anomalous rapid motion of microplates emerged from the global simulations. "In the western Pacific," Gurnis says, "we have some of the most rapid tectonic motions seen anywhere on Earth, in a process called 'trench rollback.' For the first time, we found that these small-scale tectonic motions emerged from the global models, opening a new frontier in geophysics."
One surprising result from the model relates to the energy released from plates in earthquake zones. "It had been thought that the majority of energy associated with plate tectonics is released when plates bend, but it turns out that's much less important than previously thought," Gurnis says. "Instead, we found that much of the energy dissipation occurs in the earth's deep interior. We never saw this when we looked on smaller scales."
Paul the Octopus -- the eight-legged oracle who made international headlines with his amazingly accurate football forecasting -- isn't the only talented cephalopod in the sea. The Indonesian mimic octopus, which can impersonate flatfish and sea snakes to dupe potential predators, may well give Paul a run for his money when it comes to "see-worthy" skills.
By creatively configuring its limbs, adopting characteristic undulating movements, and displaying conspicuous color patterns, the mimic octopus (Thaumoctopus mimicus) can successfully pass for a number of different creatures that share its habitat, several of which are toxic. Now, scientists from the California Academy of Sciences and Conservation International Indonesia have conducted DNA analysis to determine how this remarkable adaptation evolved. The research is reported in the September 2010 issue of the Biological Journal of the Linnean Society.
Like its relatives, the mimic octopus is very capable of hiding from hungry predators by blending into its background. However, this talented species often chooses to make itself more conspicuous to predators by mimicking flatfish, lionfish or sea snakes that display high-contrast color patterns. This daredevil maneuver is thought to help T. mimicus confuse or scare away predators. Because it is relatively rare for an animal to develop such a high-risk, conspicuous defense strategy, the authors of the recent study hoped to gain insight into the evolutionary forces that fueled this behavior by conducting genetic research on the mimic octopus and its relatives. They focused on the mimic's ability to flatten its arms and head and swim along the sea floor like a flatfish, while simultaneously exhibiting a bold, brown-and-white color pattern.
Using DNA sequences to construct a genealogy for the mimic octopus and more than 35 of its relatives, the researchers ascertained the order in which the T. mimicus lineage evolved several key traits: 1) First, T. mimicus ancestors evolved the use of bold, brown-and-white color displays, employed as a secondary "shock" defense to surprise predators if camouflage fails. 2) Next, they developed the flatfish swimming technique and the long arms that facilitate this motion. 3) Finally, T. mimicus began displaying bold color patterns while impersonating a flatfish -- both during daily forays away from its den and at rest. In evolutionary terms, this last step represents an extremely risky shift in defense strategy.
"The close relatives of T. mimicus use drab colors and camouflage quite successfully to hide from predators," says Dr. Christine Huffard, Marine Conservation Priorities Advisor at Conservation International Indonesia. "Why does T. mimicus instead draw attention to itself, and repeatedly abandon the camouflage abilities it inherited from its ancestors in favor of a bold new pattern? Somehow, through natural selection, being conspicuous has allowed T. mimicus to survive and reproduce more successfully than some of its less showy ancestors, and eventually evolve into its own lineage."
The researchers suggest several possibilities for why this bold coloration would be advantageous. It may fool predators into thinking the octopus is a toxic flatfish (such as the peacock sole, Pardachirus pavoninus, or the zebra sole, Zebrias spp.); it may obscure the octopus's outline against the black-and-white sandy bottoms; or it may serve as an honest warning sign of the mimic's unpalatable flesh.
"While T. mimicus's imitation of flatfish is far from perfect, it may be 'good enough' to fool predators where it lives, in the world's center of marine biodiversity," says Dr. Healy Hamilton, Director of the Center of Applied Biodiversity Informatics at the California Academy of Sciences. "These octopuses can change their color pattern to look similar to -- but not exactly like -- numerous toxic and non-toxic flatfishes in their area. In the time it takes a predator to do a double-take, the octopus may be able to get away."
Because the mimic octopus was not described by scientists until 1998, much remains unknown about it. Future research will focus on observing T. mimicus in the wild in Indonesia, so that scientists can assess the possible reasons for its bold coloration and better understand the costs and benefits of this strategy.
"This study reminds us that evolution does not have an endgame, but is a continuous process," says Huffard. "These octopuses will continue evolving as long as we can protect them and their habitat from threats like trawling, land reclamation, and run-off."
These findings were published in Huffard CL, Saarman N, Hamilton H, and Simison WB. 2010. The evolution of conspicuous facultative mimicry in octopus: an example of secondary adaptation? Biological Journal of the Linnean Society 101: 68-77.
viernes, 27 de agosto de 2010
* Ötzi died a violent death in the spring and was carried up to a high pass five months later for a ceremonial burial, according to a new study.
* A map of the body and artifacts indicates that the Iceman was buried on a stone platform.
* Over time, the body and the objects moved in semi-melted ice until they were found 19 years ago.
This facial reconstruction shows what Ötzi the Iceman may have looked like when he was alive.
Ötzi the Iceman, the 5,300-year-old mummy found in the Italian Alps, may have been ceremonially buried, according to a study which mapped the items found near the frozen corpse.
According to research published in the journal Antiquity, the melting glacier in the Ötztal Alps, where the well preserved mummy was found in 1991, was not the site of a murder, but a solemn burial ceremony.
"Our reconstruction suggests that Ötzi died at a lower altitude in early-mid spring, and was then buried up on the mountain with his goods in late summer or early autumn," Luca Bondioli of the National Museum of Prehistory and Ethnology in Rome told the News.
Pollen found in the mummy's gut indicated that Ötzi died in April, while pollen within the ice suggested the corpse was deposited there in August or September. The theory would explain this mismatch.
The hypothesis is the latest in a long series of speculations about the Iceman.
Prior to the discovery in 2001 of an arrowhead in the mummy's left shoulder, researchers believed Ötzi died at about age 45 from cold and hunger, or was the victim of a ritual sacrifice.
Further investigations established that the mortally wounded man froze at a high altitude with his tools and personal items, succumbing to the arrowhead that hit his left subclavian artery. He was escaping from a tribal clash, researchers theorized.
"Interestingly, such reconstruction has never been supported by the publication of a detailed map of the items found over the Iceman site," Bondioli said.
Bondioli and colleagues investigated the geomorphology of the site where Ötzi was found, a shallow depression between two low ridges.
Some five meters (16.4 feet) away, they noticed a small rock platform. The platform, which they believe was Ötzi's burial site, was connected by a natural fissure to the depression where the mummy was found 19 years ago.
The researchers used this information to create the first comprehensive distribution map of the body and other artifacts, which they believe are funerary items rather than mountain equipment.
Among the 466 items found at the site were a dagger, a backpack frame, an ax, a quiver, a birch-bark container, a grass mat, a bow and a pelt cap.
The researchers plotted the distribution of the items on a digital model of the Iceman site. The model suggested that over time, Ötzi and the objects moved in semi-melted ice and slumped into the lower depression through the fissure.
"The bow and ax were captured, and the backpack frame stopped against a protruding rock," Bondioli said.
According to the researchers, the corpse would have turned prone, with the feet towards the north and the arms hanging down, like a body floating in dense fluid. It then stopped against the boulder where it was found in 1991.
"Here the left arm, trapped against the boulder, was slowly twisted to a peculiar angle, following the down slope flow traction of the body. A few lighter and hollow items like the quiver floated away to the northern edge of the basin," Bondioli said.
But Frank Rühli, head of the Swiss Mummy Project at the University of Zurich and one of the experts who investigated the mummy, argues it is unlikely that Ötzi's unnatural posture, with the left arm bent across the chest, was the result of a post-mortem event.
What may end up being the crown jewel of the International Space Station -- and at a cost of $2 billion its most expensive bauble -- landed at the Kennedy Space Center in Florida on Thursday to begin preparations for launch on the last shuttle mission in February.
On hand to witness the arrival of the Alpha Magnetic Spectrometer, a 15,000-pound particle detector designed to probe high-energy cosmic rays for clues about how the universe formed, was project leader Samuel Ting, a Nobel laureate and MIT physicist who turns out to have a pretty sharp head for diplomacy and politics.
Organizing and coordinating a team of scientists, engineers and technicians from 56 agencies, research organizations and universities in 16 countries -- including China AND Taiwan -- was tough enough. But when AMS was booted off the shuttle in the wake of the 2003 Columbia accident, Ting launched a lobbying campaign to win funding and support for an extra shuttle flight to get AMS to the station. Then, he got NASA to agree to delay the launch so that AMS could be outfitted with a longer-lasting magnet.
AMS had been designed with a superconducting magnet that had enough liquid helium coolant to last about three years. With the shuttle flights ending, however, there would be no way to return AMS to Earth for servicing and possible return to the station. Another factor in swapping out the magnet was the growing, near-unanimous support by Congress and the Obama administration to continue funding the space station beyond 2015.
“The end of last year we learned that space station will go to 2020 and maybe even go to 2028, so after three years, if we don’t change (the magnet) AMS would become a museum piece,” Ting told reporters gathered at the shuttle’s landing strip just before an Air Force C-5 cargo plane touched down with AMS aboard.
“I’m very pleased to be here,” Ting said. “It took us almost 15 years to get here.”
Joining Ting and about a dozen other scientists and program managers at Kennedy Space Center for AMS's arrival were the seven astronauts assigned to fly with the detector aboard shuttle Endeavour in February. AMS, which was assembled and tested at CERN in Geneva, is expected to be attached to the outside of the station's truss using the shuttle and station's robot arms.
“Sam, I give you our guarantee we're not going to break it,” Endeavour commander Mark Kelly told Ting. "It will get installed on the truss and hopefully be working before we depart."
From its vantage point above Earth's atmosphere, AMS is designed to track high-energy cosmic rays and determine the properties of their subatomic particles, a research effort which, if successful, could uncover antimatter or unravel the mystery of dark matter, which is believed to fill 90 percent of our universe.
"At the moment there was a big bang, there must be equal amounts of matter and antimatter," Ting said. "Now antimatter has been found in accelerators. The question is, is there a universe far, far, far away made out of antimatter? That is one example. The second example is we know 90 percent of the matter in the universe we cannot see. We know it exists... This experiment will provide the most sensitive search for the dark matter."
“When you build a new instrument you ask the best physicists to make a judgment, but physicists make a judgment based on existing knowledge. Discovery is to break down existing knowledge, so when you enter into new territory with a precision instrument it’s very difficult to say what you will see,” Ting said.
"The science potential for AMS is just incredible," added NASA AMS program manager Mark Sistilli. "In a very, very real sense, AMS could open up a whole new chapter in exploration of the galaxy and beyond."
This amoeba-shaped depression on Mars, called Orcus Patera, has had planetary scientists scratching their heads for decades. Despite these sharp new images from the European Space Agency’s Mars Express spacecraft, the crater’s origin is a complete mystery.
Orcus Patera, discovered in 1965 by the Mariner 4 spacecraft, is located near Mars’s equator, between the volcanoes Elysium Mons and Olympus Mons. At 236 miles long, it would stretch from New York to Boston on Earth. Its rim rises over a mile above the surrounding plains, and its floor lies 1,300 to 1,900 feet below its surroundings.
But in spite of lying between two volcanoes and its designation as a patera — the name for deep, complex or irregularly shaped volcanic craters — scientists aren’t at all sure that Orcus Patera has a volcanic origin story. It could be a large impact crater that was originally round but later deformed by compressional forces. Or it could have formed after the erosion of aligned impact craters. The most likely explanation is that it was made in an oblique impact, when a small body struck the surface at a very shallow angle, like a rock skipping on a pond.
The new images show that the crater’s rim is criss-crossed by rift-valley-like structures called graben, which are evidence for active tectonic forces in the area. Smaller graben are also visible inside the depression itself, suggesting that several tectonic events have stretched the ground. The depression also shows “wrinkle edges,” which indicate that the ground has been compressed as well as stretched. The dark shapes near the center of the depression were probably formed when dark material dug up by small impacts in the depression was blown around by the wind.
But these features all appeared after Orcus Patera was formed. The oblong crater’s origin is still a mystery.
Images: ESA/DLR/FU Berlin (G. Neukum). More images available on the ESA website
The North American continent is not one thick, rigid slab, but a layer cake of ancient, 3 billion-year-old rock on top of much newer material probably less than 1 billion years old, according to a new study by seismologists at the University of California, Berkeley.
The finding, which is reported in the Aug. 26 issue of Nature, explains inconsistencies arising from new seismic techniques being used to explore the interior of the Earth, and illuminates the mystery of how the Earth's continents formed.
"This is exciting because it is still a mystery how continents grow," said study co-author Barbara Romanowicz, director of the Berkeley Seismological Laboratory and a UC Berkeley professor of earth and planetary science. "We think that most of the North American continent was constructed in the Archean (eon) in several episodes, perhaps as long ago as 3 billion years, though now, with the present regime of plate tectonics, not much new continent is being formed."
The Earth's original continents started forming some 3 billion years ago when the planet was much hotter and convection in the mantle more vigorous, Romanowicz said. The continental rocks rose to the surface -- much like scum floats to the top of boiling jam -- and eventually formed the lithosphere, Earth's hard outer layer. These old floating pieces of the lithosphere, called cratons, apparently stopped growing about 2 billion years ago as the Earth cooled, though within the last 500 million years, and perhaps for as long as 1 billion years, the modern era of plate tectonics has added new margins to the original cratons, slowly expanding the continents.
"Since the Archean, the continents have been broken up in pieces, glued back together and then broken up again, but those pieces of the very old lithosphere -- very old pieces of continents -- have been there for a very long time," she said.
One of those original continents is the North American craton, located mostly in the Canadian part of North America. The study suggests that what continental lithosphere has been added since the original North American craton formed was scraped off of the ocean floor as it plunged beneath the continent, not deposited from below by plumes of hot material welling up through the mantle.
The history of the Earth's oldest continental plates is vague because details of their interiors are hidden from geologists. The top 40 km of the lithosphere is crust that is chemically distinct from the mantle below, and while activities such as mountain building can dredge up deeper material, mountain building is rare in the planet's stable cratons. The deep interior of the North American craton is known only from so-called xenoliths -- rock inclusions in igneous rock -- or xenocrysts such as diamonds that have been delivered to the surface from deep below by volcanoes.
Seismologists, however, have the ability to probe the Earth's interior thanks to seismic waves from earthquakes around the globe, which can be used much like sound waves are used to probe the interior of the human body. Such seismic tomography has established that the bottom of the North American craton is about 250 km deep at its thickest, thinning out toward the margins where new chunks have been added to the continental lithosphere. Below the rigid lithosphere is the softer asthenosphere, on which the continental and oceanic plates ride.
Romanowicz and UC Berkeley postdoctoral fellow Huaiyu Yuan are testing a new technique, seismic azimuthal anisotropy, to look for the boundary between the lithosphere and asthenosphere. The technique takes advantage of the fact that seismic waves travel faster when moving in the same direction that a rock has been stretched than when traveling across the stretch marks. The difference in speed makes it possible to detect layers that have been stretched in different directions.
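The speed difference described here is commonly modeled as a cos(2θ) variation of velocity with propagation azimuth, with the "fast axis" pointing along the stretch direction. The sketch below is an illustrative example with synthetic numbers, not the Berkeley team's actual analysis, showing how a fast-axis direction can be recovered from azimuth-dependent velocities by a simple least-squares fit.

```python
import numpy as np

# Model: v(theta) ~ v0 * (1 + A*cos(2*(theta - phi))), where phi is the
# fast axis -- the direction of past stretching, along which waves are fastest.
# Synthetic values: ~4.5 km/s shear velocity, 2% anisotropy, NE (40 deg) axis.
v0, A, phi = 4.5, 0.02, np.deg2rad(40.0)
theta = np.linspace(0, np.pi, 90)          # propagation azimuths
v = v0 * (1 + A * np.cos(2 * (theta - phi)))

# Expand cos(2(theta-phi)) into cos(2theta) and sin(2theta) terms and fit.
design = np.column_stack([np.ones_like(theta),
                          np.cos(2 * theta),
                          np.sin(2 * theta)])
c0, c1, c2 = np.linalg.lstsq(design, v, rcond=None)[0]
fast_axis = 0.5 * np.arctan2(c2, c1)       # recovers phi
print(np.rad2deg(fast_axis))               # ~40 degrees
```

Layers stretched in different directions show up as depth ranges with different fitted fast-axis angles, which is how the two-layer structure in the craton was detected.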
"As the lithosphere moves over the asthenosphere, the material gets stretched and acquires texture, which indicates the direction in which the plates are moving," she said.
Surprisingly, they found a sharp boundary 150 kilometers below the surface, far too shallow to be the lithosphere-asthenosphere boundary. The scientists believe that the sharp boundary is between two types of lithosphere: the old craton and the younger material that should match the chemical composition of the sea floor. Their interpretation fits with studies of xenoliths and xenocrysts, which indicate that there are two chemically distinct layers within the Archean crust.
Coincidentally, three years ago, researchers using a popular new technique called receiver function studies detected a sharp boundary below the North American craton at a depth of about 120 km. Receiver function studies take advantage of the fact that seismic waves change character -- converting from a P wave to an S wave, for example -- at sharp boundaries.
"We think they are seeing the same layering we are seeing, a sharp boundary within the lithosphere," Romanowicz said.
The stretch marks revealed by azimuthal anisotropy seem to rule out one theory of how the older continents have accrued more lithosphere.
"One hypothesis was that the bottom part was formed by underplating," Romanowicz said. "You would have a big plume of material, an upwelling, that would get stuck under the root. But what we are observing is not consistent with that. The material would spread in all directions and you would see anisotropy that is pointing like spokes in a bicycle."
"We are seeing a very consistent direction across the whole craton. In the top lithospheric layer the fast axis is, on average, aligned northeast-southwest. In the bottom layer it is aligned more north-south. So underplating doesn't work," she said.
If subduction is adding to the continental lithosphere, on the other hand, the north-south strike of the subduction zones on the east and west sides of the North American craton is consistent with the direction Romanowicz and Yuan found.
"I think our paper will stimulate people to look more carefully at distinguishing the ages of the lithosphere as a function of depth," she said. "Any information we can provide that constrains models of continental formation is really useful to the geodynamicists."
The study was supported by a grant from the Earthscope program of the National Science Foundation, and relied on seismic data from Earthscope, the Geological Survey of Canada and the Northern California Earthquake Data Center.
A relatively new type of El Niño, which has its warmest waters in the central-equatorial Pacific Ocean, rather than in the eastern-equatorial Pacific, is becoming more common and progressively stronger, according to a new study by NASA and NOAA. The research may improve our understanding of the relationship between El Niños and climate change, and has potentially significant implications for long-term weather forecasting.
Lead author Tong Lee of NASA's Jet Propulsion Laboratory, Pasadena, Calif., and Michael McPhaden of NOAA's Pacific Marine Environmental Laboratory, Seattle, measured changes in El Niño intensity since 1982. They analyzed NOAA satellite observations of sea surface temperature, checked against and blended with directly-measured ocean temperature data. The strength of each El Niño was gauged by how much its sea surface temperatures deviated from the average. They found the intensity of El Niños in the central Pacific has nearly doubled, with the most intense event occurring in 2009-10.
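Gauging intensity as a deviation from the average amounts to computing a sea surface temperature anomaly: subtract each calendar month's long-term mean from the observed value. The sketch below is a minimal illustration of that anomaly calculation on invented numbers, not the authors' actual NOAA data pipeline.

```python
import numpy as np

# Hypothetical monthly sea surface temperatures (deg C) for one
# central-Pacific grid box over ten years -- illustrative numbers only.
rng = np.random.default_rng(0)
years, months = 10, 12
seasonal_cycle = 27.0 + 1.5 * np.sin(2 * np.pi * np.arange(months) / 12)
sst = seasonal_cycle + rng.normal(0.0, 0.5, size=(years, months))

# Climatology: the long-term mean for each calendar month.
climatology = sst.mean(axis=0)

# Anomaly: how far each month deviates from its climatological mean.
anomaly = sst - climatology

# A simple intensity measure for each year: its peak monthly anomaly.
intensity = anomaly.max(axis=1)
print(intensity)
```

Comparing such peak anomalies across events is how a doubling of central-Pacific El Niño intensity since 1982 could be quantified.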
The scientists say the stronger El Niños help explain a steady rise in central Pacific sea surface temperatures observed over the past few decades in previous studies -- a trend attributed by some to the effects of global warming. While Lee and McPhaden observed a rise in sea surface temperatures during El Niño years, no significant temperature increases were seen in years when ocean conditions were neutral, or when El Niño's cool water counterpart, La Niña, was present.
"Our study concludes the long-term warming trend seen in the central Pacific is primarily due to more intense El Niños, rather than a general rise of background temperatures," said Lee.
"These results suggest climate change may already be affecting El Niño by shifting the center of action from the eastern to the central Pacific," said McPhaden. "El Niño's impact on global weather patterns is different if ocean warming occurs primarily in the central Pacific, instead of the eastern Pacific.
"If the trend we observe continues," McPhaden added, "it could throw a monkey wrench into long-range weather forecasting, which is largely based on our understanding of El Niños from the latter half of the 20th century."
El Niño, Spanish for "the little boy," is the oceanic component of a climate pattern called the El Niño-Southern Oscillation, which appears in the tropical Pacific Ocean on average every three to five years. The most dominant year-to-year fluctuating pattern in Earth's climate system, El Niños have a powerful impact on the ocean and atmosphere, as well as important socioeconomic consequences. They can influence global weather patterns and the occurrence and frequency of hurricanes, droughts and floods; and can even raise or lower global temperatures by as much as 0.2 degrees Celsius (0.4 degrees Fahrenheit).
During a "classic" El Niño episode, the normally strong easterly trade winds in the tropical eastern Pacific weaken. That weakening suppresses the normal upward movement of cold subsurface waters and allows warm surface water from the central Pacific to shift toward the Americas. In these situations, unusually warm surface water occupies much of the tropical Pacific, with the maximum ocean warming remaining in the eastern-equatorial Pacific.
Since the early 1990s, however, scientists have noted a new type of El Niño that has been occurring with greater frequency. Known variously as "central-Pacific El Niño," "warm-pool El Niño," "dateline El Niño" or "El Niño Modoki" (Japanese for "similar but different"), the maximum ocean warming from such El Niños is found in the central-equatorial, rather than eastern, Pacific. Such central Pacific El Niño events were observed in 1991-92, 1994-95, 2002-03, 2004-05 and 2009-10. A recent study found many climate models predict such events will become much more frequent under projected global warming scenarios.
Lee said further research is needed to evaluate the impacts of these increasingly intense El Niños and determine why these changes are occurring. "It is important to know if the increasing intensity and frequency of these central Pacific El Niños are due to natural variations in climate or to climate change caused by human-produced greenhouse gas emissions," he said.
Results of the study were published recently in Geophysical Research Letters.
For more information on El Niño, visit: http://sealevel.jpl.nasa.gov/.
NASA's Kepler spacecraft has discovered the first confirmed planetary system with more than one planet crossing in front of, or transiting, the same star.
The transit signatures of two distinct planets were seen in the data for the sun-like star designated Kepler-9. The planets were named Kepler-9b and 9c. The discovery incorporates seven months of observations of more than 156,000 stars as part of an ongoing search for Earth-sized planets outside our solar system. The findings will be published in the journal Science.
Kepler's ultra-precise camera measures tiny decreases in stars' brightness that occur when a planet transits them. The size of the planet can be derived from these temporary dips.
The distance of the planet from a star can be calculated by measuring the time between successive dips as the planet orbits the star. Small variations in the regularity of these dips can be used to determine the masses of planets and detect other non-transiting planets in the system.
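The two derivations described above are standard transit-method arithmetic: the fractional dip in brightness equals the squared ratio of planetary to stellar radius, and Kepler's third law converts the orbital period into an orbital distance. The sketch below illustrates both with rough Kepler-9b-like numbers (a ~1% dip, a 19-day period, and a sun-like star); the function names are invented for illustration.

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
R_SUN = 6.957e8    # solar radius, m
AU = 1.496e11      # astronomical unit, m

def planet_radius(transit_depth, star_radius=R_SUN):
    """Transit depth is the fractional brightness dip: (Rp / Rs)^2."""
    return star_radius * math.sqrt(transit_depth)

def orbital_distance(period_days, star_mass=M_SUN):
    """Kepler's third law: a^3 = G * M * T^2 / (4 * pi^2)."""
    T = period_days * 86400.0
    return (G * star_mass * T**2 / (4 * math.pi**2)) ** (1.0 / 3.0)

print(planet_radius(0.01) / R_SUN)     # ~0.1 solar radii for a 1% dip
print(orbital_distance(19.0) / AU)     # ~0.14 AU for a 19-day orbit
```

The transit timing variations mentioned next are deviations from this clockwork: mutual gravitational tugs make each transit arrive slightly early or late, and the size of those shifts constrains the planets' masses.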
In June 2010, Kepler mission scientists submitted findings for peer review that identified more than 700 planet candidates in the first 43 days of Kepler data. The data included five additional candidate systems that appear to exhibit more than one transiting planet. The Kepler team recently identified a sixth target exhibiting multiple transits and accumulated enough followup data to confirm this multi-planet system.
"Kepler's high-quality data and round-the-clock coverage of transiting objects enable a whole host of unique measurements to be made of the parent stars and their planetary systems," said Doug Hudgins, the Kepler program scientist at NASA Headquarters in Washington.
Scientists refined the estimates of the masses of the planets using observations from the W.M. Keck Observatory in Hawaii. The observations show Kepler-9b is the larger of the two planets, and both have masses similar to but less than Saturn. Kepler-9b lies closest to the star, with an orbit of about 19 days, while Kepler-9c has an orbit of about 38 days. By observing several transits by each planet over the seven months of data, the time between successive transits could be analyzed.
"This discovery is the first clear detection of significant changes in the intervals from one planetary transit to the next, what we call transit timing variations," said Matthew Holman, a Kepler mission scientist from the Harvard-Smithsonian Center for Astrophysics in Cambridge, Mass. "This is evidence of the gravitational interaction between the two planets as seen by the Kepler spacecraft."
In addition to the two confirmed giant planets, Kepler scientists also have identified what appears to be a third, much smaller transit signature in the observations of Kepler-9. That signature is consistent with the transits of a super-Earth-sized planet about 1.5 times the radius of Earth in a scorching, near-sun 1.6 day-orbit. Additional observations are required to determine whether this signal is indeed a planet or an astronomical phenomenon that mimics the appearance of a transit.
NASA's Ames Research Center in Moffett Field, Calif., manages Kepler's ground system development, mission operations and science data analysis. NASA's Jet Propulsion Laboratory in Pasadena, Calif., managed Kepler mission development. Ball Aerospace and Technologies Corp. in Boulder, Colo., developed the Kepler flight system and supports mission operations with the Laboratory for Atmospheric and Space Physics at the University of Colorado in Boulder. The Space Telescope Science Institute in Baltimore archives, hosts and distributes the Kepler science data.
For graphics, including new animations, visit http://www.nasa.gov/kepler .
More information about exoplanets and NASA's planet-finding program is at http://planetquest.jpl.nasa.gov
A team of UK researchers, funded by the Biotechnology and Biological Sciences Research Council (BBSRC), has publicly released the first sequence coverage of the wheat genome. The release is a step towards a fully annotated genome and makes a significant contribution to efforts to support global food security and to increase the competitiveness of UK farming.
The genome sequences released comprise five read-throughs of a reference variety of wheat and give scientists and breeders access to 95% of all wheat genes. This is among the largest genome projects undertaken, and the rapid public release of the data is expected to accelerate significantly the use of the information by wheat breeding companies.
The team involved Prof Neil Hall and Dr Anthony Hall at the University of Liverpool, Prof Keith Edwards and Dr Gary Barker at the University of Bristol and Prof Mike Bevan at the John Innes Centre, a BBSRC-funded Institute.
Prof Edwards said: "The wheat genome is five times larger than the human genome and presents a huge challenge for scientists. The genome sequences are an important tool for researchers and for plant breeders and by making the data publicly available we are ensuring this publicly funded research has the widest possible impact."
Universities and Science Minister David Willetts said: "This is an outstanding world class contribution by the UK to the global effort to completely map the wheat genome. By using gene sequencing technology developed in the UK we now have the capability to improve the crops of the future by simply accelerating the natural breeding process to select varieties that can thrive in challenging conditions."
The genome data released are in a 'raw' format, comprising sequence reads of the wheat genome in the form of letters representing the genetic 'code'. A complete copy of the genome requires further read-throughs, significant work on annotation and the assembly of the data into chromosomes. Large-scale, rapid sequencing programmes such as this have been made technically feasible by advanced technology genome sequencing platforms, including one based on BBSRC-funded research conducted in the UK in the 1990s. The majority of the sequencing work for this particular project was done using the 454 Life Science platform, developed in the US.
Prof Hall said: "The genome sequence data of this reference variety, Chinese Spring wheat, will now allow us to probe differences between varieties with different characteristics. By understanding the genetic differences between varieties with different traits we can start to develop new types of wheat better able to cope with drought, salinity or able to deliver higher yields. This will help to protect our food security while giving UK plant breeders and farmers a competitive advantage."
The sequence data can be used by scientists and plant breeders to develop new varieties through accelerated conventional breeding or other technologies.
Prof Bevan, a member of the Coordinating Committee of the International Wheat Genome Sequencing Consortium, said: "The sequence coverage will provide an important foundation for international efforts aimed at generating a complete genome sequence of wheat in the next few years."
Prof Doug Kell, BBSRC Chief Executive, said: "Recent short-term price spikes in the wheat markets have shown how vulnerable our food system is to shocks and potential shortages. The best way to support our food security is by using modern research strategies to understand how we can deliver sustainable increases in crop yields, especially in the face of climate change. Genome sequencing of this type is an absolutely crucial strategy, building on previous BBSRC-funded work. Knowledge of these genome sequences will now allow plant breeders to identify the best genetic sequences to use as markers in accelerated breeding programmes."
Dr Jane Rogers, Member of the Coordinating Committee of the International Wheat Genome Sequencing Consortium and Director of BBSRC's The Genome Analysis Centre, said: "The public release of the wheat genome data will be a useful resource for scientists and the plant breeding community and will provide a foundation to identify genetic differences between wheat varieties. In recent years genomics technology has advanced to a point that scientists can now produce sequence data for plants with genomes as large as wheat at a rate unimaginable a few years ago. This is an impressive achievement, notwithstanding the significant hurdles we still face to fully interpret and understand the data."
A key feature of this research has been the quick release of the data into the public domain to allow other scientists and wheat breeding companies to rapidly employ it in practical applications. Richard Summers, Vice Chairman of the British Society of Plant Breeders, said: "The wheat breeding community has been greatly impressed with the collaborative approach taken in this project. The team brought together world class skills in sequencing and wheat genetics to deal with a major barrier in wheat breeding. This is an excellent example of how to achieve technology transfer from research lab through to practical deployment."
BBSRC is a partner in Global Food Security, a multi-agency programme that brings together the food-related research interests of Research Councils, Government Departments and Executive Agencies.
The genome sequences released are five-fold coverage of the reference bread wheat variety Chinese Spring line 42. This gives the researchers access to an estimated 95% of genes in this variety of wheat. The International Wheat Genome Sequencing Consortium, which the UK participates in, is working towards a complete genome sequence of Chinese Spring. The full sequenced genome requires further read-throughs, assembly of the data into chromosomes and significant work to fully annotate the sequence data.
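As a rough illustration of why five read-throughs recover most but not all of a genome, the classic Lander-Waterman model estimates the fraction of bases covered at least once at c-fold redundancy as 1 − e⁻ᶜ. This sketch is a textbook idealization, not a figure from the project:

```python
import math

def fraction_covered(c):
    """Lander-Waterman estimate: expected fraction of a genome covered
    by at least one read at c-fold sequencing redundancy."""
    return 1 - math.exp(-c)

# Five-fold coverage, as in the wheat release, leaves only ~0.7% of
# bases entirely unread under this idealized model:
print(round(fraction_covered(5), 3))  # ≈ 0.993
```

The project's own "95% of genes" figure comes from the consortium's analysis, not from this model, which ignores repetitive sequence and cloning bias -- both severe in a genome as large and repeat-rich as wheat's.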
Thursday, August 26, 2010
* Pairs of closely orbiting binary stars show telltale debris rings that scientists believe are the smashed remains of planets.
* Tidally locked binary stars seem to be poor places to look for planets in Earth-like orbits.
Stars as old as the ones studied have long since shed the dusty debris rings that provided the raw materials for the formation of their systems. That implies something is happening to refresh the debris disk late in the game.
For a planet to be habitable, one parent star seems to be a better bet than two, especially if the stars are physically very close.
So conclude researchers who discovered what they believe to be the rocky remains of planets around three pairs of closely orbiting binary stars.
"It really only takes one rogue planet to have its orbit perturbed, and that can wreak havoc on the whole system," Jeremy Drake, of the Harvard-Smithsonian Center for Astrophysics in Cambridge, Mass., told Discovery News.
Drake and colleagues found warm dust disks around sun-like stars that should have long ago dissipated any clouds of debris left over from their formative years.
"The fact that they're old, mature systems must mean that they've made the stuff relatively recently," said Drake.
Planets have been found in systems with two stars, but those stars' orbits are more stable. The type of binary stars targeted in the new study are located only about 2 million miles from their partners -- about 2 percent of the distance between the sun and Earth.
At that range, powerful magnetic fields and strong stellar winds slow the stars' spin, causing them to move even closer together over time. The resulting gravitational disturbances would impact any planets in tow and eventually cause the orbiting bodies to crash into one another.
Astronomers, who used NASA's Spitzer Space Telescope for their study, are hoping to get a more detailed look at the debris rings with another infrared observatory named Herschel. They also plan to develop computer models to better understand the destabilizing effects of the stars on any planets.
The discovery of these late-stage debris rings doesn't bode well for the existence of habitable planets in these systems, adds Drake.
"You could be lucky. You could be not. It is possible that a planetary system in a well evolved state could, over time, be driven to a destructive phase," he said.
NASA's Marc Kuchner, with the Goddard Space Flight Center in Maryland, thinks planets orbiting farther from the parent stars would have a better shot at survival.
"The region near the stars is very unstable, but if you were to go far enough out I'd expect there'd be a large region of stability," Kuchner told Discovery News.
"On these planets, there would be two suns in the sky. At least one sun would be very luminous, highly variable and magnetically active, which might make the surface of the planet a hostile place. But on a planet with oceans, there could be happy sea creatures not too far beneath the protective water," he said.
The research appears in last week's issue of Astrophysical Journal Letters.
* The world's first known cannibals ate each other to satisfy their nutritional needs.
* The cannibals belonged to the species Homo antecessor, related to both Neanderthals and modern humans.
* Homo antecessor appears to have preyed on competing groups, treating victims like any other meat source.
The world's first known human cannibals ate each other to satisfy their nutritional needs, concludes a new study of the remains of cannibal feasts consumed about one million years ago.
The humans-as-food determination negates other possibilities, such as cannibalism for ritual's sake, or cannibalism due to starvation. In this oldest known case of humans eating humans, other food was available to the diners, but human flesh was just part of their meat mix.
"These practices were conducted by Homo antecessor, who inhabited Europe one million years ago," according to the research team, led by Eudald Carbonell.
Carbonell, a professor at the Universitat Rovira i Virgili, and his colleagues added that Homo antecessor was "the last common ancestor between the African lineage that gave rise to our species, Homo sapiens, and the lineage leading to the European Neanderthals of the Upper Pleistocene."
For the study, published in the latest issue of Current Anthropology, the anthropologists analyzed food remains, stone tools, and other finds associated with Homo antecessor at a cave site called Gran Dolina in the Sierra de Atapuerca near Burgos, Spain. An apparent refuse pile containing tools and meat bones from animals also included multiple butchered bones of Homo antecessor individuals.
"Cut marks, peeling, and percussion marks show that the corpses of these individuals were processed in keeping with the mimetic mode used with other mammal carcasses: skinning, defleshing, dismembering, evisceration, and periosteum (membrane that lines bones) and marrow extraction," according to the researchers.
They added that the butchery techniques identified at the site "show the primordial intention of obtaining meat and marrow and maximally exploiting nutrients. Once consumed, human and nonhuman remains were dumped, mixing them together with lithic tools."
The other bones belonged to animals such as ancient bears, wolves, foxes, mammoths, lynx and more.
The bones and many stone tools indicate this was a campsite. All human butchering took place inside the cave.
"Other small-sized animals were processed in the same way," the scientists wrote. "These data suggest that they (Homo antecessor) practiced gastronomic cannibalism."
To further support this belief, the researchers point out that the consumed individuals came from a variety of age groups, ranging from young children to young adults.
The living arrangement, choice of prey, hunting and butchering methods all suggest that Homo antecessor lived in cohesive groups that likely would have competed with other Homo antecessor groups.
"Necessarily, a level of behavioral complexity is present among these human groups," the anthropologists believe. "This complexity allows the use of cannibalism in response to resource competition with other human groups."
Based on other findings, eating one's enemy for political and nutritional gain was also likely practiced by Neanderthals and early members of our own species, who also practiced cannibalism for other reasons, such as during rituals.
Biologist Steven Vogel at Duke University ruminated on cannibalism in his book Prime Mover: A Natural History of Muscle. Vogel calculated that we'd have to consume too many of our brethren for cannibalism to be a sustainable nutritional source in and of itself.
Instead, humans "muscled our way up the food chain," Vogel said, developing better hunting weapons and other tools to allow almost everything to be on our menus.
Pea-size minerals inside a meteorite are the oldest known material in the solar system, a new study says.
At 4,568.2 million years old, the minerals push back the birth of the solar system by as much as two million years—and suggest that an exploding star injected key materials into our system as it was being born, researchers say.
The 3-pound (1.5-kilogram) parent meteorite, dubbed NWA 2364, was found in 2004 in Morocco and is believed to have originated from the asteroid belt between Mars and Jupiter.
But the tests reveal that telltale mineral lumps inside—called calcium-aluminum inclusions—are from a time before that asteroid belt existed. The minerals may have formed just after part of an interstellar gas and dust cloud, or nebula, had collapsed and formed our sun, as one sun-formation theory goes.
"Soon after the collapse of the solar nebula, matter started to condense as the temperature went down, and these inclusions started forming," said lead study author Audrey Bouvier, a research associate at Arizona State University's Center for Meteorite Studies.
Bouvier and co-author Meenakshi Wadhwa, also of Arizona State, measured ratios of lead isotopes—lighter-or-heavier-than-usual versions of an element—in a single "pristine" inclusion to uncover its birth date, she said.
"This revised age is between 0.3 and 1.9 million years older than previous estimates," she said, "making it the oldest on record."
Supernova Blasted Solar System Into Existence?
Two million years is a drop in the bucket in cosmic time, but it could have major ramifications for how scientists think the solar system was born.
Again, it comes down to isotopes—in this case, iron-60, which forms when massive stars go supernova, exploding at the ends of their lives.
Previous studies of iron-60 in meteorite mineral inclusions, conducted by other scientists, found that the inclusions had formed roughly two million years after what was thought to have been the birth of the solar system.
But because the solar system is now apparently up to two million years older than previously thought, the abundance of iron-60 estimated from the inclusions must be extrapolated back another two million years. Since iron-60 degrades by half every two million years, the revised initial quantity of iron-60 in the solar system is almost double previous estimates.
The only thing that could have put so much iron-60 into the nascent solar system, she added, is a nearby supernova.
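The back-extrapolation described above can be sketched numerically, using the roughly two-million-year half-life quoted in the article (the function name here is illustrative, not from the study):

```python
def initial_abundance(measured, elapsed_myr, half_life_myr=2.0):
    """Work a decayed abundance back to time zero: each elapsed
    half-life doubles the inferred starting quantity."""
    return measured * 2 ** (elapsed_myr / half_life_myr)

# Pushing the solar system's birth back ~2 Myr -- one half-life of
# iron-60, as stated above -- roughly doubles the inferred initial amount:
print(initial_abundance(1.0, 2.0))
```

This is exactly why a seemingly small two-million-year age revision matters: it halves the time available for the iron-60 to decay before the inclusions formed, so the starting inventory must have been about twice as large.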
If true, the finding supports a theory that a supernova seeded the ancient solar nebula with heavy metals and possibly triggered its collapse nearly 4.57 billion years ago.
"I think it is important that people understand that this matter now present in our solar system has been brought in by other stars," Bouvier said.
"Massive stars may have exploded nearby but not close enough to destroy it—but instead brought in these key elements for planet formation and life."
The findings on the ancient solar system material were published August 22 in the journal Nature Geoscience.
The dominant evolutionary theory for Earth’s most successful creatures, and a proposed explanation for human altruism, is under attack.
For decades, selflessness — as exhibited in eusocial insect colonies where workers sacrifice themselves for the greater good — has been explained in terms of genetic relatedness. Called kin selection, it was a neat solution to the conundrum of selflessness in what was supposedly an every-animal-for-itself evolutionary battle.
One early proponent was now-legendary Harvard biologist E.O. Wilson, a founder of modern sociobiology. Now Wilson is leading the counterattack.
“For the past four decades kin selection theory … has been the major theoretical attempt to explain the evolution of eusociality,” write Wilson and Harvard theoretical biologists Martin Nowak and Corina Tarnita in an August 25 Nature paper. “Here we show the limitations of its approach.”
According to the standard metric of reproductive fitness, insects that altruistically contribute to their community’s welfare but don’t themselves reproduce score a zero. They shouldn’t exist, except as aberrations — but they’re common, and their colonies are fabulously successful. Just two percent of insects are eusocial, but they account for two-thirds of all insect biomass.
Kin selection made sense of this by targeting evolution at shared genes, and portraying individuals and groups as mere vessels for those genes. Before long, kin selection was a cornerstone of evolutionary biology. It was invoked to explain social and cooperative behavior across the animal kingdom, even in humans.
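The arithmetic at the heart of kin selection is Hamilton's rule, a textbook result rather than something stated in this article: an allele for altruism can spread when the benefit to the recipient, discounted by relatedness, exceeds the cost to the actor,

```latex
r\,b > c
```

where $r$ is the genetic relatedness between actor and recipient, $b$ the reproductive benefit to the recipient, and $c$ the reproductive cost to the actor. In haplodiploid insects such as ants and bees, full sisters share $r = 3/4$ -- more than the $r = 1/2$ they would share with their own offspring -- which became the classic kin-selection explanation for sterile worker castes.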
But according to Wilson, Nowak and Tarnita, the great limitation of kin selection is that it simply doesn’t fit the data.
Bochum researchers have discovered how natural antifreeze works to protect fish in icy Antarctic waters from freezing to death. They were able to observe that an antifreeze protein in the fish's blood affects the water molecules in its vicinity such that they cannot freeze, and everything remains fluid. There is no chemical bond between protein and water -- the mere presence of the protein is sufficient.
Together with collaborators from the U.S., the team led by Prof. Dr. Martina Havenith (Physical Chemistry II of the RUB) describes the discovery in a Rapid Communication in the Journal of the American Chemical Society (JACS).
Better than household antifreeze
Temperatures of minus 1.8 °C should really be enough to freeze any fish: the freezing point of fish blood is about minus 0.9 °C. How Antarctic fish are able to keep moving at these temperatures has interested researchers for a long time. As early as 50 years ago, special frost protection proteins were found in the blood of these fish. These so-called antifreeze proteins work better than any household antifreeze. How they work, however, was still unclear.
The Bochum researchers used a special technique, terahertz spectroscopy, to unravel the underlying mechanism. With the aid of terahertz radiation, the collective motion of water molecules and proteins can be recorded. Thus, the working group has already been able to show that water molecules, which usually perform a permanent dance in liquid water, and constantly enter new bonds, dance a more ordered dance in the presence of proteins -- "the disco dance becomes a minuet" says Prof. Havenith.
Souvenir from an Antarctic expedition
The subject of the current investigations was the anti-freeze glycoproteins of the Antarctic toothfish Dissostichus mawsoni, which one of the American partners, Arthur L. Devries, had fished himself on an Antarctic expedition.
"We could see that the protein has an especially long-range effect on the water molecules around it. We speak of an extended dynamical hydration shell," says co-author Konrad Meister.
"This effect, which prevents ice crystallization, is even more pronounced at low temperatures than at room temperature," adds Prof. Havenith.
In the presence of the protein, lower temperatures are therefore needed to freeze the water. Complexation of the antifreeze protein by borate strongly reduces the antifreeze activity, and in this case the researchers also found no change in the terahertz dance. These results provide evidence for a new model of how antifreeze glycoproteins (AFGPs) prevent water from freezing: antifreeze activity is achieved not by a single molecular bond between the protein and the water, but by the protein perturbing the aqueous solvent over long distances. The investigation demonstrated for the first time a direct link between the function of a protein and its signature in the terahertz range.
The studies were funded by the Volkswagen Foundation.
The smallest frog in the Old World (Asia, Africa and Europe) and one of the world's tiniest was discovered inside and around pitcher plants in the heath forests of the Southeast Asian island of Borneo. The pea-sized amphibian is a species of microhylid, which, as the name suggests, is composed of miniature frogs under 15 millimeters.
The discovery, published in the taxonomy journal Zootaxa, was made by Drs. Indraneil Das and Alexander Haas of the Institute of Biodiversity and Environmental Conservation at the Universiti Malaysia Sarawak, and Biozentrum Grindel und Zoologisches Museum of Hamburg, respectively, with support from the Volkswagen Foundation. Dr. Das is also leading one of the scientific teams that is searching for the world's lost amphibians, a campaign organized by Conservation International and IUCN's Amphibians Specialist Group.
"I saw some specimens in museum collections that are over 100 years old. Scientists presumably thought they were juveniles of other species, but it turns out they are adults of this newly-discovered micro species," said Dr. Das.
The mini frogs (Microhyla nepenthicola) were found on the edge of a road leading to the summit of the Gunung Serapi mountain, which lies within Kubah National Park. The new species was named after the plant on which it depends to live, the Nepenthes ampullaria, one of many species of pitcher plants in Borneo, which has a globular pitcher and grows in damp, shady forests. The frogs deposit their eggs on the sides of the pitcher, and tadpoles grow in the liquid accumulated inside the plant.
Adult males of the new species range between 10.6 and 12.8 mm -- about the size of a pea. Because they are so tiny, finding them proved to be a challenge. The frogs were tracked by their call, then made to jump onto a piece of white cloth so they could be examined more closely. The singing normally starts at dusk, with males gathering within and around the pitcher plants. They call in a series of harsh rasping notes that last for a few minutes, with brief intervals of silence. This "amphibian symphony" begins at sundown and peaks in the early hours of the evening.
Amphibians are the most threatened group of animals, with a third of them in danger of extinction. They provide important services to humans such as controlling insects that spread disease and damage crops and helping to maintain healthy freshwater systems. Teams of scientists from Conservation International and IUCN's Amphibian Specialist Group around the world have recently launched an unprecedented search in the hope of rediscovering 100 species of "lost" amphibians -- animals considered potentially extinct but that may be holding on in a few remote places.
The search, which is taking place in 20 countries on five continents, will help scientists to understand the recent amphibian extinction crisis. Dr. Das is leading a team of scientists who will search for the Sambas Stream Toad (Ansonia latidisca) in Indonesia and Malaysia in September. The toad was last seen in the 1950s. It is believed that increased sedimentation in streams after logging may have contributed to the decline of its population.
"Amphibians are quite sensitive to changes in their surroundings, so we hope the discovery of these miniature frogs will help us to understand what changes in the global environment are having an impact on these fascinating animals," said Conservation International's Dr. Robin Moore, who has organized the search on behalf of IUCN's Amphibian Specialist Group.
Timescales of early Solar System processes rely on precise, accurate and consistent ages obtained with radiometric dating. However, recent advances in instrumentation now allow scientists to make more precise measurements, some of which are revealing inconsistencies in the ages of samples. Seeking better constraints on the age of the Solar System, Arizona State University researchers Audrey Bouvier and Meenakshi Wadhwa analyzed meteorite Northwest Africa (NWA) 2364 and found that the age of the Solar System predates previous estimates by up to 1.9 million years.
By using a dating technique known as lead-lead dating, Bouvier and Wadhwa were able to calculate the age of a calcium-aluminum-rich inclusion (CAI) contained within the Northwest Africa 2364 chondritic meteorite. These CAIs are thought to be the first solids to condense from the cooling protoplanetary disk during the birth of the Solar System.
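For reference, lead-lead dating exploits the simultaneous decay of two uranium isotopes into two lead isotopes ($^{238}$U → $^{206}$Pb and $^{235}$U → $^{207}$Pb). A standard form of the age equation, with decay constants taken from the geochronology literature rather than from this article, is

```latex
\left(\frac{{}^{207}\mathrm{Pb}}{{}^{206}\mathrm{Pb}}\right)^{\!*}
= \frac{{}^{235}\mathrm{U}}{{}^{238}\mathrm{U}}
  \cdot \frac{e^{\lambda_{235} t} - 1}{e^{\lambda_{238} t} - 1}
```

where the asterisk denotes radiogenic lead, $\lambda_{235} \approx 9.8485 \times 10^{-10}\,\mathrm{yr}^{-1}$, and $\lambda_{238} \approx 1.55125 \times 10^{-10}\,\mathrm{yr}^{-1}$. The $^{238}$U/$^{235}$U ratio was long assumed to be a constant 137.88, so any real variation in it shifts the computed age $t$.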
The study's findings, published online on August 22 in Nature Geoscience, fix the age of the Solar System at 4.5682 billion years old, between 0.3 and 1.9 million years older than previous estimates. This relatively small revision to the currently accepted age of about 4.56 billion years is significant since some of the most important events that shaped the Solar System occurred within the first ~10 million years of its formation.
"This relatively small age adjustment means that there was as much as twice the amount of iron-60, a certain short-lived isotope of iron, in the early Solar System than previously determined. This higher initial abundance of this isotope in the Solar System can only be explained by supernova injection," said Bouvier, a faculty research associate in the School of Earth and Space Exploration (SESE) in ASU's College of Liberal Arts and Sciences. "This supernova event, and possibly others, could have triggered the formation of the Solar System. By studying meteorites and their isotopic characteristics, we bring new clues about the stellar environment of our Sun at birth."
According to Meenakshi Wadhwa, professor in SESE and director of the Center for Meteorite Studies, "This work also helps to resolve some long-standing inconsistencies in early Solar System time scales as obtained by different high-resolution chronometers. However, there is certainly room for future studies. In particular, it will be important to conduct high precision chronologic investigations of CAIs from other pristine meteorites. We also need to understand the reasons for why the CAIs measured previously from two other chondritic meteorites, Allende and Efremovka, have yielded younger ages."
One significant aspect of this study is that it is the first published lead-lead isotopic investigation to take into account possible variation in the uranium isotope composition. Earlier work conducted in Wadhwa's laboratory by ASU graduate student Gregory Brennecka, in collaboration with SESE professor Ariel Anbar, has shown that the uranium isotope composition of CAIs, long assumed to be constant, can in fact be highly variable, and this has important implications for the calculation of precise lead-lead ages of these objects.
Using the relationship demonstrated by Brennecka and colleagues between the uranium isotope composition and other geochemical indicators in CAIs, Bouvier and Wadhwa inferred a uranium isotope composition for the CAI for which they reported the lead-lead age. Future work at ASU will focus on development of analytical techniques for the direct measurement of the precise uranium isotope composition of CAIs for which lead-lead isotopic investigations are being conducted.
"Our work can help researchers better understand the sequence of events that took place within the first few million years of the Solar system formation, such as the accretion and melting of planetary bodies," Bouvier said. "All these processes happened extremely rapidly, and only by reaching such a precision on isotopic measurements and chronology can we find out about these processes of planetary formation."