martes, 16 de junio de 2009
Surface electron band structure of bismuth telluride. (Credit: Image courtesy of Yulin Chen and Z. X. Shen)
Move over, silicon—it may be time to give the Valley a new name. Physicists at the Department of Energy's (DOE) SLAC National Accelerator Laboratory and Stanford University have confirmed the existence of a type of material that could one day provide dramatically faster, more efficient computer chips.
Recently predicted and much sought, the material allows electrons on its surface to travel with no loss of energy at room temperature and can be fabricated using existing semiconductor technologies. Such a material could provide a leap in microchip speeds, and even become the bedrock of an entirely new kind of computing industry based on spintronics, the next evolution of electronics.
Physicists Yulin Chen, Zhi-Xun Shen and their colleagues tested the behavior of electrons in the compound bismuth telluride. The results, published online June 11 in Science Express, show a clear signature of what is called a topological insulator, a material that enables the free flow of electrons across its surface with no loss of energy.
The discovery was the result of teamwork between theoretical and experimental physicists at the Stanford Institute for Materials & Energy Science, a joint SLAC-Stanford institute. In recent months, SIMES theorist Shoucheng Zhang and colleagues predicted that several bismuth and antimony compounds would act as topological insulators at room temperature. The new paper confirms that prediction in bismuth telluride. "The working style of SIMES is perfect," Chen said. "Theorists, experimentalists, and sample growers can collaborate in a broad sense."
The experimenters examined bismuth telluride samples using X-rays from the Stanford Synchrotron Radiation Lightsource at SLAC and the Advanced Light Source at Lawrence Berkeley National Laboratory. When Chen and his colleagues investigated the electrons' behavior, they saw the clear signature of a topological insulator. Not only that, the group discovered that the reality of bismuth telluride was even better than theory.
"The theorists were very close," Chen said, "but there was a quantitative difference." The experiments showed that bismuth telluride could tolerate even higher temperatures than theorists had predicted. "This means that the material is closer to application than we thought," Chen said.
This magic is possible thanks to surprisingly well-behaved electrons. The quantum spin of each electron is aligned with the electron's motion—a phenomenon called the quantum spin Hall effect. This alignment is a key component in creating spintronics devices, new kinds of devices that go beyond standard electronics. "When you hit something, there's usually scattering, some possibility of bouncing back," explained theorist Xiaoliang Qi. "But the quantum spin Hall effect means that you can't reflect to exactly the reverse path." As a dramatic consequence, electrons flow without resistance. Put a voltage on a topological insulator, and this special spin current will flow without heating the material or dissipating.
Topological insulators aren't conventional superconductors, nor are they fodder for super-efficient power lines, since they can carry only small currents, but they could pave the way for a paradigm shift in microchip development. "This could lead to new applications of spintronics, or using the electron spin to carry information," Qi said. "Whether or not it can build better wires, I'm optimistic it can lead to new devices, transistors, and spintronics devices."
Fortunately for real-world applications, bismuth telluride is fairly simple to grow and work with. Chen said, "It's a three-dimensional material, so it's easy to fabricate with the current mature semiconductor technology. It's also easy to dope—you can tune the properties relatively easily."
"This is already a very exciting thing," he said, adding that the material "could let us make a device with new operating principles."
The high quality bismuth telluride samples were grown at SIMES by James Analytis, Ian Fisher and colleagues.
SIMES, the Stanford Synchrotron Radiation Lightsource at SLAC, and the Advanced Light Source at Lawrence Berkeley National Laboratory are supported by the Office of Basic Energy Sciences within the DOE Office of Science.
This is University of Chicago postdoctoral scientist Philipp Heck with a sample of the Allende meteorite. The dark portions of the meteorite contain dust grains that formed before the birth of the solar system. The Allende meteorite is of the same type as the Murchison meteorite, the subject of Heck's Astrophysical Journal study. (Credit: Dan Dry)

The interstellar stuff that became incorporated into the planets and life on Earth has younger cosmic roots than theories predict, according to University of Chicago postdoctoral scholar Philipp Heck and his international team of colleagues.
Heck and his colleagues examined 22 interstellar grains from the Murchison meteorite for their analysis. Dying sun-like stars flung the Murchison grains into space more than 4.5 billion years ago, before the birth of the solar system. Scientists know the grains formed outside the solar system because of their exotic composition.
"The concentration of neon, produced during cosmic-ray irradiation, allows us to determine the time a grain has spent in interstellar space," Heck said. His team determined that 17 of the grains spent somewhere between three million and 200 million years in interstellar space, far less than the theoretical estimates of approximately 500 million years. Only three grains met interstellar duration expectations (two grains yielded no reliable age).
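The dating logic Heck describes can be sketched in a few lines: cosmic rays produce neon-21 in a grain at a roughly constant rate, so the grain's time in interstellar space is just measured concentration divided by production rate. A minimal illustration, with hypothetical placeholder numbers that are not values from the study:

```python
# Cosmic-ray exposure dating, sketched: spallation by galactic cosmic rays
# produces neon-21 in a grain at a roughly constant rate, so the time the
# grain spent exposed in interstellar space is concentration / production rate.
# Both numeric values below are illustrative placeholders, not study data.

def exposure_age_myr(ne21_concentration, production_rate):
    """Exposure age in millions of years.

    ne21_concentration: cosmogenic Ne-21 in the grain (atoms per gram)
    production_rate:    Ne-21 production rate (atoms per gram per Myr)
    """
    return ne21_concentration / production_rate

# A grain holding 2.0e7 atoms/g of cosmogenic Ne-21, at an assumed
# production rate of 1.0e5 atoms/g/Myr, gives a 200 Myr exposure age.
print(exposure_age_myr(2.0e7, 1.0e5))  # → 200.0
```

In practice the production rate depends on grain size and the cosmic-ray flux, which is where the real measurement effort goes.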
"The knowledge of this lifetime is essential for an improved understanding of interstellar processes, and to better contain the timing of formation processes of the solar system," Heck said. A period of intense star formation that preceded the sun's birth may have produced large quantities of dust, thus accounting for the timing discrepancy, according to the research team.
Funding sources for the research include the National Aeronautics and Space Administration, Swiss National Science Foundation, the Australian National University, and the Brazilian National Council for Scientific and Technological Development.
Deep optical image of the Antennae galaxies. New tidal debris is found at the northern tip. (Credit: Stony Brook University)
Astronomers have discovered new tidal debris stripped away from colliding galaxies. The new debris images are of special interest because they trace the full history of galaxy collisions and the resulting starburst activity, processes important in 'growing' galaxies in the early Universe.
In this study, the new tidal debris was found with the 8.2-meter Subaru telescope on Mauna Kea, Hawaii, which is operated by the National Astronomical Observatory of Japan. The international team took extremely deep exposures of archetypal colliding galaxies, including "the Antennae" galaxies in the constellation Corvus (65 million light-years away from us), "Arp 220" in the constellation Serpens (250 million light-years), and "Mrk 231" in the Big Dipper (590 million light-years), plus 10 additional objects. Often featured in the media and in textbooks, these are among the best-known galaxy collisions.
The research was presented at the 214th meeting of the American Astronomical Society in Pasadena, California, by Dr. Jin Koda of Stony Brook University, Long Island, New York; Dr. Nick Scoville of the California Institute of Technology; Dr. Yoshiaki Taniguchi of Ehime University, Ehime, Japan; and the COSMOS survey team.
"We did not expect such enormous debris fields around these famous objects," says Dr. Koda, Assistant Professor of Astronomy at Stony Brook University. "For instance, the Antennae – the name comes from its resemblance to insect antennae – was discovered in the 18th century by William Herschel, and has been observed repeatedly since then."
Colliding galaxies eventually merge into a single galaxy. When the orbit and rotation of the galaxies synchronize, they merge quickly. New tidal tails therefore indicate rapid merging, which could trigger the starburst activity seen in ultraluminous infrared galaxies (ULIRGs). Further study and detailed comparison with theoretical models may reveal the process of galaxy formation and starburst activity in the early Universe.
"Arp 220 is the most famous ULIRG," says Dr. Taniguchi, who is Professor of Ehime University in Japan. "ULIRGs are very likely the dominant mode of cosmic star formation in the early Universe, and Arp 220 is the key object to understand starburst activities in ULIRGs."
"The new images allow us to fully chart the orbital paths of the colliding galaxies before they merge, thus turning back the clock on each merging system," says Dr. Scoville, the Francis L. Moseley professor of astronomy at Caltech. "This is equivalent to finally being able to trace the skid marks on the road when investigating a car wreck."
According to Dr. Koda, the extent of the debris had not been seen in earlier imaging of these famous objects.
"Subaru’s sensitive wide-field camera was necessary to detect and properly analyze this faint, huge debris," he said. "In fact, most of the debris fields extend a few times farther than our own Galaxy. We were ambitious in looking for unknown debris, but even we were surprised by its extent around many already famous objects."
Galactic collisions are one of the most critical processes in galaxy formation and evolution in the early Universe. However, not all galactic collisions produce such large tidal debris.
"The orbit and rotation of the colliding galaxies are the keys," says Dr. Koda. "Theory predicts that large debris is produced only when the orbit and galactic rotation synchronize with each other. New tidal debris is important because it puts strong constraints on the orbit and history of the galactic collisions."
A controversial new theory suggests that long-term changes (the secular variation) in Earth's main magnetic field are possibly induced by the oceans' circulation. (Credit: iStockphoto/René Mansi)
Some 400 years of discussion and we’re still not sure what creates the Earth’s magnetic field, and thus the magnetosphere, despite the latter's importance as the only buffer between us and the deadly solar wind of charged particles (electrons and protons). New research raises questions about the forces behind the magnetic field and the structure of Earth itself.
The controversial new paper, published in the New Journal of Physics (co-owned by the Institute of Physics and the German Physical Society), may deflect geophysicists’ attention from the postulated motion of conducting fluids in the Earth’s core, the twentieth century’s answer to the mysteries of geomagnetism and the magnetosphere.
Professor Gregory Ryskin from the School of Engineering and Applied Science at Northwestern University in Illinois, US, has defied the long-standing convention by applying equations from magnetohydrodynamics to our oceans’ salt water (which conducts electricity) and found that the long-term changes (the secular variation) in the Earth’s main magnetic field are possibly induced by our oceans’ circulation.
Calculations thus supported Ryskin’s suspicions, and there were also correlations in time and space - specific indications of an integral relationship between the oceans and our magnetospheric buffer. For example, researchers had recorded changes in the intensity of current circulation in the North Atlantic; Ryskin shows that these appear strongly correlated with sharp changes in the rate of geomagnetic secular variation (“geomagnetic jerks”).
Tim Smith, senior publisher of the New Journal of Physics, said, "This article is controversial and will no doubt cause vigorous debate, and possibly strong opposition, from some parts of the geomagnetism community. As the author acknowledges, the results by no means constitute a proof but they do suggest the need for further research into the possibility of a direct connection between ocean flow and the secular variation of the geomagnetic field."
In the early 1920s, Einstein highlighted the great challenge that understanding our magnetosphere poses. It was later suggested that the Earth’s magnetic field could be the result of the flow of electrically conducting fluid deep inside the Earth acting as a dynamo.
In the second half of the twentieth century, the dynamo theory, describing the process through which a rotating, convecting, and electrically conducting fluid acts to maintain a magnetic field, was used to explain how hot iron in the outer core of the Earth creates a magnetosphere.
The journal paper also raises questions about the structure of our Earth’s core.
Familiar textbook images that illustrate a flow of hot, highly electrically conducting fluid at the core of the Earth are based on conjecture and could now be rendered invalid. Because the flow of fluid at the Earth’s core cannot be measured or observed directly, theories about changes in the magnetosphere have been used, inversely, to infer the existence of such flow at the core of the Earth.
While Ryskin’s research looks only at long-term changes in the Earth’s magnetic field, he points out that, “If secular variation is caused by the ocean flow, the entire concept of the dynamo operating in the Earth’s core is called into question: there exists no other evidence of hydrodynamic flow in the core.”
On a practical level, it means the next time you use a compass you might need to thank the seas and oceans for influencing the force necessary to guide the way.
Dr Raymond Shaw, professor of atmospheric physics at Michigan Technological University, said, “It should be kept in mind that the idea Professor Ryskin is proposing in his paper, if valid, has the potential to deem irrelevant the ruling paradigm of geomagnetism, so it will be no surprise to find individuals who are strongly opposed or critical."
On a prior trip to Ranomafana, I encountered several snazzy reptiles to admire, including the aptly named leaf-tailed gecko, Uroplatus phantasticus, pictured above.
No, that brown thing in the foreground isn't a leaf—that's really the gecko's tail! This leaf mimic is soft and fleshy and feels like velvet.
This next shot shows you this amazing critter's unbelievable mug. Even its eye is an example of astonishing adaptation to blend in with its surroundings.
The tail of another Uroplatus I found in Ranomafana resembles a well-decayed dead leaf. It's no small effort to look so unkempt—this is some highly evolved camouflage!
If you appreciate biodiversity and want to encounter unusual critters, place Ranomafana National Park in Madagascar near the top of your destinations life list. The Institute for the Conservation of Tropical Environments has helped me obtain permits and maintains a field station in the park. Check the Institute's website if you're planning a trip to Madagascar, or if you'd like to learn more about research in Ranomafana.
Nanotube Memory Array
Your blog might be popular today, but how will you preserve it for future generations? Enter memory chips that can last for over a billion years.
Newly developed nanoscale devices can pack a trillion bits of data—equal to about 25 million textbook pages—into a square inch (6.5 square centimeters) of material made from multiwalled carbon nanotubes.
Storage media can become degraded and unreadable due to environmental conditions, mechanical wear, and other factors. Normally, the more that's packed onto a memory chip, the faster it wears out.
DVDs, for instance, can hold a hundred billion bits per square inch, but are only expected to remain readable for 30 years.
But carbon nanotubes are formed by the strongest bonds in nature—carbon-carbon bonds—making them corrosion-proof, researchers say.
The system's longevity may be overkill, admitted research leader Alex Zettl, but he said that long life is necessary for fundamentally sound storage.
"There are several components to a memory storage system, but the heart of the system, where the bits are actually stored, should be as robust and long-lived as possible," said Zettl, a physicist at the Lawrence Berkeley National Laboratory and the University of California, Berkeley.
"A billion-year lifetime for this critical component allows us to call this part done and concentrate on the other components."
The project is still in the research stage, but an early model may be available in two years.
To create the new chip, the researchers used a series of carbon nanotubes containing crystalline iron nanoparticles—each a fraction of the width of a human hair. The nanoparticles shuttle back and forth inside the tubes when a low voltage is applied to the chip.
The position of the nanoparticle can be precisely read by measuring electrical resistance, and its position can indicate either a state of zero or one in binary code.
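The read-out principle described here — resistance encodes the nanoparticle's position, and the position encodes the bit — can be sketched in a few lines. The threshold value below is a made-up illustrative number, not a figure from the research:

```python
# Reading a bit from a nanoparticle-shuttle memory cell, sketched: the iron
# nanoparticle's position along the nanotube changes the measured electrical
# resistance, and a threshold maps that measurement to binary 0 or 1.
# The resistance threshold is a hypothetical illustrative value.

RESISTANCE_THRESHOLD_OHMS = 50_000  # assumed boundary between the two states

def read_bit(resistance_ohms: float) -> int:
    """Map a measured cell resistance to a stored bit."""
    return 1 if resistance_ohms >= RESISTANCE_THRESHOLD_OHMS else 0

print(read_bit(12_000))  # → 0  (nanoparticle in the low-resistance position)
print(read_bit(80_000))  # → 1  (nanoparticle in the high-resistance position)
```

The appeal of the scheme is that the stored state is mechanical (where the particle sits), so it persists without power and is read non-destructively.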
Zettl's team compared the new chip to the still-legible writing at Egypt's 4,000-year-old Temple of Karnak, as well as to England's well-preserved Domesday Book of 1086.
A reliable archival memory device should last at least several centuries, if not several millennia, Zettl said.
Peru's Machu Picchu was an Inca pilgrimage site and a scaled-down version of a mythic landscape, according to a controversial new study.
The finding challenges the conventional view that Machu Picchu was a royal estate of the Inca ruler Pachacuti, who built it around A.D. 1460.
"I believe that much of the sacred space of the Incas has still to be recognized as such," said study author Giulio Magli, an astrophysicist at the Polytechnic Institute in Milan, Italy.
Perched on a mountain ridge some 8,000 feet (2,430 meters) above sea level, Machu Picchu was for years lost to history after the Spanish conquest. The site gained worldwide fame following a 1911 visit by U.S. explorer Hiram Bingham, whose Machu Picchu excavation was funded in part by the National Geographic Society. (The National Geographic Society owns National Geographic News.)
Machu Picchu is now a popular tourist destination, but its original purpose has been a source of much speculation and debate.
According to Magli, Machu Picchu was conceived and built specifically as a pilgrimage site where worshippers could symbolically relive an important journey purportedly taken by their ancestors.
Harrowing Journey Remade in Rock?
In Inca mythology, the first Inca were created on Bolivia's Island of the Sun on Lake Titicaca.
From there, they undertook a harrowing journey beneath the Earth and emerged at a place called Tampu-tocco, close to the future site of the Inca capital Cusco.
The first Inca then traveled to the summit of a nearby hill called Huanacauri, where one of them was turned into stone and became an important shrine.
Magli argues that certain structures at Machu Picchu symbolize important landmarks of this journey.
For instance, a disorderly pile of stones represents the underground "void" that the first Inca traveled through.
"Pacha-Mama, or Mother Earth, was associated with disorder," Magli said.
Similarly, a plaza at Machu Picchu represents Tampu-tocco, and a stone pyramid at the site doubles for the Huanacauri hill, Magli added.
Visitors to Machu Picchu enter through a gate at the complex's southeastern end. The layout then coaxes them northwest. This, Magli said, is no accident.
In his study, published on the Web site arXiv.org, Magli argues that Machu Picchu's southeast-northwest layout is meant to replicate the path of the sun across the sky in Inca country, averaged over the course of a year.
Southeast-northwest is also the direction traveled by the first Inca during their mythic journey—again, likely influenced by the sun, which was worshipped as a god.
As a sacred site, Machu Picchu may have been open to commoners and highborn alike, much like a known Inca pilgrimage destination on the Island of the Sun, Magli said.
"As far as we know, the pilgrimage to the Island of the Sun was open to all, but not all were admitted in the innermost sanctuary," he said. "Perhaps the same occurred in Machu Picchu."
Archaeologist Richard Burger of Yale University is unconvinced.
Magli "does make the argument for the importance of celestial movements in relation to specific buildings," said Burger, who was not involved in the study.
Still, Magli's arguments "are not incompatible with the interpretation of Machu Picchu as a tropical retreat for the royal court any more than the presence of religious art and architecture at Versailles is incompatible with its role as a royal palace."
Archaeologists have begun excavating more of the famed terra-cotta warriors, life-size clay figures created to guard the tomb of China's first emperor.
Chinese archaeologists have started to excavate more of the life-size terra-cotta warriors at the famed ancient tomb of the country's first emperor.
China's state TV on Sunday showed archaeologists uncovering more of the elaborately sculpted soldiers, adding to the 1,000-plus statues already excavated.
SOUNDBITE: (Mandarin) Cao Wei, Deputy Curator, Qinshihuang Terra-cotta Warriors and Horses Museum: "From this site, we can see two horses in one hole. We already knew that these two horses existed here. In the last two excavations we never came across these two horses. Of course, there might be more unsolved mysteries."
The new dig, which started on Saturday, is the third undertaken since the tomb was first uncovered in 1974 and will focus on a 2,100-square-foot patch within the tomb's main pit, which holds the bulk of the warriors.
Exhibited where they were found, and protected inside a massive building, the tomb and its accompanying museum are among China's biggest tourist draws.
An exhibition of 15 figures and dozens of artifacts from the tomb broke ticket sale records when it traveled to London and California. The exhibit is now at the Houston Museum of Natural Science, and in November, moves to the National Geographic Museum in Washington, DC.
At between 5 feet 8 inches and 6.5 feet tall, the statues weigh between 300 and 400 pounds each. In all, the tomb's three pits are thought to hold 7,000 life-size figures, believed to have been created for Emperor Qin Shihuang to protect him in the afterlife.
No two figures are alike, and craftsmen are believed to have modeled them after a real army.
A fourth pit at the tomb was apparently left empty by its builders, while Qin's actual burial chamber has yet to be excavated.
Given away by strange, crop circle-like formations seen from the air, a huge prehistoric ceremonial complex discovered in southern England has taken archaeologists by surprise.
A thousand years older than nearby Stonehenge, the site includes the remains of wooden temples and two massive, 6,000-year-old tombs that are among "Britain's first architecture," according to archaeologist Helen Wickstead, leader of the Damerham Archaeology Project.
For such a site to have lain hidden for so long is "completely amazing," said Wickstead, of Kingston University in London.
Archaeologist Joshua Pollard, who was not involved in the find, agreed. The discovery is "remarkable," he said, given the decades of intense archaeological attention to the greater Stonehenge region.

The Damerham tombs have yet to be excavated, but experts say the long barrows likely contain chambers—probably carved into chalk bedrock and reinforced with wood—filled with human bones associated with ancestor worship.
During the late Stone Age, it's believed, people in the region left their dead in the open to be picked clean by birds and other animals.
Skulls and other bones of people who were for some reason deemed significant were later placed inside the burial mounds, Wickstead explained.
"These are bone houses, in a way," she said. "Instead of whole bodies, [the tombs contain] parts of ancestors."
Later Monuments, Long Occupation
Other finds suggest the site remained an important focus for prehistoric farming communities well into the Bronze Age (roughly 2000 to 700 B.C. in Britain).
Near the tombs are two large, round, ditch-encircled structures—the largest circular enclosure being about 190 feet (57 meters) wide.
Nonintrusive electromagnetic surveys show signs of postholes, suggesting rings of upright timber once stood within the circles—further evidence of the Damerham site's ceremonial or sacred role.
Pollard, of the University of Bristol, likened the features to smaller versions of Woodhenge, a timber-circle temple at the Stonehenge World Heritage site.
Damerham also includes a highly unusual, and so far baffling, U-shaped enclosure with postholes dated to the Bronze Age, project leader Wickstead said.
The circled outlines of 26 Bronze Age burial mounds also dot the site, which is littered with stone flint tools and shattered examples of the earliest known type of pottery in Britain.
Evidence of prehistoric agricultural fields suggests the area was at least partly cultivated by the time the Romans invaded Britain in the first century A.D., generally considered the end of the region's prehistoric period.
Riches Beneath Ravaged Surface?
The actual barrows and mounds near Damerham have been diminished by centuries of plowing, but that, ironically, may make them much more valuable archaeologically, according to Pollard, of the University of Bristol.
The mounds would have been irresistible advertisements for tomb raiders, who in the 18th and 19th centuries targeted Bronze Age burials for their ornate grave goods.
And "even if the mounds are gone, you are still going to have primary burials [as opposed to those later added on top] which will have been dug into the chalk, so are going to survive," Pollard added.
The contents of the Stone Age long barrows should likewise have survived, he said. "I think there's good reason to assume you might have the main wooden mortuary chambers with burial deposits," he said.
Redrawing the Map
An administrative oversight may also be partly responsible for the site remaining hidden—and presumably pristine, at least underground—project leader Wickstead said.
When prehistoric sites in the area were being mapped and documented in the 1890s, a county-border change placed Damerham within Hampshire rather than Stonehenge's Wiltshire, she said.
"Perhaps people in Hampshire thought [the monuments] were someone else's problem."
This lucky conjunction of plowing and politics obscured Damerham's prehistoric heritage until now.
The site shows that "a lot of the ceremonial activity isn't necessarily located in these big centers," such as Stonehenge, Pollard said. "But there are other locations where people are congregating and constructing ceremonial monuments.
"I think everybody assumed such monument complexes were known about or had already been discovered," added Pollard, a co-leader of the Stonehenge Riverside Project, which is funded in part by the National Geographic Society. (The National Geographic Society owns National Geographic News.)
At the 500-acre (200-hectare) site, outlines of the structures were spotted "etched" into farmland near the village of Damerham, some 15 miles (24 kilometers) from Stonehenge (Damerham map).
Discovered during a routine aerial survey by English Heritage, the U.K. government's historic-preservation agency, the "crop circles" are the results of buried archaeological structures interfering with plant growth. True crop circles are vast designs created by flattening crops.
The central features are two great tombs topped by massive mounds—made shorter by centuries of plowing—called long barrows. The larger of the two tombs is 230 feet (70 meters) long.
Estimated at 6,000 years old, based on the dates of similar tombs around the United Kingdom, the long barrows are also the oldest elements of the complex.
Such oblong burial mounds are very rare finds, and are the country's earliest known architectural form, Wickstead said. The last full-scale long barrow excavation was in the 1950s, she added.
The International Space Station has a beefed-up crew that has gone from three to six astronauts, now that construction of the $100 billion space laboratory is nearly complete. We are told that the station crew will be able to spend more time doing medical and biological experiments in the station's microgravity environment to prepare humans for journeys to the moon and Mars.
We are “learning to live in space” is the shorthand justification for why we have a space station. But are the right questions behind the ISS experiments being asked? Exactly how salient is the research on the ISS when applied to human interplanetary travel?
Let me draw a parallel to something that’s not too far from where I live in Maryland: the Chesapeake and Ohio Canal, which follows the Potomac River. Construction of the canal started in the early 1800s as part of a post-Revolutionary War vision by Thomas Jefferson to connect the eastern seaboard to the Great Lakes and the Ohio River. Passengers rode mule-drawn barges for several days to go from Washington, D.C., northwest to Cumberland, Maryland, 185 miles away. It was not long before the much speedier steam locomotive made this form of transportation obsolete. The canal was abandoned before it ever reached the Ohio River.
When we look at the long-term future of manned interplanetary travel, astronauts will not live in weightlessness for extended periods, any more than travelers today expect to spend several days on a mule-drawn barge to cover the distance a jetliner covers in less than 30 minutes.
Let’s face it, interplanetary travel is deadly. Crews will be in mortal danger from radiation blasted into space by solar flares and coronal mass ejections. Any sort of mechanical breakdown of life support systems en route to Mars could quickly become disastrous. There are also practical matters about the amount of payload mass that must be devoted to food and water, and the psychological well-being of a crew cooped up for many months in a habitat module roughly the volume of a ranch house. So the less time spent in transit, the better, and the more likely the success of the mission.
This is simple rocket science. The propulsion systems for going to Mars will need to be much more mature than they are today. With today’s chemical rockets, transit times are six to nine months. But with adequately funded research and development we could have advanced propulsion that cuts travel time down to a matter of weeks.
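The six-to-nine-month figure is easy to sanity-check with a minimal Hohmann-transfer calculation, assuming idealized circular, coplanar orbits (real trajectories differ):

```python
import math

# Sanity check on the "six to nine months" transit figure: a minimum-energy
# Hohmann transfer coasts along half an ellipse whose semi-major axis is the
# average of Earth's and Mars' orbital radii. Transfer time = pi*sqrt(a^3/mu).
# Idealized circular, coplanar orbits; actual mission profiles vary.

MU_SUN = 1.327e20    # Sun's gravitational parameter, m^3/s^2
R_EARTH = 1.496e11   # Earth's mean orbital radius, m
R_MARS = 2.279e11    # Mars' mean orbital radius, m

a = (R_EARTH + R_MARS) / 2                    # transfer-ellipse semi-major axis
t_seconds = math.pi * math.sqrt(a**3 / MU_SUN)
t_months = t_seconds / (30 * 24 * 3600)       # using 30-day months

print(f"{t_months:.1f} months")  # → roughly 8.6 months
```

That puts the chemical-rocket baseline squarely in the quoted range, which is the gap advanced propulsion would have to close.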
Among the more visionary ideas: magnetized-beam plasma propulsion where a vehicle is pushed along by a directed energy beam from Earth; a gas core fission reactor ejecting hot hydrogen fuel at a powerful rate; a fusion-reactor rocket that expels a super-heated plasma at high velocities.
My favorite far-out propulsion idea comes from the imagination of Arthur C. Clarke who, in his 1975 novel Imperial Earth, described a black hole drive: an artificially made black hole with the mass of a mountain embedded inside an event horizon the size of the head of a pin. Soda-straw-diameter tubes feed fuel into it. The story’s interplanetary cruise liner can travel from Earth to Saturn in 20 days!
We simply will not make the treacherous journey to Mars until there is alternative rocket propulsion to today’s wimpy “mule-barge” chemical rockets.
What's more, it's not clear what further space station research on "living in weightlessness" will yield. After four decades of American and Russian space travel, we know by now that long-term exposure to weightlessness is not good for the body. It degrades the heart, bone density, muscle mass, and the immune system, among many other complications. Minor effects include puffiness in the face, flatulence, weight loss, nasal congestion, and sleep disturbance. This is a "perfect storm" of trouble for people cooped up in a space capsule for months.
What we don’t know at all is what the effects of long term living in the gravitational field of the moon and Mars will do to the body. And, future explorers will spend much more time living in 1/6th (moon) or 1/3rd (Mars) gravity, than in weightless space.
The ISS was supposed to have a centrifuge for replicating the gravitational pull of other worlds and trying it out on animal specimens. The module was built by the Japanese government and named the Centrifuge Accommodation Module. But the lab was never launched because of ISS cost overruns and assembly schedule problems. The $700 million lab is now the world’s most expensive museum piece, put on display in an outdoor exhibit at the Tsukuba Space Center in Japan. (The only saving thought is that we don’t have to go through the charade of watching NASA name the module in a contest. How would “Whirlpool” have worked? It fits both the natural phenomenon and the manufacturers of spinning washing machines and clothes dryers.)
In hindsight, the way for NASA to do this kind of critical research would have been to build the complete space station as a spinning wheel that would create artificial gravity through centrifugal force. The idea is over 100 years old, first described by the great Russian theoretician Konstantin Tsiolkovsky. It was epitomized in the landmark science fiction film 2001: A Space Odyssey, where a majestic space station cartwheels along to the tune of Johann Strauss’ Blue Danube waltz.
Such a station could be rotated at a rate to create a 1/6 pull of gravity on one of those ISS “expedition” missions to replicate the lunar environment. Then the crews could swap out and the space wheel spun up to recreate Mars’ 1/3rd gravity for another expedition team. For expeditions doing non-biological experiments, the space wheel could have been spun up further to create Earth’s pull, and the crew could live and work normally. No need for one of those suction potty chairs.
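The spin rates involved are modest. As a minimal sketch (the 100-meter wheel radius is an assumed, illustrative figure, not a NASA design value), the rotation rate needed for a given fraction of Earth gravity follows from the centripetal acceleration a = ω²r:

```python
import math

G_EARTH = 9.81  # m/s^2, surface gravity of Earth

def rpm_for_gravity(radius_m, g_fraction):
    """Spin rate (rpm) giving a rim centripetal acceleration equal to
    g_fraction of Earth gravity: a = omega**2 * r, so omega = sqrt(a / r)."""
    omega = math.sqrt(g_fraction * G_EARTH / radius_m)  # rad/s
    return omega * 60 / (2 * math.pi)

# A hypothetical 100-meter-radius wheel:
for label, frac in [("Moon (1/6 g)", 1 / 6), ("Mars (1/3 g)", 1 / 3), ("Earth (1 g)", 1.0)]:
    print(f"{label}: {rpm_for_gravity(100, frac):.2f} rpm")
```

Even full Earth gravity needs only about three revolutions per minute at that radius, which is why the slowly turning wheel in 2001 looks so stately.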
Such an elaborate facility might not have been beyond the NASA budget, especially if it were fabricated from inflatable structures (as envisioned by rocket pioneer Wernher von Braun half a century ago). But the space base we've paid for doesn’t tell us the consequences of living on the moon or Mars for any length of time. Does body degradation decrease proportionally with increasing gravity, or is it more complicated than that?
It’s always possible that I’m being premature; we’ll see what science papers are published in research journals from the ISS experiments. But the bottom line is that humans were not made to live in zero-g, any more than we’d expect them to grow gills by living in underwater habitats.
What's more, there's no hurry to get to Mars. It's not like the 1848 California Gold Rush (which, by the way, pushed transportation infrastructure). So, let's not do it until it can be done robustly.
image credit: MGM
Until now, there has been little idea of what a spaceship propelled by a warp drive (or a "warpship") would look like. Would it resemble the sleek Starship Enterprise? Or would it be like nothing we've seen before?
I spoke with Dr. Richard Obousy, who shared his concept for a futuristic, yet scientifically accurate, warpship design.
The physics behind the warpship is purely theoretical, however. 'Dark energy' needs to be understood and harnessed, and vast amounts of energy need to be generated, meaning the warpship is a technology that could only be conceived in the far future. That said, Dr. Obousy's warpship design uses our current knowledge of spacetime and superstring theory to arrive at this futuristic concept.
So here's your exclusive look at what could be the future warp drive propulsion...
The physics behind the warp drive is, as you'd expect, complex. However, it is hoped that in the future mankind will learn how to harness 'dark energy', an energy that is theorized to permeate through the entire universe. Cosmologists are particularly interested in dark energy as it is most commonly associated with the observed expansion of the universe.
Immediately after the Big Bang, some 13.73 billion years ago, the universe expanded faster than the speed of light, an event called universal inflation. Dark energy (which still has experts baffled as to what it actually is) is theorized to have driven this expansion, and it continues to this day. Much like the 2-dimensional surface of a balloon stretching when being inflated, 3-dimensional space is stretching, propelling the galaxies away from one another.
If an advanced technology could harness this dark energy, a warpship could possibly manipulate the spacetime surrounding it. According to Dr. Obousy, the extra dimensions as predicted by superstring theory could be shrunk and expanded by the warp drive through manipulation of local dark energy. At the front of the warpship spacetime would be compressed, and it would expand behind.
"You can apply the analogy of a surfer riding a 'wave' of spacetime," Dr. Obousy told Discovery Space. This 'wave' would facilitate faster-than-light-speed propulsion without breaking any laws of physics.
The shape of the warpship was chosen to optimize the manipulation of the surrounding dark energy, creating a spacetime bubble. How exactly the bubble would be created is still a mystery. But once it is created, spacetime at the front of the warpship would be compressed and, behind it, would expand. Inside the bubble, spacetime remains unchanged; the warpship floats in a pocket of stationary space while the bubble itself moves through spacetime.
The bubble itself, containing the warpship, "drives the spacecraft forwards at arbitrarily high speeds," said Obousy. This means the warpship can travel faster than the speed of light.
To initiate the warp drive, however, vast amounts of energy would be required. Also, there will be some practical issues to overcome, such as preventing the creation of artificial black holes, as well as catastrophic warp bubble collapse when the power is switched off.
A graphic demonstrating how the warpship would effectively be "surfing" on a spacetime wave to achieve faster-than-light-speed travel.
Credit: Richard Obousy Consulting and Alex Szames Antigravite.
Every year, Google Inc. invites a group of global A-listers to its own Davos-style conference to think big thoughts. The event, called Zeitgeist, tends to be as pretentious as its name—captains of industry, finance, and government chattering onstage in front of about 400 of Google’s friends and customers about the fate of the internet and the world.
The 2008 version bordered on the surreal. The stock market was tanking, the bond market had flatlined, and the price of gold was surging to its biggest one-day jump in nearly a decade, an indication that investors everywhere thought the global economy was going to hell.
Yet here was Eric Schmidt, Google’s chairman and CEO, on a sparse stage at the company’s Mountain View, California, headquarters, in a green-energy love-in with his counterpart at General Electric Co., Jeff Immelt. The pair bathed in the glow of each other’s affirmation, convinced that the two companies, working together, can save the planet. (View a graphic showing how much energy Google’s own data centers use.)
“I don’t think this is hard,” Immelt said in response to a question from Al Gore, a Google groupie. “I’d say health care is hard. Solving the U.S.’s health-care system is actually quite difficult. Energy actually isn’t hard. The technology exists; it doesn’t have to be invented. It needs to be applied.… We make the gadgets—smart electric meters, things like that. People like Google can make the software, which makes the system. That’s the key to renewable energy.”
Schmidt and Immelt are betting big that green energy will become the steam engine of the Obama age—the driver of a new industrial revolution that can generate untold numbers of jobs and economic growth while rescuing the earth from global warming. For GE, with its massive energy division, including investments in windmills, air conditioners, and power plants, an interest in grabbing part of the renewable-energy business is a no-brainer. As Immelt tells me in an interview, GE doesn’t need Google’s technology so much as it needs its cachet.
“Google has a special brand around consumer-user interface, around software and the internet,” he says. “I think that there’s clearly a halo about two great brands when they get together.”
It is Google—with a thinner résumé but an enormous bank account—that is the curiosity. Schmidt’s ambition is to turn Google into the, well, Google of the renewable-energy economy. Just as it imposed order on an unruly Web, Google is hoping to make sense of an always-on electricity grid and help consumers decide when to power up appliances and plug-in cars and when to turn them off. The company is investing tens of millions of dollars—with plans for hundreds of millions more—to reorganize America’s antiquated energy infrastructure in the image of the internet: decentralized, distributed, disembodied. “If you do this right,” says Schmidt, “it sure sounds a lot like the internet: a set of cooperating networks where the traffic and power flow, where people can connect with anything they want. They can be consumers as well as producers. The internet created a tremendous amount of wealth for America, and I think we can do it here too.”
Google’s higher calling comes directly from co-founders Sergey Brin and Larry Page, both ardent environmentalists. But it’s made real by the same moxie that has driven Google to create digital editions of 7 million books, with scant concern for copyright issues, and to amass satellite images of almost all of the earth’s nooks and crannies. “We only hire people who really, genuinely believe that big change is possible and the right thing to do,” says Erik Teetzel, a 34-year-old Google engineer who heads a team of researchers looking for ways to produce cheap renewable energy.
Still, now doesn’t seem like an ideal time for Google to be making such an ambitious move. Oil prices are down, eradicating much of the demand for alternatives to fossil fuels. A global economic downturn means many companies now consider green technology a luxury they can’t afford. Spending on green projects is being delayed while companies wait for the economic storm to pass. Even Google itself has had better days. Its internet-advertising franchise is under more strain than ever, and its stock price, once stratospheric, is down about 50 percent over the past year.
The truth is, Google has never been very successful at diversifying its business; 97 percent of its revenue still comes from online ads. Yet prospecting for new opportunities, at great expense and effort, remains as much a part of company lore as free gourmet meals. “Nine years ago, people said, ‘How can you charge people to do searches on the internet?’” says Teetzel. “Larry and Sergey said if you solve the big problem, you can figure out how to make money off it. The same idea applies to energy. If we solve the big problems, we’re going to figure out how to make money.”
Don’t put it past them, says Immelt, who signed an agreement at Zeitgeist to collaborate with Google on technology development and to jointly lobby Washington for green-energy projects. “I’ve never seen a company in my career do as many things well as quickly as Google has done,” he says.
Google’s goal in energy is twofold: First, it wants to make your home energy-smart, so that appliances know when to power up and power down, and heating and cooling systems respond automatically to changes in the price of energy. The company views this as essentially a software problem, akin to making sense of the torrent of information on the Web. But before Google can transform your home, it’s pushing for a revolution in the way energy is produced.
The system of making and distributing electricity in the U.S. is a century old and miserably outmoded. Electricity is sent to factories and homes from large, central power stations, often built far from big cities because of the pollution factor. The old power lines leak like sieves; between 5 and 7 percent of U.S. electricity is lost through the nation’s 200,000 miles of high-voltage wires. But erecting new transmission lines is political drudgery, requiring cooperation across multiple local, state, and federal jurisdictions, any one of which can stall a project for years.
In addition, the utilities have delayed expanding and upgrading existing power plants because doing so would require them to install state-of-the-art pollution controls that they contend are too expensive. Consequently, the U.S. grid has stagnated, with the capacity for generating power growing four times faster than the capacity to transmit it. Bill Richardson, the governor of New Mexico and a former energy secretary, calls the grid “third world.”
The decrepit system is a serious impediment to renewable-energy projects on a grand scale. To move wind and solar power to consumers from the breezy Great Plains and the sunbaked deserts of the Southwest, the U.S. needs about 20,000 miles of new transmission lines. It also needs a massive upgrade of the analog grid that directs the energy from place to place, with new computers, sensors, and communication gear to manage the network.
Unlike nuclear reactors and most fossil-fuel-burning plants, windmills and solar cells produce electricity only when the wind blows or the sun shines. An automated grid is crucial for managing the constantly fluctuating power from renewable sources.
The world as envisioned by Google includes a vast computer network that monitors and controls the nation’s electricity grid and sets prices for power based on real-time supply and demand. For example, the system could, on a particularly hot afternoon, send a signal to millions of utility customers warning that power prices are soaring. The information could be fed directly into an energy-management system linked wirelessly to people’s air conditioners and appliances, and their Jacuzzis, garden lights, and electric cars. After being programmed, the system would automatically shut down designated devices if prices hit preset levels, just as program trading automatically buys and sells stocks. For those without automatic systems, it would take just a few keystrokes from a computer at the office to power down selected machines at home and avoid being walloped by the price spike.
The grid itself would work in similar ways. If it faced shortages, it could send out a signal offering to buy back power stored in people’s electric car batteries for a healthy premium above what the same electrons cost just 15 hours earlier. Those interested would click accept on their computer screens. The network would locate their vehicles and automatically activate discharging. Eventually, demand and prices would drop, triggering dishwashers and clothes dryers to switch on. Electric cars would resume charging.
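The device-side logic in the scenario above is essentially a thresholding rule. Here is a minimal, hypothetical sketch (the device names and cutoff prices are invented for illustration; this is not Google's actual software): each appliance registers a cutoff price, and the energy-management system powers it down whenever the real-time price exceeds that cutoff.

```python
def update_devices(price_per_kwh, devices):
    """Apply the price-threshold rule.

    devices: dict mapping device name -> (cutoff_price, is_on).
    Returns a new dict with each device on only if the current
    price is at or below its preset cutoff.
    """
    return {
        name: (cutoff, price_per_kwh <= cutoff)
        for name, (cutoff, _is_on) in devices.items()
    }

# Illustrative household: shut off the AC above $0.30/kWh, etc.
devices = {
    "air_conditioner": (0.30, True),
    "jacuzzi":         (0.15, True),
    "car_charger":     (0.10, True),
}

# A hot-afternoon price spike to $0.25/kWh:
state = update_devices(0.25, devices)
print(state)  # AC stays on; jacuzzi and car charger power down
```

A real system would, of course, add scheduling, user overrides, and two-way grid signaling, but the core is exactly this kind of automatic comparison against preset levels.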
The wind blowing through the streets of Manhattan couldn’t power the city, but wind machines placed thousands of feet above the city theoretically could.
The first rigorous, worldwide study of high-altitude wind power estimates that there is enough wind energy at altitudes of about 1,600 to 40,000 feet to meet global electricity demand a hundred times over.
The very best ground-based wind sites have a wind-power density of less than 1 kilowatt per square meter of area swept. Up near the jet stream above New York, the wind power density can reach 16 kilowatts per square meter. The air up there is a vast potential reservoir of energy, if its intermittency can be overcome.
Even better, the best high-altitude wind-power resources match up with highly populated areas including North America’s Eastern Seaboard and China’s coastline.
“The resource is really, really phenomenal,” said Christine Archer of Cal State University-Chico, who co-authored a paper on the work published in the open-access journal Energies. “There is a lot of energy up there, but it’s not as steady as we thought. It’s not going to be the silver bullet that will solve all of our energy problems, but it will have a role.”
For centuries, we’ve been using high-density fossil fuels, but peaking oil supplies and climate concerns have given new life to green technologies. Unfortunately, renewable energy is generally diffuse, meaning you need to cover a lot of area to get the energy you want. So engineers look for renewable resources that are as dense as possible. On that score, high-altitude wind looks very promising.
“We might extend the application of [wind] power to the heights of the clouds, by means of kites,” wrote utopian technologist John Etzler in 1833.
Wind’s power — energy which can be used to do work like spinning magnets to generate electricity — varies with the cube of its speed. So, a small increase in wind speed can lead to a big increase in the amount of mechanical energy you can harvest. High-altitude wind blows fast, is spread nicely across the globe, and is easier to predict than terrestrial wind.
These properties have led inventors and scientists to cast their hopes upward, where strong winds have long been known to blow, as Etzler’s dreamy quote shows. During the energy shocks of the 1970s, when new energy ideas of all kinds were bursting forth, engineers and schemers patented several designs for harnessing wind thousands of feet in the air.
The two main design frameworks they came up with are still with us today. The first is essentially a power plant in the sky, generating electricity aloft and sending it down to Earth via a conductive tether. The second is more like a kite, transmitting mechanical energy to the ground, where generators turn it into electricity. Theoretically, both approaches could work, but nothing approaching a rigorous evaluation of the technologies has been conducted.
The Department of Energy had a very small high-altitude wind program, which produced some of the first good data about the qualities of the wind up there, but it got axed as energy prices dropped in the 1980s and Reagan-era DOE officials directed funds elsewhere.
The program hasn’t been restarted, despite growing attention to renewables, but that’s not because it’s considered a bad idea. Rather, it is seen as just a little too far out on the horizon.
“We’re very much aimed these days at things that we can fairly quickly commercialize, like in the next 10 years or so,” said National Renewable Energy Laboratory spokesperson George Douglas.
Startups like KiteGen, Sky Windpower, Magenn, and Makani (a secretive startup funded by Google) have entered the space over the last several years, and they seem to be working on much shorter timelines.
“We are not that far from working prototypes,” Archer said, though she noted that the companies are all incredibly secretive about the data from their testing.
Magenn CFO Barry Monette said he expects “first revenue” next year when they sell “two to four” working prototypes of their blimpy machine, which will operate at much lower altitudes.
“We do think that we’re going to be first [to market], unless something happens,” Monette said.
In the long term, trying to power entire cities with machines like this would be difficult, largely because even in the best locations, the wind will fail at least 5 percent of the time.
“This means that you either need backup power, massive amounts of energy storage, or a continental- or even global-scale electricity grid to assure power availability,” said co-author Ken Caldeira, an ecologist at Stanford University. “So, while high-altitude wind may ultimately prove to be a major energy source, it requires substantial infrastructure.”
Monday, June 15, 2009
A bullock team shoulders teak logs, weighing as much as 4 tons (3,629 kilograms) each, onto a cart in Mandalay, Myanmar (Burma). Many governments have banned Burmese teak, but the country still supplies an estimated 75 to 80 percent of the world's teak. Slash-and-burn harvesting threatens to wipe out forests there.
A clearing in Gunung Palung National Park in Indonesian Borneo reveals an illegal logging operation, where loggers fell and mill trees into lumber on site. Deforestation in this area threatens the endangered orangutan population.
Token trees dot Brazil's Pantanal wetland where dense forest used to stand. Considered the world's largest wetland, the Pantanal is an ecological paradise that covers 54,000 square miles (140,000 square kilometers) in Brazil, Bolivia, and Paraguay, and supports thousands of animal species.
Guppies are small freshwater fish that biologists have long studied. (Credit: Paul Bentzen)
How fast can evolution take place? In just a few years, according to a new study on guppies led by UC Riverside's Swanne Gordon, a graduate student in biology.
Gordon and her colleagues studied guppies — small freshwater fish that biologists have long studied — from the Yarra River in Trinidad. They introduced the guppies into the nearby Damier River, in a section above a barrier waterfall that excluded all predators. The guppies and their descendants also colonized the lower portion of the stream, below the barrier waterfall, which contained natural predators.
Eight years later (less than 30 guppy generations), the researchers found that the guppies in the low-predation environment above the barrier waterfall had adapted to their new environment by producing larger and fewer offspring with each reproductive cycle. No such adaptation was seen in the guppies that colonized the high-predation environment below the barrier waterfall.
"High-predation females invest more resources into current reproduction because a high rate of mortality, driven by predators, means these females may not get another chance to reproduce," explained Gordon, who works in the lab of David Reznick, a professor of biology. "Low-predation females, on the other hand, produce larger embryos because the larger babies are more competitive in the resource-limited environments typical of low-predation sites. Moreover, low-predation females produce fewer embryos not only because they have larger embryos but also because they invest fewer resources in current reproduction."
Natural guppy populations can be divided into two basic types. High-predation populations are usually found in the downstream reaches of rivers, where they coexist with predatory fishes that have strong effects on guppy demographics. Low-predation populations are typically found in upstream tributaries above barrier waterfalls, where strong predatory fishes are absent. Researchers have found that this broad contrast in predation regime has driven the evolution of many adaptive differences between the two guppy types in color, morphology, behavior, and life history.
Gordon's research team performed a second experiment to measure how well adapted the new population of guppies was. To this end, they introduced two new sets of guppies — one from a portion of the Yarra River that contained predators and one from a predator-free tributary of the Yarra River — into the high- and low-predation environments in the Damier River.
They found that the resident, locally adapted guppies were significantly more likely to survive a four-week time period than the guppies from the two sites on the Yarra River. This was especially true for juveniles. The adapted population of juveniles showed a 54-59 percent increase in survival rate compared to their counterparts from the newly introduced group.
"This shows that adaptive change can improve survival rates after fewer than ten years in a new environment," Gordon said. "It shows, too, that evolution might sometimes influence population dynamics in the face of environmental change."
She was joined in the study by Reznick and Michael Bryant of UCR; Michael Kinnison and Dylan Weese of the University of Maine, Orono; Katja Räsänen of the Swiss Federal Institute of Technology, Zurich, and the Swiss Federal Institute of Aquatic Science and Technology, Dübendorf; and Nathan Miller and Andrew Hendry of McGill University, Canada.
Study results appear in the July issue of The American Naturalist.
Financial support for the study was provided by the National Science Foundation, the Natural Sciences and Engineering Research Council of Canada, the Le Fonds Québécois de la Recherche sur la Nature et les Technologies, the Swedish Research Council, the Maine Agricultural and Forestry Experiment Station, and McGill University.
At times in the distant past, an abrupt change in climate has been associated with a shift of seasonal monsoons to the south, a new study concludes, causing more rain to fall over the oceans than in the Earth's tropical regions, and leading to a dramatic drop in global vegetation growth.
If similar changes were to happen to the Earth's climate today as a result of global warming – as scientists believe is possible – this might lead to drier tropics, more wildfires, and declines in agricultural production in some of the world's most heavily populated regions.
The findings were based on oxygen isotopes in air from ice cores, and supported by previously published data from ancient stalagmites found in caves. They will be published Friday in the journal Science by researchers from Oregon State University, the Scripps Institution of Oceanography and the Desert Research Institute in Nevada. The research was supported by the National Science Foundation.
The data confirming these effects were unusually compelling, researchers said.
"Changes of this type have been theorized in climate models, but we've never before had detailed and precise data showing such a widespread impact of abrupt climate change," said Ed Brook, an OSU professor of geosciences. "We didn't really expect to find such large, fast environmental changes recorded by the whole atmosphere. The data are pretty hard to ignore."
The researchers used oxygen measurements, as recorded in air bubbles in ice cores from Antarctica and Greenland, to gauge the changes taking place in vegetation during the past 100,000 years. Increases or decreases in vegetation growth can be determined by measuring the ratio of two different oxygen isotopes in air.
They were also able to verify and confirm these measurements with data from studies of ancient stalagmites on the floors of caves in China, which can reveal rainfall levels over hundreds of thousands of years.
"Both the ice core data and the stalagmites in the caves gave us the same signal, of very dry conditions over broad areas at the same time," Brook said. "We believe the mechanism causing this was a shift in monsoon patterns, more rain falling over the ocean instead of the land. That resulted in much lower vegetation growth in the regions affected by these monsoons, in what is now India, Southeast Asia and parts of North Africa."
Previous research has determined that the climate can shift quite rapidly in some cases, in periods as short as decades or less. This study provides a barometer of how those climate changes can affect the Earth's capacity to grow vegetation.
"The oxygen level and its isotopic composition in the atmosphere are pretty stable; it takes a major terrestrial change to affect them very much," Brook said. "These changes were huge. The drop in vegetation growth must have been dramatic."
Observations of past climatic behavior are important, Brook said, but not a perfect predictor of the impact of future climatic shifts. For one thing, at times in the past when some of these changes took place, larger parts of the northern hemisphere were covered by ice. Ocean circulation patterns also can heavily influence climate, and shift in ways that are not completely understood.
However, the study still points to monsoon behavior being closely linked to climate change.
"These findings highlight the sensitivity of low-latitude rainfall patterns to abrupt climate change in the high-latitude north," the researchers wrote in their report, "with possible relevance for future rainfall and agriculture in heavily populated monsoon regions."
Life May Extend Planet's 'Life': Billion-year Life Extension For Earth Also Doubles Odds Of Finding Life On Other Planets
(Credit: iStockphoto)
Roughly a billion years from now, the ever-increasing radiation from the sun will have heated Earth into uninhabitability; the carbon dioxide in the atmosphere that serves as food for plant life will disappear, pulled out by the weathering of rocks; the oceans will evaporate; and all living things will disappear.
Or maybe not quite so soon, say researchers from the California Institute of Technology (Caltech), who have come up with a mechanism that doubles the future lifespan of the biosphere—while also increasing the chance that advanced life will be found elsewhere in the universe.
A paper describing their hypothesis was published June 1 in the early online edition of the Proceedings of the National Academy of Sciences (PNAS).
Earth maintains its surface temperatures through the greenhouse effect. Although the planet's greenhouse gases—chiefly water vapor, carbon dioxide, and methane—have become the villain in global warming scenarios, they're crucial for a habitable world, because they act as an insulating blanket in the atmosphere that absorbs and radiates thermal radiation, keeping the surface comfortably warm.
As the sun has matured over the past 4.5 billion years, it has become both brighter and hotter, increasing the amount of solar radiation received by Earth, along with surface temperatures. Earth has coped by reducing the amount of carbon dioxide in the atmosphere, thus reducing the warming effect. (Despite current concerns about rising carbon dioxide levels triggering detrimental climate change, the pressure of carbon dioxide in the atmosphere has dropped some 2,000-fold over the past 3.5 billion years; modern, man-made increases in atmospheric carbon dioxide offset a fraction of this overall decrease.)
The problem, says Joseph L. Kirschvink, the Nico and Marilyn Van Wingen Professor of Geobiology at Caltech and a coauthor of the PNAS paper, is that "we're nearing the point where there's not enough carbon dioxide left to regulate temperatures following the same procedures."
Kirschvink and his collaborators Yuk L. Yung, a Caltech professor of planetary science, and graduate students King-Fai Li and Kaveh Pahlevan, say that the solution is to reduce substantially the total pressure of the atmosphere itself, by removing massive amounts of molecular nitrogen, the largely nonreactive gas that makes up about 78 percent of the atmosphere. This would regulate the surface temperatures and allow carbon dioxide to remain in the atmosphere, to support life, and could tack an additional 1.3 billion years onto Earth's expected lifespan.
In the "blanket" analogy for greenhouse gases, carbon dioxide would be represented by the cotton fibers making up the blanket. "The cotton weave may have holes, which allow heat to leak out," explains Li, the lead author of the paper.
"The size of the holes is controlled by pressure," Yung says. "Squeeze the blanket," by increasing the atmospheric pressure, "and the holes become smaller, so less heat can escape. With less pressure, the holes become larger, and more heat can escape," he says, helping the planet to shed the extra heat generated by a more luminous sun.
Strikingly, no external influence would be necessary to take nitrogen out of the air, the scientists say. Instead, the biosphere itself would accomplish this, because nitrogen is incorporated into the cells of organisms as they grow, and is buried with them when they die.
In fact, "this reduction of nitrogen is something that may already be happening," says Pahlevan, and it may have been occurring over the course of Earth's history. This suggests that Earth's atmospheric pressure may be lower now than it was earlier in the planet's history.
Proof of this hypothesis may come from other research groups that are examining the gas bubbles formed in ancient lavas to determine past atmospheric pressure: the maximum size of a forming bubble is constrained by the amount of atmospheric pressure, with higher pressures producing smaller bubbles, and vice versa.
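The pressure–bubble-size relation can be sketched with Boyle's law: for a fixed amount of trapped gas at a given temperature, bubble volume scales inversely with ambient pressure, so a spherical bubble's radius scales as P^(-1/3). A minimal illustration of this scaling (the function and reference values below are hypothetical, for illustration only, and not the researchers' actual method):

```python
def bubble_radius(p, r_ref=1.0, p_ref=1.0):
    """Radius of a gas bubble forming at ambient pressure p, relative to a
    reference bubble of radius r_ref formed at pressure p_ref (arbitrary
    units). Boyle's law gives V ~ 1/p, so radius ~ p**(-1/3)."""
    return r_ref * (p_ref / p) ** (1.0 / 3.0)

# Higher pressure -> smaller bubbles; lower pressure -> larger bubbles.
r_high = bubble_radius(2.0)   # bubble forming under 2x reference pressure
r_low = bubble_radius(0.5)    # bubble forming under half reference pressure
print(r_high < 1.0 < r_low)   # prints True
```

This is why the maximum bubble size frozen into an ancient lava acts as a proxy for the atmospheric pressure at the time the lava solidified.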
If true, the mechanism also would potentially occur on any extrasolar planet with an atmosphere and a biosphere.
"Hopefully, in the future we will not only detect Earth-like planets around other stars but learn something about their atmospheres and the ambient pressures," Pahlevan says. "And if it turns out that older planets tend to have thinner atmospheres, it would be an indication that this process has some universality."
Adds Yung: "We can't wait for the experiment to occur on Earth. It would take too long. But if we study exoplanets, maybe we will see it. Maybe the experiment has already been done."
Increasing the lifespan of our biosphere—from roughly 1 billion to 2.3 billion years—has intriguing implications for the search for life elsewhere in the universe. The length of the existence of advanced life is a variable in the Drake equation, astronomer Frank Drake's famous formula for estimating the number of intelligent extraterrestrial civilizations in the galaxy. Doubling the duration of Earth's biosphere effectively doubles the odds that intelligent life will be found elsewhere in the galaxy.
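The scaling described above follows directly from the Drake equation, N = R* · fp · ne · fl · fi · fc · L, which is linear in L, the length of time over which civilizations can exist and communicate. A minimal sketch (all parameter values are hypothetical placeholders, and treating the biosphere's lifespan as the ceiling on L is an assumption for illustration):

```python
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Drake equation: N = R* . fp . ne . fl . fi . fc . L, the estimated
    number of communicating civilizations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Hypothetical placeholder values for every factor except L.
params = dict(R_star=1.0, f_p=0.5, n_e=2.0, f_l=0.1, f_i=0.1, f_c=0.1)

n_base = drake(**params, L=1.0e9)      # biosphere lasts ~1 billion years
n_extended = drake(**params, L=2.3e9)  # extended to ~2.3 billion years

print(n_extended / n_base)  # prints 2.3
```

Because N is proportional to L, stretching the biosphere's lifespan from 1 to 2.3 billion years raises the estimate by the same factor, whatever values the other parameters take.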
"It didn't take very long to produce life on the planet, but it takes a very long time to develop advanced life," says Yung. On Earth, this process took four billion years. "Adding an additional billion years gives us more time to develop, and more time to encounter advanced civilizations, whose own existence might be prolonged by this mechanism. It gives us a chance to meet."
Trapped more than three kilometers under glacial ice in Greenland for over 120,000 years, a dormant bacterium -- Herminiimonas glaciei -- has been coaxed back to life by researchers. (Credit: Image courtesy of Society for General Microbiology)
A novel bacterium -- trapped more than three kilometres under glacial ice in Greenland for over 120,000 years -- may hold clues as to what life forms might exist on other planets.
Dr Jennifer Loveland-Curtze and a team of scientists from Pennsylvania State University report finding the novel microbe, which they have called Herminiimonas glaciei, in the current issue of the International Journal of Systematic and Evolutionary Microbiology. The team showed great patience in coaxing the dormant microbe back to life; first incubating their samples at 2°C for seven months and then at 5°C for a further four and a half months, after which colonies of very small purple-brown bacteria were seen.
H. glaciei is small even by bacterial standards – it is 10 to 50 times smaller than E. coli. Its small size probably helped it to survive in the liquid veins among ice crystals and in the thin liquid film on their surfaces. Small cell size is considered advantageous for more efficient nutrient uptake, protection against predators, and occupation of micro-niches, and ultramicrobacteria have been shown to dominate many soil and marine environments.
Most life on our planet has always consisted of microorganisms, so it is reasonable to consider that this might be true on other planets as well. Studying microorganisms living under extreme conditions on Earth may provide insight into what sorts of life forms could survive elsewhere in the solar system.
"These extremely cold environments are the best analogues of possible extraterrestrial habitats," said Dr Loveland-Curtze. "The exceptionally low temperatures can preserve cells and nucleic acids for even millions of years. H. glaciei is one of just a handful of officially described ultra-small species and the only one so far from the Greenland ice sheet; studying these bacteria can provide insights into how cells can survive and even grow under extremely harsh conditions, such as temperatures down to -56°C, little oxygen, low nutrients, high pressure and limited space."
"H. glaciei isn't a pathogen and is not harmful to humans", Dr Loveland-Curtze added, "but it can pass through a 0.2 micron filter, which is the filter pore size commonly used in sterilization of fluids in laboratories and hospitals. If there are other ultra-small bacteria that are pathogens, then they could be present in solutions presumed to be sterile. In a clear solution very tiny cells might grow but not create the density sufficient to make the solution cloudy."
543 to 490 Million Years Ago
The Cambrian Period marks an important point in the history of life on earth; it is the time when most of the major groups of animals first appear in the fossil record. This event is sometimes called the "Cambrian Explosion", because of the relatively short time over which this diversity of forms appears. It was once thought that the Cambrian rocks contained the first and oldest fossil animals, but older fossil animals have since been found in the earlier Vendian strata.
The Cambrian period, part of the Paleozoic era, produced the most intense burst of evolution ever known. The Cambrian Explosion saw an incredible diversity of life emerge, including many major animal groups alive today. Among them were the chordates, to which vertebrates (animals with backbones) such as humans belong.
What sparked this biological bonanza isn't clear. It may be that oxygen in the atmosphere, thanks to emissions from photosynthesizing cyanobacteria and algae, had reached the levels needed to fuel the growth of more complex body structures and ways of living. The environment also became more hospitable, with a warming climate and rising sea levels flooding low-lying landmasses to create shallow marine habitats ideal for spawning new life-forms.
Reconstructions of Cambrian geography contain relatively large sources of error. They suggest that a global supercontinent, Pannotia, was in the process of breaking up, with Laurentia (North America) and Siberia having separated from the main mass of the Gondwana supercontinent to form isolated landmasses. Most continental land mass was clustered in the southern hemisphere.
With a lack of sea ice – the great glaciers of the Marinoan Snowball Earth were long melted – the sea level was high, which led to large areas of the continents being flooded in warm, shallow seas ideal for thriving life. The sea levels fluctuated somewhat, suggesting that there were 'ice ages', associated with pulses of expansion and contraction of a south polar ice cap.
One group of animals was so abundant and so diverse that it is sometimes called the ruling creature of the period: the trilobite.
The trilobite was an arthropod with a tough outer skin, and it got its name from the three lobes of that hard exoskeleton. Trilobites were also among the first animals to have eyesight. During the Cambrian there were more than 100 types of trilobites.
Other Invertebrates
There were plenty of other species living during the Cambrian Period as well. Mollusks, worms, sponges, and echinoderms filled the Cambrian seas.
No Backbones Yet, But...
There was even an early type of chordate living during the Cambrian Period. It was the Pikaia. Pikaia looked a bit like a worm with a long fin on each side of its body. The nerve cord was visible as a ridge starting behind the head area and extending almost to the tip of the body.
The Top of The Food Chain
One of the most fearsome hunters in the Cambrian seas was Anomalocaris. This animal had an exoskeleton like an arthropod, but it did not have the jointed legs that would make it a true arthropod. This large predator fed on trilobites and other arthropods, worms and mollusks.
Sponges grew in Cambrian seas, too. These animals belong to the phylum porifera because of all the tiny pores in their bodies. One species of sponge from this period had many branches that made it look like a tree. Another type of sponge looked like an ice cream cone, without the ice cream, of course! Many of the sponges became extinct when temperatures dropped at the end of the Cambrian period.
Many of the creatures living in the Cambrian seas developed hard structures for defense: hard shells, scales, and spikes covering the outside of the body. The Wiwaxia lived on the bottom of the sea. The dorsal side of its body had scales and spikes for protection, but its underside was soft and unprotected. Trilobites, which also lived on the bottom, could burrow under a Wiwaxia and attack its defenseless belly.
Hallucigenia stood on seven pairs of tall legs. Its long, tube-shaped body had two rows of tall spikes along its back. This type of protection would have been very important for the animal because it had no eyesight to warn it of dangers.
The plants of the Cambrian were mostly simple, one-celled algae. The single cells often grew together to form large colonies. The colonies looked like one large plant.
The Cambrian Period began with an explosion of life forms. It ended in a mass extinction. Advancing glaciers would have lowered the temperature of the shallow seas where so many species lived. Changes in the temperature and the amount of oxygen in the water would have meant the end for any species that could not adapt.
The enemy of our enemy may be our new partner in stopping a global health crisis.
With World Malaria Day (April 25) around the corner, new discoveries suggest our greatest allies in the fight against malaria may be the mosquitoes themselves.
Although saddled with a lousy public image, mosquitoes have immune systems that actually kill 80 to 90 percent of the malaria parasites that enter the insects' bodies, a new study says.
The discovery is part of an international effort to create a new generation of malaria treatments.
Genetically modified, malaria-fighting mosquitoes or even antibodies injected into humans and "fed" back to mosquitoes could someday be more effective at slowing the disease than today's simple mosquito nets, researchers say.
Triple Threat to Malaria
Malaria-parasite populations are lower when the parasites are inside mosquitoes, so some experts think it may be more effective to attack malaria inside the insects—before it enters human hosts.
Understanding how the mosquito immune system fends off malaria is an important part of bringing such a plan to fruition.
Now researchers say they've worked out the mechanism that drives one of the mosquitoes' defenses.
Three proteins in mosquito blood form a complex that "binds to a malaria parasite and punches holes through its membrane," destroying the layer that protects the parasite and holds all its important parts together, said Imperial College London biologist George Christophides, who co-authored the report, published in the April 10 issue of the journal Science.
Previously, researchers had identified the three proteins and noticed that one of them seemed similar to microbe-killing proteins in other animals and in humans. "But nobody had put the three together," Christophides said. "Now we can show that there's a pathway in effect."
How It Might Work
A mosquito-up approach to malaria control is feasible in the long term, researchers say. There are a couple of ways it could work.
In one scenario, scientists could create genetically modified mosquitoes, granting their immune systems pumped-up malaria-killing abilities.
The key would be to find a genetic drive mechanism—some factor that would give the new, malaria-fighting genes a selective advantage and help them spread quickly through wild mosquito populations via breeding, said Gregory Lanzaro, director of the Vector Genetics Lab at the University of California, Davis.
No one has figured this out for mosquitoes yet. But the U.S. Centers for Disease Control and Prevention is already testing a similar concept in blood-sucking assassin bugs as a way to stop the spread of deadly, difficult-to-cure Chagas disease, Lanzaro said.
The other option would be to develop antibodies that can fight the parasites' early, mosquito-dwelling forms—and "feed" the antibodies to the insects via human blood.
Mosquito immune systems don't produce antibodies on their own. And by the time the parasites reach humans, they have matured and found ways to hide out from human antibodies, said Marcelo Jacobs-Lorena, professor of molecular microbiology and immunology at the Johns Hopkins Malaria Research Institute in Baltimore.
But if we vaccinate humans with antibodies that target mosquito-stage malaria, those antibodies could be passed on to the mosquitoes when they feed on treated human blood, Jacobs-Lorena said.
Combined with a second, protective vaccine, this could be a real possibility, he said.
"There's a partially effective vaccine that protects humans that's being tested," Jacobs-Lorena said. "Neither it, nor the transmission-blocking vaccine would be 100 percent effective, but the combination may work."
No Quick Fix
Getting to these paradigm-shifting malaria prevention techniques isn't likely to be quick or easy.
The three-protein mechanism isn't the only factor involved in mosquitoes' malaria-fighting powers. And the new study might not provide the full picture.
The work was done using a model parasite—a version of malaria adapted to rodents, rather than humans—and laboratory mosquitoes, which are often genetically different from their wild cousins.
Studies done this way haven't always reflected what happens in nature, the University of California's Lanzaro said. Johns Hopkins's Jacobs-Lorena agreed.
"Research in recent years has shown that mosquitoes react differently to a parasite they aren't used to seeing, as opposed to the ones they've co-adapted with in the field," Jacobs-Lorena said.
The new discovery, he added, "is an important finding, but it must be validated with human parasites."