Research aims to produce valid information and must therefore use reliable instruments that guarantee accuracy, make results quantifiable, and allow them to be reproduced. This permits the exclusion, or at least the control, of personal biases and preconceptions that might distort the results.
Nanoengineers at the University of California, San Diego have developed a novel technology that can fabricate, in mere seconds, microscale three-dimensional (3D) structures out of soft, biocompatible hydrogels. In the near term, the technology could lead to better systems for growing and studying cells, including stem cells, in the laboratory. In the long term, the goal is to be able to print biological tissues for regenerative medicine. For example, in the future, doctors may repair the damage caused by a heart attack by replacing the injured tissue with tissue that rolled off of a printer.
Reported in the journal Advanced Materials, the biofabrication technology, called dynamic optical projection stereolithography (DOPsL), was developed in the laboratory of NanoEngineering Professor Shaochen Chen. Current fabrication techniques, such as photolithography and micro-contact printing, are limited to generating simple geometries or 2D patterns. Stereolithography is best known for its ability to print large objects such as tools and car parts. The difference, says Chen, is in the micro- and nanoscale resolution required to print tissues that mimic nature's fine-grained details, including blood vessels, which are essential for distributing nutrients and oxygen throughout the body. Without the ability to print vasculature, an engineered liver or kidney, for example, is useless in regenerative medicine. With DOPsL, Chen's team was able to achieve more complex geometries common in nature such as flowers, spirals and hemispheres. Other current 3D fabrication techniques, such as two-photon photopolymerization, can take hours to fabricate a 3D part.
The biofabrication technique uses a computer projection system and precisely controlled micromirrors to shine light on a selected area of a solution containing photo-sensitive biopolymers and cells. This photo-induced solidification process forms one layer of solid structure at a time, but in a continuous fashion. The technique is part of a broader biofabrication effort that Chen is developing under a four-year, $1.5 million grant from the National Institutes of Health (R01EB012597). The Obama administration in March launched a $1 billion investment in advanced manufacturing technologies, including creating the National Additive Manufacturing Innovation Institute with $30 million in federal funding to focus on 3D printing. The term "additive manufacturing" refers to the way 3D structures are built by layering very thin materials.
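The layer-by-layer projection idea lends itself to a simple illustration: a voxelized 3D model is sliced into binary masks, each of which would drive the micromirror array for one exposure, solidifying the polymer wherever a pixel is "on." Here is a minimal sketch in Python; the function, the hemisphere model, and all parameters are hypothetical illustrations, not part of the actual DOPsL system:

```python
import numpy as np

def slice_to_masks(volume, threshold=0.5):
    """Slice a voxelized 3D model (z, y, x) into binary layer masks.

    Each mask corresponds to one projected exposure: 'on' pixels
    would illuminate (and solidify) the photosensitive biopolymer.
    """
    return [(layer > threshold).astype(np.uint8) for layer in volume]

# Hypothetical model: a hemisphere, one of the shapes mentioned above.
z, y, x = np.mgrid[0:32, -16:16, -16:16]
hemisphere = ((x**2 + y**2 + z**2) < 15**2).astype(float)

masks = slice_to_masks(hemisphere)  # 32 binary masks, one per layer
```

The masks shrink with height, as expected for a hemisphere; a real system would additionally handle exposure timing, layer adhesion, and stage movement.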
The Chen Research Group is focused on fabrication of nanostructured biomaterials and nanophotonics for biomedical engineering applications and recently moved into the new Structural and Materials Engineering Building, which is bringing nano and structural engineers, medical device labs and visual artists into a collaborative environment under one roof.
First they took over chess. Then Jeopardy. Soon, computers could make the ideal partner in a game of Draw Something (or its forebear, Pictionary).
Researchers from Brown University and the Technical University of Berlin have developed a computer program that can recognize sketches as they're drawn in real time. It's the first computer application that enables "semantic understanding" of abstract sketches, the researchers say. The advance could clear the way for vastly improved sketch-based interface and search applications.
The research behind the program was presented last month at SIGGRAPH, the world's premier computer graphics conference. The paper is now available online (http://cybertron.cg.tu-berlin.de/eitz/projects/classifysketch/), together with a video, a library of sample sketches, and other materials.
Computers are already pretty good at matching sketches to objects as long as the sketches are accurate representations. For example, applications have been developed that can match police sketches to actual faces in mug shots. But iconic or abstract sketches -- the kind that most people are able to easily produce -- are another matter entirely.
For example, if you were asked to sketch a rabbit, you might draw a cartoony-looking thing with big ears, buckteeth, and a cotton tail. Another person probably wouldn't have much trouble recognizing your funny bunny as a rabbit -- despite the fact that it doesn't look all that much like a real rabbit.
"It might be that we only recognize it as a rabbit because we all grew up that way," said James Hays, assistant professor of computer science at Brown, who developed the new program with Mathias Eitz and Marc Alexa from the Technical University in Berlin. "Whoever got the ball rolling on caricaturing rabbits like that, that's just how we all draw them now."
Getting a computer to understand what we've come to understand through years of cartoons and coloring books is a monumentally difficult task. The key to making this new program work, Hays says, is a large database of sketches that could be used to teach a computer how humans sketch objects. "This is really the first time anybody has examined a large database of actual sketches," Hays said.
To put the database together, the researchers first came up with a list of everyday objects that people might be inclined to sketch. "We looked at an existing computer vision dataset called LabelMe, which has a lot of annotated photographs," Hays said. "We looked at the label frequency and we got the most popular objects in photographs. Then we added other things of interest that we thought might occur in sketches, like rainbows for example."
They ended up with a set of 250 object categories. Then the researchers used Mechanical Turk, a crowdsourcing marketplace run by Amazon, to hire people to sketch objects from each category -- 20,000 sketches in all. Those data were then fed into existing recognition and machine learning algorithms to teach the program which sketches belong to which categories. From there, the team developed an interface where users input new sketches, and the computer tries to identify them in real time, as quickly as the user draws them.
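The actual pipeline uses far richer features and learning algorithms than can be shown here, but the classify-as-you-draw idea can be illustrated with a deliberately simple nearest-centroid stand-in. Everything below (the class, the toy four-pixel "sketches," and the category names) is a hypothetical sketch, not the researchers' code:

```python
import numpy as np

class NearestCentroidSketchClassifier:
    """Toy stand-in for a sketch recognizer: each category is the mean
    of its training sketches, and a new sketch is assigned to the
    closest centroid by Euclidean distance."""

    def fit(self, sketches, labels):
        # sketches: (n, d) flattened binary rasters; labels: category names
        sketches, labels = np.asarray(sketches, float), np.asarray(labels)
        self.categories = sorted(set(labels))
        self.centroids = np.array(
            [sketches[labels == c].mean(axis=0) for c in self.categories])
        return self

    def predict(self, sketch):
        # Cheap enough to re-run on every stroke, i.e. "in real time"
        d = np.linalg.norm(self.centroids - np.asarray(sketch, float), axis=1)
        return self.categories[int(np.argmin(d))]

# Hypothetical training data: two categories of 4-pixel "sketches"
sketches = [[1, 0, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1], [0, 0, 0, 1]]
labels = ["rabbit", "rabbit", "teapot", "teapot"]
clf = NearestCentroidSketchClassifier().fit(sketches, labels)
```

Because prediction is a single distance computation, it can be repeated after every stroke, which is the essence of the real-time interface described above.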
As it is now, the program successfully identifies sketches with around 56-percent accuracy, as long as the object is included in one of the 250 categories. That's not bad, considering that when the researchers asked actual humans to identify sketches in the database, they managed about 73-percent accuracy. "The gap between human and computational performance is not so big, not as big certainly as it is in other computer vision problems," Hays said.
The program isn't ready to rule Pictionary just yet, mainly because of its limited 250-category vocabulary. But expanding it to include more categories is a possibility, Hays says. One way to do that might be to turn the program into a game and collect the data that players input. The team has already made a free iPhone/iPad app that could be gamified.
"The game could ask you to sketch something and if another person is able to successfully recognize it, then we can say that must have been a decent enough sketch," he said. "You could collect all sorts of training data that way."
And that kind of crowdsourced data has been key to the project so far.
"It was the data gathering that had been holding this back, not the digital representation or the machine learning; those have been around for a decade," Hays said. "There's just no way to learn to recognize say, sketches of lions, based on just a clever algorithm. The algorithm really needs to see close to 100 instances of how people draw lions, and then it becomes possible to tell lions from potted plants."
Ultimately a program like this one could end up being much more than just fun and games. It could be used to develop better sketch-based interface and search applications. Despite the ubiquity of touch screens, sketch-based search still isn't widely used, but that's probably because it simply hasn't worked very well, Hays says.
A better sketch-based interface might improve computer accessibility. "Directly searching for some visual shape is probably easier in some domains," Hays said. "It avoids all language issues; that's certainly one thing."
NASA's long-lived rover Opportunity has returned an image of the Martian surface that is puzzling researchers.
Spherical objects concentrated at an outcrop that Opportunity reached last week differ in several ways from the iron-rich spherules, nicknamed "blueberries," that the rover found at its landing site in early 2004 and at many other locations to date.
Opportunity is investigating an outcrop called Kirkwood in the Cape York segment of the western rim of Endeavour Crater. The spheres measure as much as one-eighth of an inch (3 millimeters) in diameter. The analysis is still preliminary, but it indicates that these spheres do not have the high iron content of Martian blueberries.
"This is one of the most extraordinary pictures from the whole mission," said Opportunity's principal investigator, Steve Squyres of Cornell University in Ithaca, N.Y. "Kirkwood is chock full of a dense accumulation of these small spherical objects. Of course, we immediately thought of the blueberries, but this is something different. We never have seen such a dense accumulation of spherules in a rock outcrop on Mars."
The Martian blueberries found elsewhere by Opportunity are concretions formed by action of mineral-laden water inside rocks, evidence of a wet environment on early Mars. Concretions result when minerals precipitate out of water to become hard masses inside sedimentary rocks. Many of the Kirkwood spheres are broken and eroded by the wind. Where wind has partially etched them away, a concentric structure is evident.
Opportunity used the microscopic imager on its arm to look closely at Kirkwood. Researchers checked the spheres' composition by using an instrument called the Alpha Particle X-Ray Spectrometer, also on Opportunity's arm.
"They seem to be crunchy on the outside, and softer in the middle," Squyres said. "They are different in concentration. They are different in structure. They are different in composition. They are different in distribution. So, we have a wonderful geological puzzle in front of us. We have multiple working hypotheses, and we have no favorite hypothesis at this time. It's going to take a while to work this out, so the thing to do now is keep an open mind and let the rocks do the talking."
Just past Kirkwood lies another science target area for Opportunity. The location is an extensive pale-toned outcrop in an area of Cape York where observations from orbit have detected signs of clay minerals. That may be the rover's next study site after Kirkwood. Four years ago, Opportunity departed Victoria Crater, which it had investigated for two years, to reach different types of geological evidence at the rim of the much larger Endeavour Crater.
The rover's energy levels are favorable for the investigations. Spring equinox comes this month to Mars' southern hemisphere, so the amount of sunshine for solar power will continue increasing for months.
"The rover is in very good health considering its 8-1/2 years of hard work on the surface of Mars," said Mars Exploration Rover Project Manager John Callas of NASA's Jet Propulsion Laboratory in Pasadena, Calif. "Energy production levels are comparable to what they were a full Martian year ago, and we are looking forward to productive spring and summer seasons of exploration."
NASA launched the Mars rovers Spirit and Opportunity in the summer of 2003, and both completed their three-month prime missions in April 2004. They continued bonus, extended missions for years. Spirit stopped communicating with Earth in March 2010. The rovers have made important discoveries about wet environments on ancient Mars that may have been favorable for supporting microbial life.
JPL manages the Mars Exploration Rover Project for NASA's Science Mission Directorate in Washington.
To view the image of the area, visit: http://www.nasa.gov/mission_pages/mer/multimedia/pia16139.html
IBM scientists have been able to differentiate the chemical bonds in individual molecules for the first time using a technique known as noncontact atomic force microscopy (AFM).
The results push forward the exploration of molecules and atoms at the smallest scale and could be important for studying graphene devices, which are currently being explored by both industry and academia for applications including high-bandwidth wireless communication and electronic displays.
"We found two different contrast mechanisms to distinguish bonds. The first one is based on small differences in the force measured above the bonds. We expected this kind of contrast but it was a challenge to resolve," said IBM scientist Leo Gross. "The second contrast mechanism really came as a surprise: Bonds appeared with different lengths in AFM measurements. With the help of ab initio calculations we found that the tilting of the carbon monoxide molecule at the tip apex is the cause of this contrast."
As reported in the cover story of the Sept. 14 issue of Science magazine, IBM Research scientists imaged the bond order and length of individual carbon-carbon bonds in C60, also known as a buckyball for its football shape, and in two planar polycyclic aromatic hydrocarbons (PAHs), which resemble small flakes of graphene. The PAHs were synthesized by Centro de Investigacion en Quimica Bioloxica e Materiais Moleculares (CIQUS) at the Universidade de Santiago de Compostela and Centre National de la Recherche Scientifique (CNRS) in Toulouse.
The individual bonds between carbon atoms in such molecules differ subtly in their length and strength. All the important chemical, electronic, and optical properties of such molecules are related to the differences of bonds in the polyaromatic systems. Now, for the first time, these differences were detected for both individual molecules and bonds. This can increase basic understanding at the level of individual molecules, important for research on novel electronic devices, organic solar cells, and organic light-emitting diodes (OLEDs). In particular, the relaxation of bonds around defects in graphene as well as the changing of bonds in chemical reactions and in excited states could potentially be studied.
As in their earlier research (Science 2009, 325, 1110) the IBM scientists used an atomic force microscope (AFM) with a tip that is terminated with a single carbon monoxide (CO) molecule. This tip oscillates with a tiny amplitude above the sample to measure the forces between the tip and the sample, such as a molecule, to create an image. The CO termination of the tip acts as a powerful magnifying glass to reveal the atomic structure of the molecule, including its bonds. This made it possible to distinguish individual bonds that differ only by 3 picometers, or 3 × 10⁻¹² meters, which is about one-hundredth of an atom's diameter.
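In frequency-modulation AFM of this kind, the measured observable is the shift of the oscillating tip's resonance frequency, which for small amplitudes is proportional to the vertical gradient of the tip-sample force: Δf ≈ −(f0/2k)·∂F/∂z. The following sketch illustrates that relationship with a generic Lennard-Jones force law; the sensor parameters and the force model are illustrative assumptions, not values from the IBM experiments:

```python
import numpy as np

# Illustrative sensor parameters (assumed, not from the paper)
f0 = 30_000.0   # resonance frequency of the sensor, Hz
k = 1800.0      # sensor stiffness, N/m

def lj_force(z, epsilon=1e-19, sigma=3e-10):
    """Generic Lennard-Jones tip-sample force F(z) = -dU/dz:
    repulsive at short range, attractive at long range."""
    u = sigma / z
    return 24 * epsilon / sigma * (2 * u**13 - u**7)

def freq_shift(z, dz=1e-13):
    """Small-amplitude approximation: df = -(f0 / 2k) * dF/dz,
    with the force gradient taken by central difference."""
    dFdz = (lj_force(z + dz) - lj_force(z - dz)) / (2 * dz)
    return -(f0 / (2 * k)) * dFdz
```

In this model the attractive regime (larger tip heights) gives a negative frequency shift and the repulsive wall a positive one; it is tiny variations of such shifts across a molecule that encode the bond-by-bond contrast described above.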
In previous research the team succeeded in imaging the chemical structure of a molecule, but not the subtle differences of the bonds. Discriminating bond order is close to the current resolution limit of the technique and often other effects obscure the contrast related to bond order. Therefore the scientists had to select and synthesize molecules in which perturbing background effects could be ruled out.
To corroborate the experimental findings and gain further insight into the exact nature of the contrast mechanisms, the team performed first-principles density functional theory calculations. They calculated the tilting of the CO molecule at the tip apex that occurs during imaging and found that this tilting is what magnifies and sharpens the images of the bonds.
This research was funded within the framework of several European projects, including ARTIST, HERODOT, and CEMAS, as well as by the Spanish Ministry of Economy and Competitiveness and the Regional Government of Galicia.