Monday, May 18, 2009
Ethical Guide for Robot Warriors in the Works
Smart missiles, rolling robots, and flying drones, all currently controlled by humans, are being used on the battlefield more every day. But what happens when humans are taken out of the loop and robots are left to make decisions on their own, like who to kill or what to bomb?
Ronald Arkin, a professor of computer science at Georgia Tech, is in the first stages of developing an "ethical governor," a package of software and hardware that tells robots when and what to fire. His book on the subject, "Governing Lethal Behavior in Autonomous Robots," comes out this month.
He argues that not only can robots be programmed to behave more ethically on the battlefield, but they may actually be able to respond better than human soldiers.
"Ultimately these systems could have more information to make wiser decisions than a human could make," said Arkin. "Some robots are already stronger, faster and smarter than humans. We want to do better than people, to ultimately save more lives."Lethal military robots are currently deployed in Iraq, Afghanistan and Pakistan. Ground-based robots like iRobot's SWORDS or QinetiQ's MAARS robots, are armed with weapons to shoot insurgents, appendages to disarm bombs, and surveillance equipment to search buildings. Flying drones can fire at insurgents on the ground. Patriot missile batteries can detect incoming missiles and send up other missiles to intercept and destroy them.
No matter where the robots are deployed, however, there is always a human involved in the decision-making, directing where a robot should fly and what munitions it should use if it encounters resistance.
Humans aren't expected to be removed from the loop any time soon. Arkin's ethical governor is designed for a more traditional war, one where civilians have evacuated the war zone and anyone pointing a weapon at U.S. troops can be considered a target.
Arkin's challenge is to translate more than 150 years of codified, written military law into terms that robots can understand and interpret on their own. In many ways, this makes creating an independent war robot easier than many other artificial intelligence problems: the laws of war are long established and clearly stated in numerous treaties.
"We tell soldiers what is right and wrong," said Arkin. "We don't allow soldiers to develop ethics on their own."
One possible scenario for Arkin's ethical governor is an enemy sniper posted in a building next to an important cultural site, like a mosque or cemetery. A wheeled military robot emerges from cover and the sniper fires on it. The robot finds the sniper and has a choice: does it use a grenade launcher or its own sniper rifle to bring down the fighter?
Using geographical data on the surrounding buildings, the robot would decide to use the sniper rifle, minimizing any potential damage to them.
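A toy version of that weapon-choice step might look like the sketch below: among the weapons that can still neutralize the target, pick the one with the lowest estimated collateral damage given the surroundings. The weapon names, effectiveness tests, and damage numbers are all made up for illustration.

```python
# A toy sketch of weapon selection by minimal estimated collateral damage.
# All weapons, tests, and numbers here are invented for illustration.

def choose_weapon(weapons, target, surroundings):
    """Among effective weapons, return the one with the least collateral damage."""
    effective = [w for w in weapons if w["can_neutralize"](target)]
    return min(effective, key=lambda w: w["collateral"](surroundings))

weapons = [
    {"name": "grenade launcher",
     "can_neutralize": lambda t: True,
     "collateral": lambda s: 0.9 if s["adjacent_cultural_site"] else 0.4},
    {"name": "sniper rifle",
     "can_neutralize": lambda t: t["exposed"],
     "collateral": lambda s: 0.05},
]

target = {"exposed": True}
surroundings = {"adjacent_cultural_site": True}
print(choose_weapon(weapons, target, surroundings)["name"])  # sniper rifle
```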
For a human safely removed from combat, the choice of a rifle seems obvious. But a soldier under fire might take more extreme action, possibly blowing up the building and damaging the nearby cultural site.
"Robots don't have an inherent right to self-defense and don't get scared," said Arkin. "The robots can take greater risk and respond more appropriately."
Fear might influence human decision-making, but for robots, math rules. Simplified, each possible action can be classified as ethical or unethical and assigned a value. Start with the set of lethal responses to a situation and subtract the ethical ones; whatever remains is the set of unethical responses the governor must suppress. Other, similar equations govern the other possible actions.
The difficult thing is determining what types of actions go into those equations, and for that humans will be necessary, and ultimately responsible.
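Read literally as set arithmetic, the subtraction described above might be sketched like this; the action names are invented, and this is a toy rendering rather than Arkin's published equations.

```python
# A toy rendering of the subtraction described above: remove the ethical
# responses from the set of possible lethal responses, and what remains is
# the unethical set the governor must suppress. Action names are invented.
possible_lethal = {"sniper shot", "grenade", "airstrike"}
ethical = {"sniper shot"}              # permitted by the encoded rules here

unethical = possible_lethal - ethical  # lethal minus ethical = unethical

def governed_action(requested):
    """Release a requested lethal action only if it falls in the ethical set."""
    return requested if requested in ethical else None

print(governed_action("grenade"))      # None -- suppressed
print(governed_action("sniper shot"))  # released
```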
Robots, freed of human masters and capable of lethality, "are going to happen," said Arkin. "It's just a question of how much autonomy will be put on them and how fast that happens."
Giving robots specific rules and equations will work in an ideal, civilian-free war, but critics point out that such a war is virtually impossible to find on today's battlefields.
"I challenge you to find a war with no civilians," said Colin Allen, a professor at Indiana University who also coauthored a book on the ethics of military robots.
An approach like Arkin's is easier to program and will appear sooner, but a bottom-up approach, where the robot learns the rules of war itself and makes its own judgments, would be far better, according to Allen.
The problem with a bottom-up approach is that the technology doesn't yet exist, and likely won't for another 50 years, said Allen.
Whenever autonomous robots are deployed, humans will still be in the loop, at least legally. If a robot does do something ethically wrong, despite its programming, the software engineer or the builder of the robot will likely be held accountable, said Michael Anderson at Franklin and Marshall University.