U.S. Military Tests First “Autonomous Drone”

One oft-quoted example of the shortcomings of unmanned aircraft systems is that drones can't take off from or land at a normally crowded airport, or navigate busy airspace, without remote assistance. A human pilot stays in constant contact with Air Traffic Control, which tells him when he can land and when he can take off. Drones couldn't do this on their own, which made taking off and landing a sure collision course.

But now, the U.S. Navy has unveiled the first autonomous drone. The Daily Mail reports:

“The U.S. Navy’s latest unmanned drone has been deployed aboard an aircraft carrier ready to begin tests to see whether it is able to autonomously take off and land at sea. The X-47B stealth drone is the first unmanned aircraft designed to be piloted by artificial intelligence rather than by a remote human operator. In development for five years, the drone is designed to take off, fly a pre-programmed mission then return to base in response to a few mouse clicks from its operator.”

Soon, military officials will reminisce about the “old days” when they had to have remote drone operators controlling the unmanned aircraft overseas. As drone technology continues to progress, scientists anticipate developing autonomous robots for the battlefield that can make decisions based on their own programming. Professor Ronald Arkin over at Georgia Tech said that drones will have better judgment than human soldiers:

“It is not my belief that an unmanned system will be able to be perfectly ethical in the battlefield, but I am convinced that they can perform more ethically than human soldiers are capable of.”

It is true that robots and computers will only do what they're programmed to do, without having to deal with emotions or psychological disorders. A soldier's judgment might be clouded by fear, anger or simple apathy, which can result in the death of innocent people. Drones don't have to deal with any of that; they just ask themselves one question over and over again: “1 or 0?”

The military also wants technology that allows unmanned aircraft to predict the behavior of other aircraft. It wants algorithms that will enable the aircraft to act on its own while factoring in its environment. It wants to automate and streamline the enemy-killing process.

The Pentagon has promised, however, not to allow these autonomous drones to kill anybody without authorization from a human. I'm not sure that makes me feel any more secure, considering that hundreds or thousands of innocent people have already been killed in drone strikes under the Obama administration, with express approval from a variety of humans.

The U.S. should have access to the most sophisticated weapons technology available. My dad works at Georgia Tech in the field of aerospace engineering. While he specializes in aeroelasticity (a word still not recognized by AutoCorrect) and rotorcraft, he also consults with the military and agencies like the FAA to help develop sophisticated weapons. This is an extremely important field, as it helps ensure our national defense. We should be able to defend ourselves from any sort of attack and to conduct reconnaissance on our enemies in wartime. Drone technology does just those things and will continue to be very important in the future.

The problem with drones is the same as with any other form of technology: it can be misused to do terrible things if it ends up in the wrong hands. It's not the technology itself that is bad. Our government is bad. I don't like that drone strikes routinely kill people who pose no threat to anyone at all and who are not connected to any terrorist organization. Innocent men, women and children do die as collateral damage in drone strikes, and making drones more autonomous seems likely only to make matters worse.