"Creating killer robots is a very, very bad idea." Creating killer robots is a very, very bad idea.

Elon Musk recently expressed his strong opposition to AI being used to create killer robots. We are not talking about Terminators yet, but about robotic systems capable of performing some tasks that are usually the responsibility of soldiers. The military's interest in this topic is understandable, but their far-reaching plans frighten many.

But modern commanders are not the only ones who have dreamed of machines that could replace ten, or even a hundred, soldiers at once. Leaders of different eras entertained the same thoughts, and some of those ideas were actually realized, looking rather impressive for their time.

Da Vinci's Robot Knight


Leonardo was a genius in almost every field he turned his attention to. In the 15th century he created a robot knight (of course, the word "robot" was not in use then).

The machine could sit, stand, walk, and move its head and arms. Its creator achieved all this with a system of levers, gears and pulleys.

The knight was recreated in our era: a working prototype, built "based on" the Da Vinci design by Mark Rosheim, appeared in 2002.

Tesla's Remote-Controlled Boat


In 1898, inventor Nikola Tesla showed the world an invention that was the first of its kind: a remotely controlled vehicle (a small boat). The demonstration was held in New York. Tesla controlled the boat, and it maneuvered and performed various actions as if by magic.

Tesla later tried to sell another invention to the US military: something like a radio-controlled torpedo. The military, for whatever reason, declined. Notably, he described his creation not as a torpedo but as a robot, a mechanical man capable of performing difficult work in place of its creators.

Radio-Controlled Tanks of the USSR



Soviet engineers were no slouches either. In 1940 they created radio-controlled combat vehicles based on the T-26 light tank. The control station had an operating range of more than a kilometer.

The operators of these military "terminators" could fire machine guns, a cannon and a flamethrower. The disadvantage of the technology was the complete absence of feedback: the operator could only observe the tank's actions directly, from a distance. Naturally, the effectiveness of the operator's actions was relatively low.

This was the first example of a military robot in action.

Goliath


The Nazis created something similar, but instead of fitting conventional tanks with radio control, they built miniature remotely controlled tracked vehicles. The Goliaths were packed with explosives. The idea was simple: the nimble little machine crept up to a full-size enemy tank and, on the operator's command, destroyed everything around it in an explosion. The Germans built both an electric version of the system and a mini-tank with an internal combustion engine. In total, about 7,000 such systems were produced.

Semi-automatic anti-aircraft guns


These systems were also developed during World War II. Norbert Wiener, the founder of cybernetics, had a hand in their creation. He and his team worked on anti-aircraft fire-control systems that adjusted their own aim: they incorporated technology that predicted where an enemy aircraft would appear next.
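In its simplest form, the idea is lead prediction: estimate the target's velocity and aim where it will be when the shell arrives. Below is a minimal, purely illustrative Python sketch of that idea; it is a toy under strong assumptions (constant target velocity, constant shell speed), not Wiener's actual statistical predictor, and the function name and numbers are invented for the example.

```python
import numpy as np

def predict_aim_point(track, shell_speed, gun_pos, steps=10):
    """Return the point to aim at so the shell and the aircraft arrive together.

    track       -- list of observed positions (metres), one fix per second
    shell_speed -- average shell speed, metres per second
    gun_pos     -- position of the gun, metres
    """
    p_prev, p_last = np.asarray(track[-2], float), np.asarray(track[-1], float)
    velocity = p_last - p_prev          # metres per second (1 s between fixes)

    # Fixed-point iteration: the flight time depends on the aim point,
    # which in turn depends on the flight time.
    t = 0.0
    for _ in range(steps):
        aim = p_last + velocity * t
        t = np.linalg.norm(aim - np.asarray(gun_pos, float)) / shell_speed
    return p_last + velocity * t

# Aircraft seen at two consecutive one-second fixes, flying east at 100 m/s.
observed = [(1000.0, 0.0, 3000.0), (1100.0, 0.0, 3000.0)]
print(predict_aim_point(observed, shell_speed=800.0, gun_pos=(0.0, 0.0, 0.0)))
```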

Smart weapons of our time


During the Vietnam War, the US military, seeking to gain the upper hand, created the first laser-guided weapons as well as autonomous aerial vehicles, in effect the first drones.

True, they still required human help in choosing a target, but they were already close to what we have now.

Predator


Probably everyone has heard of these drones. Armed MQ-1 Predators were deployed by the US military a month after the events of 9/11. Today Predators are the most common military drones in the world. They also have a larger relative, the MQ-9 Reaper UAV.

Sappers


In addition to killer robots, there are also bomb-disposal (sapper) robots. They are now very common and have been in use for years in Afghanistan and other hot spots. Notably, these robots were developed by iRobot, the same company that makes the world's most popular cleaning robots, the Roomba and the Scooba. In 2004, 150 of these robots (the sappers, not the vacuum cleaners) were produced; four years later the figure was already 12,000.

Now the military has gone all out. Artificial intelligence (in its weak, narrow form) promises great opportunities, and the US intends to take full advantage of them. A new generation of killer robots is being created, equipped with cameras, radars, lidars and weapons.

It is these machines that frighten Elon Musk, along with many other bright minds from a wide range of fields.

While Prime Minister Dmitry Medvedev and Arkady Volozh were riding in a driverless Yandex.Taxi around Skolkovo, military engineers were figuring out how to adapt driverless vehicle technology to create new weapons.

In reality, the technology is not quite what it seems. The problem with all technological evolution is that the line between commercial "everyday" robots and military killer robots is incredibly thin, and crossing it costs nothing. Today they choose a route; tomorrow they will be able to choose which target to destroy.
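To see how thin that line is in code terms, consider a deliberately toy sketch (every name below is invented for illustration and resembles no real robotics stack, assuming a simplified perceive-then-plan loop): the perception and path-planning code is identical in both cases, and only the objective that selects the goal changes.

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    label: str
    position: tuple  # (x, y) in metres

def plan_path(start, goal):
    # Stand-in for a real planner: just a straight line of two waypoints.
    return [start, goal]

def mission_step(robot_position, detections, objective):
    """One control cycle: perceive, pick a goal, plan a route to it.
    The perception and planning code never changes; only `objective` does."""
    goal = objective(detections)
    return plan_path(robot_position, goal.position)

# "Commercial" objective: drive to the delivery address.
civilian = lambda objs: next(o for o in objs if o.label == "delivery_address")
# "Military" objective: drive at whatever the operator marked as hostile.
military = lambda objs: next(o for o in objs if o.label == "designated_target")

scene = [DetectedObject("delivery_address", (10.0, 5.0)),
         DetectedObject("designated_target", (3.0, 8.0))]
print(mission_step((0.0, 0.0), scene, civilian))   # route to a customer
print(mission_step((0.0, 0.0), scene, military))   # route to a target
```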

This is not the first time in history that technological progress has called into question the very existence of humanity: first scientists created chemical, biological and nuclear weapons, and now "autonomous weapons", that is, robots. The only difference is that until now it was weapons of "mass destruction" that were considered inhumane, precisely because they do not choose whom they kill. Today the perspective has changed: a weapon that kills with particular discrimination, choosing victims to its own taste, seems far more immoral. And while a warlike power might be deterred from using biological weapons by the knowledge that everyone around would suffer, with robots things are more complicated: they can be programmed to destroy a specific group of targets.

In 1942, when American writer Isaac Asimov formulated the Three Laws of Robotics, it all seemed exciting but completely unrealistic. These laws state that a robot cannot and must not harm or kill a human being, and must unquestioningly obey human will, except where such orders would contradict the first imperative. Now that autonomous weapons have become a reality and may well fall into the hands of terrorists, it turns out that programmers somehow forgot to build Asimov's laws into their software. That means robots can pose a danger, and no humane laws or principles will stop them.

A Pentagon-developed missile detects targets on its own thanks to its software, artificial intelligence (AI) identifies targets for the British military, and Russia is showing off unmanned tanks. Colossal amounts of money are being spent in various countries on developing robotic and autonomous military equipment, although few people want to see it in action. Just as most chemists and biologists have no interest in their discoveries eventually being used to create chemical or biological weapons, most AI researchers have no interest in creating weapons based on their work, because a serious public outcry would then harm their research programs.

In his speech at the opening of the United Nations General Assembly in New York on September 25, Secretary-General Antonio Guterres called AI technology a "global risk" alongside climate change and growing income inequality: "Let's call a spade a spade," he said. "The prospect of machines determining who lives is disgusting." Guterres is perhaps the only person who can urge the world's militaries to come to their senses: he previously dealt with conflicts in Libya, Yemen and Syria and served as High Commissioner for Refugees.

The problem is that as the technology develops further, robots will be able to decide for themselves whom to kill. And if some countries have such technologies while others do not, uncompromising androids and drones will predetermine the outcome of a potential battle. All of this contradicts all of Asimov's laws at once. Alarmists may seriously worry that a self-learning neural network will get out of control and kill not only the enemy but people in general. Yet the prospects are not bright even for perfectly obedient killing machines.

The most active work in the field of artificial intelligence and machine learning today is not in the military, but in the civilian sphere - at universities and companies like Google and Facebook. But much of this technology can be adapted for military use. This means that a potential ban on research in this area will also affect civilian developments.

In early October, the non-governmental Campaign to Stop Killer Robots sent a letter to the United Nations demanding that the development of autonomous weapons be restricted at the international legislative level. The UN made it clear that it supported the initiative, and in August 2017 Elon Musk and participants in the International Joint Conference on Artificial Intelligence (IJCAI) joined it. In practice, however, the United States and Russia oppose such restrictions.

The most recent meeting of the 70 countries party to the Convention on Certain Conventional Weapons (the "inhumane weapons" convention) took place in Geneva in August. Diplomats were unable to reach consensus on how a global AI policy could be implemented. Some countries (Argentina, Austria, Brazil, Chile, China, Egypt and Mexico) expressed support for a legislative ban on developing robotic weapons; France and Germany proposed a voluntary system of such restrictions; but Russia, the USA, South Korea and Israel said they had no intention of limiting research and development in this area. In September, Federica Mogherini, the European Union's top foreign and security policy official, said that such weapons "affect our collective security" and that the decision over life and death must in any case remain in human hands.

Cold War 2018

US defense officials believe autonomous weapons are necessary for the United States to maintain its military advantage over China and Russia, which are also investing in similar research. In February 2018, Donald Trump requested $686 billion for the country's defense in the next fiscal year. These outlays have always been quite high and fell only under the previous president, Barack Obama. Trump, unoriginally, justified the increase by pointing to technological competition with Russia and China. In 2016 the Pentagon budget allocated $18 billion for the development of autonomous weapons over three years. That is not much, but one very important factor needs to be taken into account here.

Most AI development in the US is carried out by commercial companies, so it is widely available and can be sold commercially to other countries. The Pentagon does not have a monopoly on advanced machine learning technologies. The American defense industry no longer conducts its own research the way it did during the Cold War, but draws on the work of startups from Silicon Valley, as well as Europe and Asia. In Russia and China, by contrast, such research is under the strict control of defense departments, which on the one hand limits the influx of new ideas and the development of the technology, but on the other guarantees government funding and protection.

According to The New York Times, military spending on autonomous military vehicles and unmanned aerial vehicles will exceed $120 billion over the next decade. This means the debate ultimately comes down not to whether to create autonomous weapons, but to how much independence to give them.

Today, fully autonomous weapons do not exist, but Air Force General Paul J. Selva, Vice Chairman of the Joint Chiefs of Staff, said back in 2016 that within 10 years the United States would have the technology to create weapons capable of deciding independently whom and when to kill. And while countries debate whether or not to restrict AI, it may already be too late.

Clearpath Robotics was founded six years ago by three college friends who shared a passion for building things. The company's 80 specialists test rough-terrain robots like Husky, a four-wheeled robot used by the US Department of Defense. They also make drones and even built a robotic boat called Kingfisher. However, there is one thing they are certain never to build: a robot that can kill.

Clearpath is the first and so far only robotics company to pledge not to create killer robots. The decision was made last year by the company's co-founder and CTO, Ryan Garipay, and it actually drew experts who liked Clearpath's unique ethical stance to the company. The ethics of robotics companies has lately come to the forefront. You see, we already have one foot in a future where killer robots exist, and we are not yet ready for them.

Of course, there is still a long way to go. The South Korean company Dodam Systems, for example, is building an autonomous robotic turret called the Super aEgis II. It uses thermal imaging cameras and laser rangefinders to identify and attack targets at distances of up to 3 kilometers. The US is also reportedly experimenting with autonomous missile systems.

Two steps away from the Terminators

Military drones like the Predator are currently piloted by humans, but Garipay says they will become fully automatic and autonomous very soon. And that worries him. A lot. "Lethal autonomous weapons systems could be rolling off the assembly line right now. But lethal weapons systems built in accordance with ethical standards are not even in the plans."

For Garipay, the problem is one of international law. In war there are always situations in which the use of force seems necessary but also endangers innocent bystanders. How do you build killer robots that will make the right decision in every situation? How do we even determine for ourselves what the right decision should be?

We already see similar problems with autonomous vehicles. Say a dog runs across the road. Should a robotic car swerve to avoid hitting the dog, putting its own passengers at risk? What if it is not a dog, but a child? Or a bus? Now imagine a war zone.

“We can't agree on how to write a manual for a car like this,” says Garipay. “And now we also want to move to a system that should independently decide whether to use lethal force or not.”
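A toy example makes the dilemma concrete. Any such "manual" ultimately has to be written as code, and that code forces someone to put explicit numbers on outcomes. The Python sketch below is pure illustration with arbitrary, made-up weights and names; it resembles no real autonomous-driving software.

```python
# The weights are the whole problem: someone has to decide, in advance and
# in numbers, how a dog compares with a passenger, a child, or a bus.
HARM_WEIGHTS = {"dog": 1, "passenger": 50, "child": 100, "bus_occupants": 500}

def choose_maneuver(options):
    """Pick the maneuver with the lowest expected harm score.
    Each option maps affected parties to the probability they are harmed."""
    def expected_harm(option):
        return sum(HARM_WEIGHTS[party] * p for party, p in option["risks"].items())
    return min(options, key=expected_harm)

options = [
    {"name": "brake hard",  "risks": {"dog": 0.9, "passenger": 0.05}},
    {"name": "swerve left", "risks": {"dog": 0.1, "passenger": 0.30}},
]
print(choose_maneuver(options)["name"])  # the answer depends entirely on the weights
```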

Make cool things, not weapons

Peter Asaro has spent the last few years lobbying the international community for a ban on killer robots as a co-founder of the International Committee for Robot Arms Control. He believes the time has come for "a clear international ban on their development and use." This, he says, would let companies like Clearpath continue making cool things "without worrying that their products could be used to violate people's rights and threaten civilians."

Autonomous missiles interest the military because they solve a tactical problem. When remotely controlled drones operate in combat environments, for example, the enemy often jams their sensors or network connection so that the human operator can no longer see what is happening or control the drone.

Garipay says that instead of developing missiles or drones that can independently decide which target to attack, the military should spend money on improving sensors and anti-jamming technology.

"Why don't we take the investment people want to make in building autonomous killer robots and put it into making existing technologies more effective?" he says. "If we set ourselves that challenge and overcome that barrier, we can make this technology work for the benefit of people, not just the military."

Recently, conversations about the dangers of artificial intelligence have also become more frequent. Elon Musk worries that runaway AI could destroy life as we know it. Last month, Musk donated $10 million to artificial intelligence research. One of the important questions about how AI will affect our world is how it will merge with robotics. Some, like Baidu researcher Andrew Ng, worry that the coming AI revolution will take people's jobs. Others, like Garipay, fear it could take their lives.

Garipay hopes that his fellow scientists and machine builders will think about what they are doing. That's why Clearpath Robotics took the side of the people. “While we as a company can’t put $10 million on it, we can put our reputation on it.”

A large group of scientists, industry leaders and NGOs has launched the Campaign to Stop Killer Robots, dedicated to preventing the development of autonomous combat weapons systems. Signatories include Stephen Hawking, Noam Chomsky, Elon Musk and Steve Wozniak.

These big names are generating a lot of attention and lending legitimacy to the idea that killer robots, once considered science fiction, are in fact fast approaching reality.

An interesting study published in the International Journal of Cultural Research takes a different approach to the idea of "killer robots" as a cultural concept. The researchers argue that even the most advanced robots are just machines, like everything else humanity has ever made.

"The thing is, the 'killer robot' as an idea did not come out of thin air," said co-author Tero Karppi, an assistant professor of media theory at the University at Buffalo. "It was preceded by methods and technologies that made thinking about and developing these systems possible."

In other words, killer robots are something we worried about long before they could exist. The authors explore the theme in films such as The Terminator and I, Robot, which imagine that in the distant future robots will end up enslaving the human race.

"Over recent decades, the expanded use of unmanned weapons has dramatically changed warfare, bringing new humanitarian and legal challenges. There has now been rapid advancement in the technology, resulting from efforts to develop fully autonomous weapons. These robotic weapons would be able to select and fire on targets on their own, without any human intervention."

The researchers respond that such alarmist, dystopian scenarios reflect a "techno-deterministic" worldview, in which technological systems are granted so much autonomy that they can become destructive not only for society but for the entire human race.

But what if we coded machine intelligence in such a way that robots couldn't even tell the difference between a human and a machine? It's an intriguing idea: if there is no "us" and "them" there can be no "us versus them."

Indeed, Karppi suggested that we may be able to control how future machines will think about people on a fundamental level.

If we want to change how these systems develop, now is the time: ban lethal autonomous weapons outright and address the root causes of this dilemma, so that the development of autonomous killing machines can truly be avoided.
