Slaughterbots: A Dystopian Future of Autonomous Killer Drones
Nov 16, 2017 2:20:51 GMT
AkashicRebel likes this
Post by Mad Scientist on Nov 16, 2017 2:20:51 GMT
Slaughterbots Video Depicts a Dystopian Future of Autonomous Killer Drones.
Produced by an Elon Musk and Stephen Hawking-backed organization, the video appears as members of the UN convene in Geneva to discuss autonomous weapons.
A graphic new video posits a very scary future in which swarms of killer microdrones are dispatched to kill political activists and US lawmakers. Armed with explosive charges, the palm-sized quadcopters use real-time data mining and artificial intelligence to find and kill their targets.
The makers of the seven-minute film, titled Slaughterbots, are hoping the startling dramatization will draw attention to what they view as a looming crisis: the development of lethal autonomous weapons that select and fire on human targets without human guidance.
The Future of Life Institute, a nonprofit organization dedicated to mitigating existential risks posed by advanced technologies, including artificial intelligence, commissioned the film. Founded by a group of scientists and business leaders, the institute is backed by AI-skeptics Elon Musk and Stephen Hawking, among others.
The institute is also behind the Campaign to Stop Killer Robots, a coalition of NGOs that has banded together to call for a preemptive ban on lethal autonomous weapons.
The timing of the video is deliberate. The film will be screened this week at the United Nations in Geneva during a meeting of the Convention on Certain Conventional Weapons. Established in 1980, the convention is a series of framework treaties that prohibits or restricts weapons considered to cause unnecessary or unjustifiable suffering. For example, the convention enacted a 1995 protocol banning weapons, such as lasers, specifically designed to cause blindness.
As of 2017, 125 nations have pledged to honor the convention’s resolutions, including all five permanent members of the UN Security Council — China, France, Russia, the United Kingdom, and the United States.
The Campaign to Stop Killer Robots is hosting a series of meetings at this year's event to propose a worldwide ban on lethal autonomous weapons, which could potentially be developed as flying drones, self-driving tanks, or automated sentry guns. While no nation or state is openly deploying such weaponry, it's widely assumed that various military groups around the world are developing lethal weapons powered by artificial intelligence.
Advocates for a ban on lethal autonomous weapons argue there is a clear moral imperative: Machines should never decide whether a human lives or dies.
UN Hosting Conference on Autonomous Weapons.
The technologies depicted in the short film are all based on viable systems that are up and running today, such as facial recognition, automated targeting, and weaponized aerial drones.
“This short film is more than just speculation,” said Stuart Russell, professor of computer science at the University of California, Berkeley, and a pioneer in the field of artificial intelligence. “It shows the results of integrating and miniaturizing technologies we already have.”
Representatives from more than 70 states are expected to attend the Geneva meeting on lethal autonomous weapons systems this week, according to a statement from the Campaign to Stop Killer Robots. Representatives from the scientific and technical communities will be stating their case to the assembled delegates.
“Allowing machines to choose to kill humans will be devastating to our security and our freedom,” Russell says in a short commentary at the end of the video. “Thousands of my fellow researchers agree. We have an opportunity to prevent the future you just saw, but the window to act is closing fast.”
Source: