TECHNOLOGY

Experts sound alarm over ‘malicious use’ of AI

February 21, 2018




Paris — Artificial intelligence (AI) could be deployed by dictators, criminals and terrorists to manipulate elections and use drones in terrorist attacks, more than two dozen experts said Wednesday as they sounded the alarm over misuse of the technology.

In a 100-page analysis, they outlined a rapid growth in cyber crime and the use of “bots” to interfere with news gathering and penetrate social media among a host of plausible scenarios in the next five to 10 years.

“Our report focuses on ways in which people could do deliberate harm with AI,” said Sean O hEigeartaigh, Executive Director of the Cambridge Center for the Study of Existential Risk.

“AI may pose new threats, or change the nature of existing threats, across physical, political and cyber security,” he told AFP.

The common practice, for example, of “phishing” — sending emails seeded with malware or designed to finagle valuable personal data — could become far more dangerous, the report detailed.

In the political sphere, unscrupulous or autocratic leaders can already use advanced technology to sift through mountains of data collected from omnipresent surveillance networks to spy on their own people.

“Dictators could more quickly identify people who might be planning to subvert a regime, locate them, and put them in prison before they act,” the report said.

Likewise, targeted propaganda along with cheap, highly believable fake videos have become powerful tools for manipulating public opinion “on previously unimaginable scales”.

An indictment handed down by US special prosecutor Robert Mueller last week detailed a vast operation to sow social division in the United States and influence the 2016 presidential election in which so-called “troll farms” manipulated thousands of social network bots, especially on Facebook and Twitter.

Another danger zone on the horizon is the proliferation of drones and robots that could be repurposed to crash autonomous vehicles, deliver missiles, or threaten critical infrastructure to extract ransom.

“Personally, I am particularly worried about autonomous drones being used for terror and automated cyber attacks by both criminals and state groups,” said co-author Miles Brundage, a researcher at Oxford University’s Future of Humanity Institute.

The report details a plausible scenario in which an office-cleaning SweepBot fitted with a bomb infiltrates the German finance ministry by blending in with other machines of the same make.

The intruding robot behaves normally — sweeping, cleaning, clearing litter — until its hidden facial recognition software spots the minister and closes in.

“A hidden explosive device was triggered by proximity, killing the minister and wounding nearby staff,” according to the sci-fi storyline.

“This report has imagined what the world could look like in the next five to 10 years,” O hEigeartaigh said.

Another area of concern is the expanded use of automated lethal weapons.

Last year, more than 100 robotics and AI entrepreneurs — including Tesla and SpaceX CEO Elon Musk, and British astrophysicist Stephen Hawking — petitioned the United Nations to ban autonomous killer robots, warning that the digital-age weapons could be used by terrorists against civilians.

“Lethal autonomous weapons threaten to become the third revolution in warfare,” after the invention of machine guns and the atomic bomb, they warned in a joint statement, also signed by Google DeepMind co-founder Mustafa Suleyman.

“We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.”

Contributors to the new report — entitled “The Malicious Use of AI: Forecasting, Prevention, and Mitigation” — also include experts from the Electronic Frontier Foundation, the Center for a New American Security, and OpenAI, a leading non-profit research company. — AFP

