Joe Burton, a professor at Lancaster University, UK, contends that AI and algorithms are more than mere tools used by national security agencies to thwart malicious online activities.
In a research paper recently published in the journal Technology in Society, Burton argues that AI and algorithms can also fuel polarisation, radicalism, and political violence, thereby becoming a threat to national security themselves. “AI is often framed as a tool to be used to counter violent extremism. Here is the other side of the debate,” said Burton.
The paper examines how AI has been securitised throughout its history and in media and popular culture depictions, and explores modern examples of AI having polarising, radicalising effects that have contributed to political violence.
The research cites the classic film series The Terminator, which depicted a holocaust committed by a ‘sophisticated and malignant’ AI, as having done more than anything else to frame popular awareness of AI and the fear that machine consciousness could lead to devastating consequences for humanity – in this case nuclear war and a deliberate attempt to exterminate a species.
“This lack of trust in machines, the fears associated with them, and their association with biological, nuclear, and genetic threats to humankind has contributed to a desire on the part of governments and national security agencies to influence the development of the technology, to mitigate risk and to harness its positive potentiality,” Burton said.
Sophisticated drones, such as those being used in the war in Ukraine, are, says Burton, now capable of full autonomy, including functions such as target identification and recognition.
While there has been a broad and influential campaign and debate, including at the UN, to ban ‘killer robots’ and to keep humans in the loop when it comes to life-or-death decision-making, the acceleration and integration of AI into armed drones has, he says, continued apace.
In cyber security – the security of computers and computer networks – AI is being used in a major way, with the most prevalent area being (dis)information and online psychological warfare, Burton said.
During the pandemic, he said, AI was seen as a positive in tracking and tracing the virus, but it also led to concerns over privacy and human rights. The paper examines AI technology itself, arguing that problems exist in its design, the data it relies on, how it is used, and its outcomes and impacts. “AI is certainly capable of transforming societies in positive ways but also presents risks which need to be better understood and managed,” Burton added.