Tuesday 14 November 2023:
The United Kingdom’s National Cyber Security Centre (NCSC) said on Tuesday (Nov 14) that artificial intelligence (AI) posed a threat to the upcoming national election. In its annual review, the NCSC said that this year saw the emergence of state-aligned actors as a new cyber threat to critical national infrastructure such as power, water and internet networks.
‘Russian actors sought to interfere in 2019 poll’
“The UK government assesses that it is almost certain that Russian actors sought to interfere in the 2019 general election,” NCSC said, and with the upcoming elections in the UK and the United States, “we can expect to see the integrity of our systems tested again.”
The centre also said that the UK and its allies could not be complacent about the threat of foreign cyber interference and attempts to influence the democratic process.
In the UK, the next general election will take place before the end of January 2025. Local and mayoral elections are scheduled in May 2024.
The NCSC said that when the election takes place, votes will be cast using pencil and paper, significantly reducing the chances of a cyber actor affecting the integrity of the results.
“However, the act of voting marks the end of the sprint, as a significant amount of cyber-resilience building needs to take place before this to secure the services which support our elections and the integrity of an open public discourse,” it added.
The government has established the Joint Election Security Preparedness Unit (JESP), which takes overall responsibility for coordinating electoral security and drives the government’s election preparedness.
Threat posed by AI
According to the NCSC, technologies such as AI might pose a threat from those looking to interfere with elections or otherwise undermine trust in the UK’s democratic system. It said the next election would be the first to take place against the backdrop of significant advances in AI.
However, rather than presenting new risks, AI’s ability to enable existing techniques posed the biggest threat.
The centre gave an example: “large language models will almost certainly be used to generate fabricated content, AI-created hyperrealistic bots will make the spread of disinformation easier and the manipulation of media for use in deepfake campaigns will likely become more advanced.”
The NCSC also said that even though the government was committed to countering the threat from online harms, it was important for people “to be aware that the threat landscape is changing and as with any kind of new technology, alongside realising the benefits, there is always potential for misuse.”