Hyderabad: Artificial Intelligence is getting good at doing bad things, and doing them swiftly. Alerts put out by leading cybersecurity companies warn that attackers won't just target AI systems but will harness AI techniques themselves to amplify their own criminal activities.
Although AI will help automate manual tasks, enhance decision-making and augment other human activities, it can also be turned against many systems, including other AI systems.
Instead of hackers manually hunting for loopholes, AI itself can search for undiscovered vulnerabilities to exploit.
For instance, it can make phishing and other social engineering attacks even more sophisticated by generating extremely realistic video and audio, or well-crafted emails, designed to fool individuals. AI could also be used to launch disinformation campaigns.
Researchers have grown increasingly concerned about the vulnerability of these artificially intelligent systems to malicious input that can corrupt their logic and affect their operations.
Last week, the World Economic Forum released a report on adversarial AI, cautioning governments: "Changes in the threat landscape are already apparent. Criminals are already harnessing automated reconnaissance, target exploitation and network penetration end-to-end." Experts noted that attackers will employ AI to evade detection by security software, automate target selection, and probe infected environments before deploying later stages of an attack.
Hugh Thompson, chief technology officer at Symantec, said: "In some ways, the emergence of critical AI systems as attack targets will start to mirror the sequence seen 20 years ago with the internet, which rapidly drew the attention of cybercriminals and hackers, especially following the explosion of internet-based eCommerce. The fragility of some AI technologies will become a growing concern in 2019."