“Data suggests that artificial intelligence was put to work in a cybersecurity capacity more than in any other area of service last year. With the use of deep learning algorithms, AI is leading the charge on a secure internet, specifically in detecting and deterring security intrusions: 44 percent of all AI applications are used for that purpose.
Azure Sentinel, Microsoft’s cloud-based SIEM (security information and event management) tool, is one example of an AI system that doesn’t just automate security operations tasks but uses algorithms to learn from its own actions. Like any AI, the more data it receives, the more accurately it identifies threats and responds to them.”
Usually, hackers break into traditional software systems to steal data, or they compromise industrial control systems and mislead them into taking the wrong actions. The core of an AI system, however, consists mainly of algorithms rather than data. This has created an illusion of absolute security among some people: with nothing inside to steal, the system seems safe by nature. Yet instead of stealing data, cyber attackers can feed AI systems wrong data to manipulate their ability to make the right decisions. For example, attackers could access Electronic Medical Records (EMR) to add or remove medical conditions in MRI scans, leading machine learning algorithms to a wrong diagnosis. The same could happen to financial data or to the operational data of critical equipment in a Nuclear Power Plant (NPP) or a smart grid.
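The attack described above can be sketched in a toy example. The scenario, data values, and labels below are purely illustrative (not from the cited sources): a simple k-nearest-neighbour classifier flags sensor readings as "normal" or "faulty", and an attacker who can write to the training store plants a few mislabeled records so that a clearly faulty reading is no longer detected.

```python
# Toy data-poisoning sketch. All readings, labels, and thresholds here are
# hypothetical, chosen only to make the effect visible.

def knn_classify(training, reading, k=3):
    """Majority vote among the k training records closest to `reading`."""
    nearest = sorted(training, key=lambda rec: abs(rec[0] - reading))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

# Clean training data: low readings are normal, high readings are faulty.
clean = [(1.0, "normal"), (1.2, "normal"), (0.9, "normal"),
         (5.0, "faulty"), (5.3, "faulty"), (4.8, "faulty")]

print(knn_classify(clean, 4.9))      # a high reading is flagged as faulty

# Poisoning: the attacker injects mislabeled points clustered around the
# reading they want to hide, without touching the existing records.
poisoned = clean + [(4.85, "normal"), (4.9, "normal"), (4.95, "normal")]

print(knn_classify(poisoned, 4.9))   # the same reading now passes as normal
```

The point of the sketch is that nothing was "stolen": the model code is unchanged, and only a handful of extra training records were needed to mask the fault.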
“In the US, the government is planning to deploy facial recognition at 20 airports by 2021.
In June 2018, the suspect in the shooting at a local newspaper in Annapolis, Maryland, was identified using facial recognition technology. Indeed, 55 percent of Americans approve of the use of facial recognition for public safety.
Last year, the New Delhi police were able to find 3,000 missing children in just four days thanks to a new facial recognition system.
In China, police have partnered with SenseNets, a company that develops facial recognition software, to bring charges against people involved in illegal gatherings in the province of Guangdong.
Last year, Amazon’s facial recognition systems falsely matched 28 members of Congress to criminal mugshots.”