“‘The fact that [customers are] connected to the internet of things opens the door to the broader spectrum of payment activity. What that allows us to do is build deeper profiles, [which AI can use] to better predict if [a transaction] is something [the customer] would do,’ Monroe said.
This influx of new data means banks must look beyond authenticating login credentials. Users’ payment histories, how they hold their phones and type in their passwords, as well as other factors should be considered during the authentication process.”
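The multi-factor idea described above can be sketched in code. This is a hypothetical illustration, not any bank's actual system: the signal names, weights, and thresholds below are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class AuthSignals:
    """Hypothetical per-login signals, each scored 0.0 (typical) to 1.0 (anomalous)."""
    credential_match: float   # 0.0 = correct password on a known device
    typing_cadence: float     # deviation from the user's usual keystroke timing
    device_posture: float     # deviation in how the phone is held (sensor data)
    payment_history: float    # how unusual this transaction is for this user

# Illustrative weights; a real system would learn these from labelled fraud data.
WEIGHTS = {
    "credential_match": 0.4,
    "typing_cadence": 0.2,
    "device_posture": 0.15,
    "payment_history": 0.25,
}

def risk_score(s: AuthSignals) -> float:
    """Combine the behavioural signals into a single 0-1 risk score."""
    return (WEIGHTS["credential_match"] * s.credential_match
            + WEIGHTS["typing_cadence"] * s.typing_cadence
            + WEIGHTS["device_posture"] * s.device_posture
            + WEIGHTS["payment_history"] * s.payment_history)

def decide(s: AuthSignals, step_up_at: float = 0.3, block_at: float = 0.7) -> str:
    """Map the score to an action: allow, ask for a second factor, or block."""
    score = risk_score(s)
    if score >= block_at:
        return "block"
    if score >= step_up_at:
        return "step-up"   # e.g. ask for a one-time code
    return "allow"
```

The point of the sketch is that no single factor decides the outcome; a correct password with wildly atypical typing and device behaviour can still trigger a step-up challenge.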
- “Though the methods above are tricky for a hacker to complete, artificial intelligence could end up making the process of stealing your identity much easier. Researchers at New York University have created a tool that can generate fake fingerprints to unlock mobile devices.
- What’s more, researchers have demonstrated that deep artificial neural networks can be trained over time to recreate faces, or to generate entirely new ones. What is stopping someone from using these same tools to access your world?”
“The issues of ethics in AI revolve around four key tenets: security, transparency, fairness and liability. Each of these concerns, when addressed, promotes trust in the application of artificial intelligence. In other words, the trust generated by ethical use can be viewed as a driving factor in AI acceptance by consumers.
The Facebook Cambridge Analytica scandal brought our data into the spotlight, and GDPR made us think about where our personal data is going.
Google Duplex is a perfect example of the role that transparency plays in the reasonable person’s acceptance of an AI tool. Initially, Google Duplex faced rejection from many due to lack of transparency.
And, as Amazon’s recruitment AI program shows, the reasonable person cares about this bias as well. The ecommerce giant scrapped its AI recruitment tool when it taught itself to penalize female candidates. The result was a widespread backlash against the use of AI in recruitment.
Artificial intelligence is now making its way into oncology. Here, an incorrect recommendation has a serious impact on the individual involved. When doctors viewed AI recommendations as unsafe, many hospitals pulled the plug.”
“PelicanSecure brings together tools that use natural language processing and machine learning to analyse patterns of behaviour to flag up ‘subtle anomalies’ pointing to instances of fraud.
Factors like user location, spending patterns and unusual device configuration are all integrated into Pelican’s detection system.
Describing the role AI has to play in turning the tide against scammers, CEO Parth Desai says: “Traditional fraud detection methods are reactive in nature, meaning if a fraudster came up with a new idea to defraud an organization, the existing rules will fail to prevent it.
“AI, on the other hand, predicts those behaviours and protects against trending and future fraud typologies.
“AI, including machine learning, is unquestionably the future of fraud detection. Financial institutions are shifting towards AI gradually.
‘We believe the industry is still in the launching phase of this technology and that there is a lot more to explore over the course of the coming few years.’”
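As a rough illustration of the behavioural baselining Pelican describes, the sketch below flags a transaction whose amount, location, or device deviates from a customer's own history. The schema and thresholds are invented for the example; Pelican's actual feature set and models are not public.

```python
import statistics

def transaction_flags(amount, country, device_id, history):
    """Flag a transaction against a customer's own behavioural baseline.

    `history` is a list of dicts with keys 'amount', 'country', 'device_id'
    (a hypothetical schema). Returns a list of human-readable flags.
    """
    flags = []
    amounts = [t["amount"] for t in history]
    mean, stdev = statistics.mean(amounts), statistics.pstdev(amounts)
    if stdev > 0 and abs(amount - mean) / stdev > 3:
        flags.append("unusual amount")       # spending-pattern anomaly
    if country not in {t["country"] for t in history}:
        flags.append("new location")         # user-location anomaly
    if device_id not in {t["device_id"] for t in history}:
        flags.append("unknown device")       # device-configuration anomaly
    return flags
```

Each factor is judged relative to that customer's history rather than a global rule, which is what lets a system notice "subtle" anomalies that fixed rules miss.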
- “It is critical that organisations charged with protecting customer data take better steps to ensure that any cyber-attacks are quickly identified and stopped before damage is done. They should look towards smarter systems that utilise artificial intelligence to effectively and accurately identify genuine cyber threats in real time.
- It would be surprising if none of Marriott’s security tools had detected this attack over the past 4 years, but the alert may not have been prioritised amongst all of the noise, causing the security team to miss it.
- But the ability to instantly identify a threat’s location isn’t the only advantage of using AI. Artificial intelligence can also closely follow the path of an attack across devices and networks, building an accurate picture of the threat from all the information gathered by its multiple sensors.
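The idea of following an attack's path across devices can be illustrated with a toy correlation routine: alerts that share an indicator of compromise (for example, the same file hash) are grouped into one campaign and ordered by time. Real products weigh far more signals; everything here is a simplified assumption.

```python
from collections import defaultdict

def attack_path(alerts):
    """Reconstruct a likely attack path from per-device alerts.

    Each alert is a tuple (timestamp, device, indicator). Alerts sharing an
    indicator (e.g. the same malicious hash or C2 address) are assumed to
    belong to one campaign. Returns devices in order of first compromise.
    """
    by_indicator = defaultdict(list)
    for ts, device, indicator in alerts:
        by_indicator[indicator].append((ts, device))
    # Treat the indicator seen on the most devices as the campaign marker.
    campaign = max(by_indicator.values(), key=lambda a: len({d for _, d in a}))
    path, seen = [], set()
    for ts, device in sorted(campaign):
        if device not in seen:
            seen.add(device)
            path.append(device)
    return path
```

Ordering correlated alerts by time is the simplest way to turn a pile of isolated detections into a picture of lateral movement across a network.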
“One of the most challenging AI technologies for security teams is a very new class of algorithms called generative adversarial networks (GANs). In a nutshell, GANs can imitate or simulate any distribution of data, including biometric data.
To oversimplify how GANs work, they involve pitting one neural network against a second neural network in a kind of game. One neural net, the generator, tries to simulate a specific kind of data and the other, the discriminator, judges the first one’s attempts against real data — then informs the generator about the quality of its simulated data. As this progresses, both neural networks learn. The generator gets better at simulating data, and the discriminator gets better at judging the quality of that data. The product of this “contest” is a large amount of fake data produced by the generator that can pass as the real thing.
GANs are best known as the foundational technology behind those deep fake videos that convincingly show people doing or saying things they never did or said. Applied to hacking consumer security systems, GANs have been demonstrated — at least, in theory — to be keys that can unlock a range of biometric security controls.”
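The generator/discriminator game described above can be demonstrated end to end on a deliberately tiny problem: a two-parameter generator learning to mimic samples from a 1-D Gaussian, with the gradients of the logistic losses worked out by hand. This is a teaching sketch of the GAN training loop only, nothing like the deep networks used for deepfake or biometric generation.

```python
import math
import random

random.seed(0)

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-max(-60.0, min(60.0, s))))

# Real data: a Gaussian the generator never sees directly.
REAL_MEAN, REAL_STD = 4.0, 1.25

# Generator G(z) = wg*z + bg maps noise to fake samples;
# discriminator D(x) = sigmoid(wd*x + bd) scores how "real" x looks.
wg, bg = 1.0, 0.0
wd, bd = 0.1, 0.0
lr, batch, steps = 0.03, 32, 2000

for _ in range(steps):
    # Discriminator step: push D(real) up and D(fake) down.
    gw = gb = 0.0
    for _ in range(batch):
        x = random.gauss(REAL_MEAN, REAL_STD)
        d = sigmoid(wd * x + bd)
        gw += (1 - d) * x; gb += (1 - d)            # grad of log D(real)
        f = wg * random.gauss(0, 1) + bg
        d = sigmoid(wd * f + bd)
        gw -= d * f; gb -= d                        # grad of log(1 - D(fake))
    wd += lr * gw / batch; bd += lr * gb / batch

    # Generator step: adjust G so that D rates its fakes as real.
    gw = gb = 0.0
    for _ in range(batch):
        z = random.gauss(0, 1)
        f = wg * z + bg
        d = sigmoid(wd * f + bd)
        gw += (1 - d) * wd * z; gb += (1 - d) * wd  # grad of log D(fake)
    wg += lr * gw / batch; bg += lr * gb / batch

fakes = [wg * random.gauss(0, 1) + bg for _ in range(1000)]
fake_mean = sum(fakes) / len(fakes)
```

As the "contest" proceeds, the mean of the generator's output drifts from 0 toward the real mean of 4, exactly the dynamic the quote describes: the generator improves at simulating data precisely because the discriminator keeps improving at judging it.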
- “Machine learning is a very powerful technique for security—it’s dynamic, while rules-based systems are very rigid,” says Dawn Song, a professor at the University of California at Berkeley’s Artificial Intelligence Research Lab. “It’s a very manual intensive process to change them, whereas machine learning is automated, dynamic and you can retrain it easily.”
- “We will see an improved ability to identify threats earlier in the attack cycle and thereby reduce the total amount of damage and more quickly restore systems to a desirable state,” says Amazon Chief Information Security Officer Stephen Schmidt.
- A Microsoft system designed to protect customers from fake logins had a 2.8 percent rate of false positives.
- To do a better job of figuring out who is legit and who isn’t, Microsoft technology learns from the data of each company using it, customizing security to that client’s typical online behavior and history. Since rolling out the service, the company has managed to bring down the false positive rate to 0.001 percent.
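A minimal sketch of the per-customer baselining idea, with an invented API: instead of one global rule, each tenant gets its own model of "normal", and a login is flagged only when it is rare for that tenant on more than one axis. Microsoft's actual models are far more sophisticated and not public.

```python
from collections import Counter

class TenantBaseline:
    """Per-tenant login baseline (a simplified sketch).

    Flagging logins against *this* customer's history, rather than a global
    rule, is the mechanism that cuts false positives: a 3 a.m. login is
    normal for a night-shift operation and anomalous for a 9-to-5 shop.
    """

    def __init__(self, min_share=0.05):
        self.hours = Counter()
        self.countries = Counter()
        self.total = 0
        self.min_share = min_share  # how common a pattern must be to be "normal"

    def observe(self, hour, country):
        """Record one legitimate login in the tenant's history."""
        self.hours[hour] += 1
        self.countries[country] += 1
        self.total += 1

    def is_suspicious(self, hour, country):
        """Flag only logins rare for this tenant on BOTH axes."""
        if self.total == 0:
            return False  # no baseline yet; defer to other controls
        rare_hour = self.hours[hour] / self.total < self.min_share
        rare_country = self.countries[country] / self.total < self.min_share
        return rare_hour and rare_country
```

Requiring more than one rare attribute before alerting is one simple way a system trades a little recall for a large drop in false positives.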
- “A study by Accenture predicts that the AI healthcare market will reach $6.6 Bn by 2021, growing at a CAGR of 40%.
- A report by Juniper Research states that chatbots will save $8 Bn per annum in costs by 2022 across retail, ecommerce, banking and healthcare.
- The same research study also predicts that the share of chatbot interactions completed without human intervention will rise from 12% in 2017 to 75% in 2022.
- In 2017, Scanadu developed doc.ai. The application takes one task away from doctors and assigns it to the AI: interpreting lab results.
- Medical image diagnosis is another AI use case in healthcare. One of the most significant issues medical practitioners face is sifting through the volume of information available to them in electronic medical records (EMRs) and electronic health records (EHRs).
- Artificial Intelligence in Healthcare also talks about deep learning. Researchers are using deep learning to train machines to identify cancerous tissues with an accuracy comparable to a trained physician.
- Machine learning in healthcare can also enhance efforts in pathology, work traditionally left to pathologists, who often have to evaluate many images, searching for any trace of abnormality, before reaching a diagnosis.
- Another similar solution is Moon, developed by Diploid, which enables early diagnosis of rare diseases, allowing doctors to begin treatment earlier.
- Cybersecurity has become a significant concern for healthcare organizations, threatening to cost them $380 per patient record.
- The AiCure app, backed by the National Institutes of Health, helps monitor patients’ medication intake.”
“Hyderabad: Artificial intelligence is getting good at doing bad things swiftly, as is evident from alerts put out by leading cybersecurity companies warning that attackers won’t just target AI systems but will adopt AI techniques themselves to amplify their own criminal activities.
Although AI will help automate manual tasks and enhance decision-making and other human activities, it can also be used to attack many kinds of systems, including AI itself.
Instead of hackers finding loopholes, AI itself can search for undiscovered vulnerabilities that it can exploit.
For instance it can be used to make phishing and other social engineering attacks even more sophisticated by creating extremely realistic video and audio or well-crafted emails designed to fool individuals. AI could also be used to launch disinformation campaigns.
Researchers have been growing increasingly concerned about the vulnerability of these artificially intelligent systems to malicious input that can corrupt their logic and affect their operations.
The World Economic Forum released a report last week on adversarial AI, cautioning governments: “Changes in the threat landscape are already apparent. Criminals are already harnessing automated reconnaissance, target exploitation and network penetration end-to-end.” Experts noted that attackers will employ AI to evade detection by security software, automate target selection, and check infected environments before deploying later stages of an attack.
Symantec chief technology officer Hugh Thompson said: ‘In some ways, the emergence of critical AI systems as attack targets will start to mirror the sequence seen 20 years ago with the internet, which rapidly drew the attention of cybercriminals and hackers, especially following the explosion of internet-based eCommerce. The fragility of some AI technologies will become a growing concern in 2019.’”
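The "malicious input that can corrupt their logic" mentioned above is easiest to see against a linear model, where the gradient of the score is simply the weight vector. The toy evasion below nudges each feature of a flagged sample against its weight (an FGSM-style perturbation) until the detector scores it as clean; the detector and its weights are invented for the illustration.

```python
# A toy linear detector: a positive score means "malicious".
# The weights are hypothetical, not from any real product.
w = [2.0, -1.0, 1.5, -0.5]

def score(x):
    """Linear decision function w . x."""
    return sum(wi * xi for wi, xi in zip(w, x))

def sign(v):
    return (v > 0) - (v < 0)

x = [0.9, 0.1, 0.8, 0.2]   # a sample the detector correctly flags
eps = 0.6                   # attacker's per-feature perturbation budget

# For a linear model, the optimal evasion under a per-feature budget moves
# each feature against the sign of its weight, lowering the score maximally.
x_adv = [xi - eps * sign(wi) for wi, xi in zip(w, x)]
```

A change of at most 0.6 per feature flips the verdict from malicious to clean, which is why researchers worry: the smarter the model's features, the more surface there is for this kind of manipulation.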
- “Data mining is not true AI (more about that in just a bit), but how it is used illustrates another important trend involving AI and ML: the correlation between a bank’s size and the sophistication of its learning systems, with larger banks typically using more sophisticated systems than smaller ones. When it comes to data mining, for instance, 95 percent of large banks and 79 percent of mid-sized banks use it, the report found. Meanwhile, just 61 percent of small banks reported using data mining technology — a majority, but not nearly as prevalent as it is among larger FIs.
- True AI systems, by contrast, are used by only 5.5 percent of the financial institutions whose interviews informed the report’s findings. Far more popular, besides data mining, were less sophisticated technologies, including business rules management systems (BRMS), which enable companies to easily define, deploy, monitor and maintain new regulations, procedures, policies, market opportunities and workflows.