“The issues of ethics in AI revolve around four key tenets: security, transparency, fairness, and liability. Each of these concerns, when addressed, promotes trust in the application of artificial intelligence. In other words, the trust generated by ethical use can be viewed as a driving factor in consumer acceptance of AI.
The Facebook-Cambridge Analytica scandal brought the security of our data into the spotlight, and GDPR made us think about where our personal data is going.
Google Duplex is a perfect example of the role that transparency plays in the reasonable person’s acceptance of an AI tool. Initially, Google Duplex faced rejection from many users due to its lack of transparency.
And, as Amazon’s AI recruitment program shows, the reasonable person cares about fairness as well. The e-commerce giant scrapped its AI recruitment tool after it taught itself to discriminate against women. The result was a widespread backlash against the use of AI in recruitment.
Artificial intelligence is now making its way into oncology, where an incorrect recommendation can have serious consequences for the individual involved. When doctors viewed AI recommendations as unsafe, many hospitals pulled the plug.”