AI acceptance and the man on the Clapham omnibus

“The issues of ethics in AI revolve around four key tenets: security, transparency, fairness and liability. Each of these concerns, when addressed, promotes trust in the application of artificial intelligence. In other words, the trust generated by ethical use can be viewed as a driving factor in consumer acceptance of AI.


The Facebook–Cambridge Analytica scandal put our data in the spotlight, and GDPR made us think about where our personal data goes.


Google Duplex is a perfect example of the role transparency plays in the reasonable person’s acceptance of an AI tool. Initially, Google Duplex was rejected by many because of its lack of transparency.


And, as Amazon’s AI recruitment program shows, the reasonable person cares about this bias as well. The ecommerce giant scrapped its AI recruitment tool after it taught itself to discriminate against women. The result was a widespread backlash against the use of AI in recruitment.


Artificial intelligence is now making its way into oncology. Here, an incorrect recommendation can have serious consequences for the individual involved. When doctors viewed AI recommendations as unsafe, many hospitals pulled the plug.”

#MWC19: AI requires innovation, values, and trust

  • “During an MWC keynote, a range of experts and policymakers explained the keywords they believe are behind ensuring responsible AI deployments.”
  • “With the combination of AI, automation, blockchain, 5G… we’re at a time when there’s a convergence coming together at scale for one of those moments which changes how business gets done,” he says.
  • “AI is not only dynamising economies and facilitating lives,” says Gurría. “It’s also helping people make better predictions and better decisions, whether it’s the shop floor manager or a doctor in the operating room.”

Facebook backs Institute for Ethics in Artificial Intelligence with $7.5 million

“Facebook will donate $7.5 million for the creation of The Institute for Ethics in Artificial Intelligence, a research center that will explore topics such as transparency and accountability in medical treatment, and human rights in human–AI interaction.

The announcement was made today during a speech by COO Sheryl Sandberg at the Digital Life Design (DLD) conference in Munich, Germany, and is Facebook’s first investment in an independent center to study ethics in AI, a company spokesperson told VentureBeat in an email.”