Google’s DeepMind asks what it means for AI to fail

The authors did just that, in a paper published in December: “We learn an adversarial value function which predicts from experience which situations are most likely to cause failures for the agent.” The agent here is a reinforcement learning agent.

“We then use this learned function for optimisation to focus the evaluation on the most problematic inputs.” They claim the method yields “large improvements over random testing” of reinforcement learning systems.
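The idea can be illustrated with a toy sketch. This is not DeepMind’s implementation: the environment, the bucketed failure estimator (a crude stand-in for their learned adversarial value function), and all names here are invented for illustration. It learns from past rollouts which start states tend to cause failures, then spends its evaluation budget on the most failure-prone candidates instead of sampling uniformly at random.

```python
import random

random.seed(0)

def run_episode(start):
    """Hypothetical environment: the agent fails more often
    the larger its start state is (failure prob == start)."""
    return random.random() < start  # True = failure

# 1. Collect experience from randomly sampled start states.
experience = [(s, run_episode(s))
              for s in (random.random() for _ in range(2000))]

# 2. Fit a simple failure predictor: mean observed failure rate
#    per bucket of the start state. This plays the role of the
#    learned "adversarial value function" in the quoted paper.
BUCKETS = 10
counts = [[0, 0] for _ in range(BUCKETS)]  # [failures, trials] per bucket
for s, failed in experience:
    b = min(int(s * BUCKETS), BUCKETS - 1)
    counts[b][0] += failed
    counts[b][1] += 1

def predicted_failure(s):
    b = min(int(s * BUCKETS), BUCKETS - 1)
    fails, trials = counts[b]
    return fails / trials if trials else 0.5

# 3. Focus evaluation on the most problematic candidate inputs:
#    rank fresh candidates by predicted failure probability and
#    evaluate only the top 50, versus 50 picked at random.
candidates = [random.random() for _ in range(1000)]
worst = sorted(candidates, key=predicted_failure, reverse=True)[:50]

random_hits = sum(run_episode(s) for s in candidates[:50])
guided_hits = sum(run_episode(s) for s in worst)
print(f"failures found: random={random_hits}, guided={guided_hits}")
```

With the same evaluation budget, the guided search surfaces far more failures than uniform sampling, which is the “large improvements over random testing” effect the authors describe, reproduced here only in miniature.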

https://www.zdnet.com/google-amp/article/theres-a-critical-human-role-in-the-evaluation-of-what-success-and-failure-means-in-ai/