How Facebook deals with the fact AI is a mess on smartphones

Apple, they write, stands out: its “Metal” API for iOS runs on a consistent chip platform, and the GPUs in those chips are higher-performance on average, “making Metal on iOS devices with GPUs an attractive target for efficient neural network inference.” Even then, however, the results of a “rigorous” examination of inference speed across six generations of Apple’s “A” series chips show that within each generation of chip there is still “wide performance variability.”

“Programmability is a primary roadblock to using mobile co-processors/accelerators,” they write. 

The newest version of Facebook’s “PyTorch” framework, unveiled this year at the company’s developer conference, is designed to “accelerate AI innovation by streamlining the process of transitioning models developed through research exploration into production scale with little transition overhead.” It also supports the “Open Neural Network Exchange,” or ONNX, specification backed by Microsoft and others.

https://www.google.com/amp/s/www.zdnet.com/google-amp/article/how-facebook-deals-with-the-fact-ai-is-a-mess-on-smartphones/
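For a sense of what that research-to-production path looks like in practice, here is a minimal sketch (my own, not from the article) of exporting a small PyTorch model to the ONNX format so another runtime can pick it up. The TinyClassifier model and the output filename are placeholders, not anything Facebook describes.

```python
import torch
import torch.nn as nn

# Hypothetical toy model standing in for a model built during research exploration.
class TinyClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(64, 32),
            nn.ReLU(),
            nn.Linear(32, 10),
        )

    def forward(self, x):
        return self.net(x)

model = TinyClassifier().eval()
dummy_input = torch.randn(1, 64)  # an example input defines the exported graph's shapes

# torch.onnx.export traces the model and writes an ONNX graph to disk;
# "tiny_classifier.onnx" is a made-up filename for this sketch.
torch.onnx.export(model, dummy_input, "tiny_classifier.onnx",
                  input_names=["input"], output_names=["logits"])
```

The exported file can then be loaded by any ONNX-compatible runtime, which is the interoperability point the specification is meant to address.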
