What OpenAI and GitHub’s ‘AI pair programmer’ means for the software industry

  • “OpenAI has once again made the headlines, this time with Copilot, an AI-powered programming tool jointly built with GitHub. Built on top of GPT-3, OpenAI’s famous language model, Copilot is an autocomplete tool that provides relevant (and sometimes lengthy) suggestions as you write code.
  • There’s still no information on how much the official Copilot will cost. But hourly wages for programming talent start at around $30 and can reach as high as $150. Even saving a few hours of programming time or giving a small boost to development speed would probably be enough to cover the costs of Copilot.”
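The cost argument in that excerpt can be put as a quick back-of-envelope calculation. The wage range comes from the text; the subscription price is a hypothetical placeholder, since no official price had been announced:

```python
# Back-of-envelope: hours of developer time a tool must save per month
# to pay for itself, at the wage range quoted above.
def break_even_hours(monthly_price: float, hourly_wage: float) -> float:
    """Hours saved per month needed to cover a monthly subscription."""
    return monthly_price / hourly_wage

# $10/month is a hypothetical price, not an announced figure.
HYPOTHETICAL_PRICE = 10.0
for wage in (30, 150):  # hourly wage range quoted in the excerpt
    hours = break_even_hours(HYPOTHETICAL_PRICE, wage)
    print(f"At ${wage}/hr, the tool pays for itself after {hours:.2f} hours saved")
```

Even at the low end of the wage range, saving well under an hour of work per month would cover a price in that ballpark.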


Microsoft teamed up with OpenAI to build a massive AI supercomputer in Azure

  • “At its Build developer conference, Microsoft today announced that it has teamed up with OpenAI…to create one of the world’s fastest supercomputers.
  • Microsoft says that the 285,000-core machine would have ranked in the top five of the TOP500 supercomputer rankings.
  • To be in the top five of supercomputers, a machine would currently have to reach more than 23,000 teraflops (a teraflop is already a per-second rate: one trillion floating-point operations per second). It’s also worth noting that the No. 1 machine, the IBM Power System-based Summit, reaches over 148,000 teraflops”
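The figures quoted above can be compared directly. The numbers are the ones in the excerpt; the ratio is just illustrative arithmetic:

```python
# Performance figures quoted in the excerpt, in teraflops (10^12 FLOP/s).
TOP5_THRESHOLD = 23_000   # roughly what a top-five TOP500 entry needed
SUMMIT = 148_000          # IBM's Summit, the No. 1 machine at the time

# Summit outpaces the top-five cutoff by a wide margin.
ratio = SUMMIT / TOP5_THRESHOLD
print(f"Summit is about {ratio:.1f}x the top-five threshold")
```

In other words, clearing the top-five bar still leaves a machine roughly a factor of six behind the then-leader.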


Microsoft invests $1 billion in artificial intelligence lab co-founded by Elon Musk

“The companies said Monday that they will build a hardware and software platform of ‘unprecedented scale’ within Microsoft’s cloud service provider Azure that will train and run increasingly advanced AI models. Microsoft will also become OpenAI’s preferred partner for selling its technologies and the two will jointly develop Azure’s supercomputing technology.”


AI Weekly: AI developers need to check themselves

“Encouragingly, OpenAI isn’t standing alone in this. Researchers from Google, Microsoft, and IBM joined forces in February to launch Responsible AI Licenses (RAIL), a set of end-user and source code license agreements with clauses restricting the use, reproduction, and distribution of potentially harmful AI technology. Julia Haines, a senior user experience researcher at Google in San Francisco, described RAIL as an “ever-evolving entity rooted in [conversation] with the broader community” — both to develop licenses and to stay abreast of emerging AI use cases.”