NVIDIA announced that Facebook will power its next-generation computing system with the NVIDIA Tesla Accelerated Computing Platform, enabling it to drive a broad range of machine learning applications.
While training complex deep neural networks for machine learning can take days or weeks on even the fastest computers, the Tesla platform can cut this time by a factor of 10 to 20. As a result, developers can innovate more quickly and train more sophisticated networks, delivering improved capabilities to consumers.
Facebook is the first company to adopt NVIDIA Tesla M40 GPU accelerators, introduced last month, to train deep neural networks. The addition of Tesla M40 GPUs will help Facebook make new advances in machine learning research and enable teams across its organisation to use deep neural networks in a variety of products and services. The GPUs will play a key role in Facebook AI Research's (FAIR) purpose-built Big Sur computing platform, a system designed specifically for neural network training.
“Deep learning has started a new era in computing,” said Ian Buck, VP of accelerated computing at NVIDIA. “Enabled by big data and powerful GPUs, deep learning algorithms can solve problems never possible before. Huge industries from web services and retail to healthcare and cars will be revolutionised. Our goal is to provide researchers and companies with the most productive platform to advance this exciting work.”
In addition to reducing neural network training time, GPUs offer a number of other advantages.
Their architectural compatibility from generation to generation provides seamless acceleration for
future GPU upgrades. The Tesla platform’s growing global adoption also facilitates open collaboration with researchers around the world, fuelling new waves of discovery and innovation
in the machine learning field.
NVIDIA worked with Facebook engineers on the design of Big Sur, optimising it to deliver
maximum performance for machine learning workloads, including the training of large neural
networks across multiple Tesla GPUs. Twice as fast as Facebook’s existing system, Big Sur
will enable the company to train twice as many neural networks – and to create neural networks
that are twice as large – which will help it develop more accurate models and new classes of
advanced applications.
“The key to unlocking the knowledge necessary to develop more intelligent machines lies in the
capability of our computing systems,” said Serkan Piantino, Engineering Director for FAIR. “Most
of the major advances in machine learning and AI in the past few years have been contingent on tapping into powerful GPUs and huge data sets to build and train advanced models.”
Big Sur represents the first time a computing system specifically designed for machine learning
and AI research will be released as an open source solution. Committed to doing its AI work in the open and sharing its findings with the community, Facebook intends to work with its partners to open source Big Sur specifications via the Open Compute Project. This unique approach will make it easier for AI researchers worldwide to share and improve techniques, enabling future innovation in machine learning by harnessing the power of GPU accelerated computing.