- AMD EPYC processors and new AMD Instinct MI100 accelerator redefine performance for high-performance computing (HPC) and scientific research
- Support for next-generation AMD EPYC processors, codenamed Milan, in new HBv3 virtual machines announced by Microsoft
- AMD Instinct MI100 accelerator is the first accelerator to use the new AMD CDNA architecture
At this year’s SC20 virtual tradeshow, AMD is raising the bar on HPC. The company launched the new AMD Instinct MI100 accelerator with ROCm 4.0 open ecosystem support, showcased a growing list of AMD EPYC CPU and AMD Instinct accelerator-based deployments, and highlighted its collaboration with Microsoft Azure for HPC in the cloud. AMD also said it is on track to begin volume shipments of 3rd Gen EPYC processors with the Zen 3 core to selected HPC and cloud customers this quarter, ahead of the expected public launch in Q1 2021, aligned with OEM availability.
The Instinct MI100 accelerator enables a new class of accelerated systems and delivers true heterogeneous compute capabilities from AMD for HPC and artificial intelligence (AI). Designed to complement the 2nd Gen AMD EPYC processors, and built on the AMD Infinity Architecture, the AMD Instinct MI100 accelerator is the world’s fastest HPC GPU accelerator for scientific workloads and the first to surpass the 10 teraflops (FP64) performance barrier*.
Built on the new AMD CDNA architecture, the AMD Instinct MI100 GPU enables a new class of accelerated systems for HPC and AI when paired with 2nd Gen AMD EPYC processors. Supported by new accelerated compute platforms from Dell, HPE, Gigabyte and Supermicro, the MI100, combined with AMD EPYC CPUs and ROCm 4.0 software, is designed to propel new discoveries ahead of the exascale era. The CDNA (Compute DNA) architecture delivers nearly 1.7x higher FP64 performance than previous-generation AMD accelerators within the same power budget, reaching up to 11.5 TFLOPS of peak double precision (FP64) performance.
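The peak-throughput figures quoted for the MI100 follow from simple clock-rate arithmetic. As a back-of-the-envelope check (assuming the 7,680 stream processors listed on AMD's public spec sheet, a figure not stated in this article, together with the 1,502 MHz boost clock from AMD's own footnote), the FP32 and FP64 peaks can be reproduced like this:

```python
# Back-of-the-envelope check of the MI100 peak-FLOPS figures.
# Assumed from AMD's public spec sheet (not this article): 7,680 stream processors.
stream_processors = 7680
boost_clock_ghz = 1.502        # peak boost engine clock cited in AMD's footnote
flops_per_clock = 2            # one fused multiply-add counts as 2 FLOPs

fp32_tflops = stream_processors * flops_per_clock * boost_clock_ghz / 1000
fp64_tflops = fp32_tflops / 2  # CDNA runs vector FP64 at half the FP32 rate

print(round(fp32_tflops, 2))   # 23.07 -- matches the 23.1 TFLOPS FP32 figure
print(round(fp64_tflops, 2))   # 11.54 -- matches the "surpasses 10 TFLOPS FP64" claim
```

The same arithmetic explains why FP64 peak is exactly half of FP32 peak in the footnoted numbers.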
“No two customers are the same in HPC, and AMD is providing a path to today’s most advanced technologies and capabilities that are critical to support their HPC work, from small clusters on premise, to virtual machines in the cloud, all the way to exascale supercomputers,” said Forrest Norrod, Senior VP and GM, Data Center and Embedded Solutions Business Group, AMD.
“Combining AMD EPYC processors and Instinct accelerators with critical application software and development tools enables AMD to deliver leadership performance for HPC workloads.”
Microsoft Azure is using 2nd Gen AMD EPYC processors to power its HBv2 virtual machines (VMs) for HPC workloads. These VMs offer up to 2x the performance of first-generation HB-series virtual machines**, can support up to 80,000 cores for Message Passing Interface (MPI) jobs***, and take advantage of 2nd Gen AMD EPYC processors’ up to 45% more memory bandwidth than comparable x86 alternatives****. MPI is a standard for passing messages between the processes of a parallel job, allowing a single workload to run in parallel across many connected systems.
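As a rough illustration of the message-passing model described above (this is not Azure's stack, and it uses only Python's standard library rather than a real MPI implementation such as mpi4py), the sketch below splits a sum across two worker processes and gathers their partial results, the same scatter/reduce pattern an MPI job applies across thousands of cores:

```python
from multiprocessing import Process, Queue

def worker(rank, numbers, queue):
    # Each "rank" computes a partial result on its own slice of the data,
    # then sends it back over the queue (standing in for an MPI message).
    queue.put((rank, sum(numbers)))

def parallel_sum(data, nranks=2):
    queue = Queue()
    chunk = len(data) // nranks
    procs = [Process(target=worker, args=(r, data[r * chunk:(r + 1) * chunk], queue))
             for r in range(nranks)]
    for p in procs:
        p.start()
    # "Rank 0" collects the partial sums, loosely analogous to MPI_Reduce.
    total = sum(queue.get()[1] for _ in procs)
    for p in procs:
        p.join()
    return total

if __name__ == "__main__":
    print(parallel_sum(list(range(100))))  # 4950
```

In a real MPI job the ranks would live on different nodes and communicate over the interconnect, which is why memory bandwidth and network fabric dominate HPC VM design.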
HBv2 powered by 2nd Gen AMD EPYC processors also provides the bulk of the CPU compute power for the OpenAI supercomputing environment Microsoft announced in May 2020.
AMD EPYC processors have also helped HBv2 reach new cloud HPC milestones, such as a new record for cloud MPI scaling results with molecular dynamics simulation software NAMD, the top 20 results on the Graph500 supercomputing rating platform, and the first 1 TBps cloud HPC parallel file system. Across these and other application benchmarks, HBv2 is delivering 12x higher scaling than found elsewhere on the public cloud.
Adding on to its existing HBv2 HPC virtual machine powered by 2nd Gen AMD EPYC processors, Azure announced it will utilise next-generation AMD EPYC processors, codenamed Milan, for future HB-series VM products for HPC.
AMD noted that its EPYC processors and Instinct accelerators have the performance and capabilities to support a wide range of HPC workloads, from small clusters at research centres and commercial HPC to off-premise and cloud implementations all the way to exascale computing. Since SC19, there have been more than 15 supercomputing systems announced using AMD EPYC CPUs, Instinct GPUs, or both, the company said.
In the Asia Pacific region, this includes Pawsey Supercomputing Centre, which is using the Hewlett Packard Enterprise (HPE) Cray EX supercomputer architecture and future AMD EPYC CPUs and AMD Instinct accelerators. The supercomputer at Pawsey is anticipated to be Australia’s most powerful supercomputer. When it was announced in October 2020, Pawsey said the new supercomputer, when fully commissioned, will feature a 30-fold increase in computing power compared to its predecessors, but have energy requirements that will only increase by 50%.
*Calculations conducted by AMD Performance Labs as of September 18, 2020 for the AMD Instinct MI100 (32 GB 2nd generation High Bandwidth Memory [HBM2] PCIe card) accelerator at 1,502 MHz peak boost engine clock resulted in 11.54 tera floating point operations per second (TFLOPS) peak double precision (FP64), 46.1 TFLOPS peak single precision matrix (FP32), 23.1 TFLOPS peak single precision (FP32), and 184.6 TFLOPS peak half precision (FP16) theoretical floating-point performance. Published results on the NVIDIA Ampere A100 (40 GB) GPU accelerator resulted in 9.7 TFLOPS peak double precision (FP64), 19.5 TFLOPS peak single precision (FP32), and 78 TFLOPS peak half precision (FP16) theoretical floating-point performance. Server manufacturers may vary configuration offerings yielding different results.
Details:
Watch a video about the AMD and Azure collaboration featuring Jason Zander, Executive VP, Microsoft Azure, and Lisa Su, CEO of AMD.
**Source.
***Source.
****AMD EPYC 7002 Series processors have 45% more memory bandwidth than Intel Scalable processors in the same class.