Jensen Huang, CEO and founder, NVIDIA, reveals how the CUDA-X libraries are structured at GTC.
Data scientists can expect speed boosts of as much as 50x for machine learning and data science workloads with NVIDIA’s new CUDA-X artificial intelligence (AI) libraries. Introduced today at NVIDIA’s GPU Technology Conference (GTC), CUDA-X AI is, the company said, the only end-to-end platform for the acceleration of data science.
“All the software is free, available on the NGC platform, our container registry, tested and validated,” said Ian Buck, VP and GM of Accelerated Computing, in a media pre-brief session.
CUDA-X AI consists of more than a dozen specialised acceleration libraries. CUDA-X accelerates:
● Data science from data ingest, through extract, transform and load (ETL), to model training and deployment.
● Machine learning algorithms for regression, classification and clustering.
● Deep learning training frameworks, automatically optimising them for NVIDIA Tensor Core GPUs.
● Inference and large-scale Kubernetes deployments in the cloud.
● Data science on premises, in the cloud and in enterprise data centres.
● Data science in Amazon Web Services (AWS), Google Cloud and Microsoft Azure AI services.
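The pipeline stages listed above — ingest, ETL, training, deployment — follow the same shape whether they run on CPU or GPU; the RAPIDS components of CUDA-X AI expose APIs that closely mirror the familiar CPU tools. A minimal sketch of those stages, using pandas and scikit-learn as CPU stand-ins and an in-memory synthetic CSV in place of a real data source:

```python
import io
import pickle

import pandas as pd
from sklearn.linear_model import LogisticRegression

# Ingest: read raw data (an in-memory CSV standing in for a real source).
raw = io.StringIO(
    "feature_a,feature_b,label\n"
    "1.0,0.2,0\n0.1,0.9,1\n0.9,0.1,0\n0.2,1.1,1\n"
)
df = pd.read_csv(raw)

# ETL: a simple transform step (rescale one feature column).
df["feature_a"] = df["feature_a"] * 2.0

# Train: fit a classifier on the transformed features.
model = LogisticRegression().fit(df[["feature_a", "feature_b"]], df["label"])

# Deploy: serialise the trained model, as a serving system would.
blob = pickle.dumps(model)
restored = pickle.loads(blob)
preds = restored.predict(df[["feature_a", "feature_b"]]).tolist()
print(preds)
```

This is only the conventional CPU version of the workflow; the point of CUDA-X AI is that each stage (cuDF for ingest and ETL, cuML for training) accelerates the same shape of code on the GPU.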
CUDA-X AI is already accelerating data analysis with the cuDF GPU DataFrame library for manipulating data; deep learning primitives with the NVIDIA CUDA Deep Neural Network library (cuDNN); machine learning algorithms with cuML, GPU-accelerated versions of common machine learning algorithms; and data loading with the NVIDIA Data Loading Library (DALI), among others. Together, these libraries accelerate every step in a typical AI workflow.
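cuDF deliberately mirrors the pandas DataFrame API, so much existing ETL code ports by little more than changing an import. A minimal sketch — the pandas version below runs on CPU, and the comment shows the one-line swap a GPU machine with RAPIDS installed would use (the data here is made up for illustration):

```python
import pandas as pd  # on a GPU machine with RAPIDS: `import cudf as pd`

# A small frame of transactions to aggregate.
df = pd.DataFrame({
    "account": ["a", "b", "a", "b", "a"],
    "amount": [10.0, 5.0, 2.5, 7.5, 1.0],
})

# Typical ETL: boolean filter, then group-and-aggregate.
# cuDF supports these same filtering and groupby operations.
large = df[df["amount"] > 2.0]
totals = large.groupby("account")["amount"].sum().sort_index()
print(totals.to_dict())  # {'a': 12.5, 'b': 12.5}
```

Because the API surface matches, teams can prototype on a laptop with pandas and move the same code to cuDF when the data outgrows the CPU.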
CUDA-X AI is integrated into major deep learning frameworks such as TensorFlow, PyTorch and MXNet, and is already in use by companies such as Microsoft, PayPal and SAS. The libraries can be deployed everywhere: on desktops, workstations, servers and cloud computing platforms.
Explore:
CUDA-X AI acceleration libraries are freely available as individual downloads or as containerised software stacks from the NVIDIA NGC software hub.
The software is integrated into the new data science workstations also announced at GTC today, and the new NVIDIA T4 servers announced at the same time are optimised to run CUDA-X AI.
Download CUDA-X AI
Hashtag: #GTC19
*NVIDIA sponsored transport and accommodation for GTC.