
Monday, 10 June 2024

NVIDIA brings the world closer to AI everywhere

NVIDIA founder and CEO Jensen Huang made a number of data centre and enterprise announcements during his COMPUTEX keynote address.

Revolutionary model development

The world’s 28 million developers can now download NVIDIA NIM — inference microservices that provide models as optimised containers — to deploy on clouds, data centres or workstations, giving them the ability to easily build generative AI applications for copilots, chatbots and more, in minutes rather than weeks. 

These new generative AI applications are becoming increasingly complex and often utilise multiple models with different capabilities to generate text, images, video, speech, and more. NVIDIA NIM increases developer productivity by providing a simple, standardised way to add generative AI to applications.

NIM also enables enterprises to maximise their infrastructure investments. For example, running Meta Llama 3-8B in a NIM produces up to 3x more generative AI tokens on accelerated infrastructure than without NIM. This lets enterprises boost efficiency and use the same amount of compute infrastructure to generate more responses. 

Nearly 200 technology partners — including Cadence, Cloudera, Cohesity, DataStax, NetApp, Scale AI and Synopsys — are integrating NIM into their platforms to speed generative AI deployments for domain-specific applications, such as copilots, code assistants and digital human avatars.

“Every enterprise is looking to add generative AI to its operations, but not every enterprise has a dedicated team of AI researchers,” said Jensen Huang, founder and CEO of NVIDIA. 

“Integrated into platforms everywhere, accessible to developers everywhere, running everywhere — NVIDIA NIM is helping the technology industry put generative AI in reach for every organisation.”

Enterprises can deploy AI applications in production with NIM through the NVIDIA AI Enterprise software platform. They can use NIM to run applications for generating text, images and video, speech and digital humans. Starting in July, members of the NVIDIA Developer Program can access NIM for free for research, development and testing on their preferred infrastructure.

Developers can now access NVIDIA NIM microservices for Meta Llama 3 models from the Hugging Face AI platform. This lets developers easily access and run the Llama 3 NIM in just a few clicks using Hugging Face Inference Endpoints, powered by NVIDIA GPUs on their preferred cloud. 

With new NVIDIA ACE NIM microservices, developers can also easily build and operate interactive, lifelike digital humans in applications for customer service, telehealth, education, gaming and entertainment. 

NIM containers are prebuilt to speed model deployment for GPU-accelerated inference and can include NVIDIA CUDA software, NVIDIA Triton Inference Server and NVIDIA TensorRT-LLM software.
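As a rough illustration of what a "prebuilt container" means in practice, the helper below assembles a typical `docker run` invocation for a GPU-accelerated inference container. The image tag, port and cache path are hypothetical placeholders based on common container conventions, not commands documented in this announcement.

```python
def nim_docker_command(image: str, port: int = 8000,
                       cache_dir: str = "~/.cache/nim") -> list[str]:
    """Assemble a docker run argv for a GPU-accelerated NIM-style container.

    The image tag, port mapping and cache mount here are illustrative
    placeholders; consult the container's own documentation for real values.
    """
    return [
        "docker", "run", "--rm",
        "--gpus", "all",                       # expose all host GPUs
        "-p", f"{port}:8000",                  # map the container's HTTP port
        "-v", f"{cache_dir}:/opt/nim/.cache",  # persist downloaded model weights
        image,
    ]

cmd = nim_docker_command("nvcr.io/nim/meta/llama3-8b-instruct:latest")
```

On a machine with Docker and the NVIDIA Container Toolkit installed, `cmd` could be passed to `subprocess.run` to start the microservice and serve inference over HTTP.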

Over 40 NVIDIA and community models are available to experience as NIM endpoints on ai.nvidia.com, including Databricks DBRX, Google’s open model Gemma, Meta Llama 3, Microsoft Phi-3, Mistral Large, Mixtral 8x22B and Snowflake Arctic.
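Hosted NIM endpoints follow the widely used OpenAI-compatible chat-completions convention, so calling a model typically amounts to a single HTTP POST. The sketch below assembles such a request; the endpoint URL, model identifier and header layout are assumptions based on that convention, not details confirmed in this announcement.

```python
import json

# Hypothetical OpenAI-compatible chat-completions endpoint for a hosted
# NIM model; the real URL and model id come from the provider's catalogue.
NIM_URL = "https://integrate.api.nvidia.com/v1/chat/completions"

def build_chat_request(model: str, prompt: str, api_key: str):
    """Assemble the URL, headers and JSON body for a chat-completions call."""
    headers = {
        "Authorization": f"Bearer {api_key}",   # bearer-token auth is assumed
        "Content-Type": "application/json",
    }
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
    }
    return NIM_URL, headers, json.dumps(body)

url, headers, payload = build_chat_request(
    "meta/llama3-8b-instruct", "Summarise NIM in one sentence.", "YOUR_API_KEY"
)
```

Posting `payload` to `url` with `headers` (for example via `urllib.request` or the `requests` library) would return a standard chat-completions style JSON response, given a valid API key.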

With NVIDIA BioNeMo NIM microservices for digital biology, researchers can build novel protein structures to accelerate drug discovery. Dozens of healthcare companies are deploying NIM to power generative AI inference across a range of applications, including surgical planning, digital assistants, drug discovery and clinical trial optimisation.

Platform providers including Canonical, Red Hat, Nutanix and VMware (acquired by Broadcom) are supporting NIM on open-source KServe or enterprise solutions. AI application companies Hippocratic AI, Glean, Kinetica and Redis are also deploying NIM to power generative AI inference.

Leading AI tools and machine learning operations (MLOps) partners — including Amazon SageMaker, Microsoft Azure AI, Dataiku, DataRobot, deepset, Domino Data Lab, LangChain, LlamaIndex, Replicate, Run.ai, Saturn Cloud, Securiti AI and Weights & Biases — have also embedded NIM into their platforms.

Global system integrators and service delivery partners Accenture, Deloitte, Infosys, Latentview, Quantiphi, SoftServe, Tata Consultancy Services (TCS) and Wipro have further created NIM competencies to help the world’s enterprises quickly develop and deploy production AI strategies.

Enterprises can run NIM-enabled applications virtually anywhere, including on NVIDIA-certified systems from global infrastructure manufacturers Cisco, Dell Technologies, Hewlett Packard Enterprise, Lenovo and Supermicro, as well as server manufacturers ASRock Rack, ASUS, GIGABYTE, Ingrasys, Inventec, Pegatron, QCT, Wistron and Wiwynn. NIM microservices have also been integrated into Amazon Web Services, Google Cloud, Azure and Oracle Cloud Infrastructure.

Industry leaders Foxconn, Pegatron, Amdocs, ServiceNow and Siemens are among the businesses using NIM for generative AI applications in manufacturing, healthcare, financial services, retail, customer service and more, NVIDIA added:

- Foxconn — the world’s largest electronics manufacturer — is using NIM in the development of domain-specific LLMs embedded into a variety of internal systems and processes in its AI factories for smart manufacturing, smart cities and smart electric vehicles.

- Pegatron — a Taiwanese electronics manufacturing company — is leveraging NIM for Project TaME, a Taiwan Mixtral of Experts model designed to advance the development of local LLMs for industries.

- Amdocs — a leading global provider of software and services to communications and media companies — is using NIM to run a customer billing LLM that significantly lowers the cost of tokens, improves accuracy by up to 30% and reduces latency by 80%, driving near real-time responses.

- ServiceNow — the AI platform for business transformation — announced earlier this year that it was one of the first platform providers to access NIM to enable fast, scalable and more cost-effective LLM development and deployment for its customers. NIM microservices are integrated within the Now AI multimodal model and are available to customers that have ServiceNow’s generative AI experience, Now Assist, installed.

- Siemens — a global technology company focused on industry, infrastructure, transport and healthcare — is integrating its operational technology with NIM microservices for shop floor AI workloads. It is also building an on-premises version of its Industrial Copilot for Machine Operators using NIM.

Explore

Developers can experiment with NVIDIA microservices at ai.nvidia.com at no charge. Enterprises can deploy production-grade NIM microservices with NVIDIA AI Enterprise running on NVIDIA-certified systems and leading cloud platforms. Starting in July, members of the NVIDIA Developer Program will gain free access to NIM for research and testing.

Industry joins NVIDIA to build AI factories and data centres

Source: NVIDIA Youtube channel, COMPUTEX keynote. Huang shares the partnerships just for Blackwell alone. Company logos listed onscreen.

NVIDIA and the world’s top computer manufacturers additionally unveiled an array of NVIDIA Blackwell architecture-powered systems featuring Grace CPUs, NVIDIA networking and infrastructure for enterprises to build AI factories and data centres.

ASRock Rack, ASUS, GIGABYTE, Ingrasys, Inventec, Pegatron, QCT, Supermicro, Wistron, and Wiwynn will deliver cloud, on-premises, embedded and edge AI systems using NVIDIA GPUs and networking.

NVIDIA supercharges Ethernet for generative AI

NVIDIA also announced widespread adoption of the NVIDIA Spectrum-X Ethernet networking platform as well as an accelerated product release schedule.

CoreWeave, GMO Internet Group, Lambda, Scaleway, STPX Global, and Yotta are among the first AI cloud service providers embracing NVIDIA Spectrum-X to bring extreme networking performance to their AI infrastructures. Several NVIDIA partners have announced Spectrum-based products, including ASRock Rack, ASUS, GIGABYTE, Ingrasys, Inventec, Pegatron, QCT, Wistron and Wiwynn, joining Dell Technologies, Hewlett Packard Enterprise, Lenovo, and Supermicro in incorporating the platform into their offerings.
