NVIDIA has been releasing foundation models and blueprints to accelerate AI adoption in multiple industries. According to the company, the next AI frontier is physical AI.
*Source: NVIDIA blog post. NVIDIA Cosmos pretrained world foundation models are designed to generate high-fidelity virtual environments for physical AI development.*
Physical AI models can understand instructions and perceive, interact and perform complex actions in the real world to power autonomous machines like robots and self-driving cars, NVIDIA explained. To create such models, training must be conducted in simulation environments to convey concepts such as gravity, friction, and inertia, together with geometric and spatial relationships, and the principles of cause and effect.
At CES the company announced Mega, an Omniverse blueprint for developing, testing and optimising physical AI and robot fleets at scale before real-world deployment. Mega offers enterprises a reference architecture of NVIDIA accelerated computing, AI, NVIDIA Isaac and NVIDIA Omniverse technologies to develop and test digital twins.
The digital twins are used to test AI-powered robot brains, video analytics AI agents, equipment and more as they handle enormous complexity and scale. The new framework brings software-defined capabilities to physical facilities, enabling continuous development, testing, optimisation and deployment.
Also announced at CES was NVIDIA Cosmos, a platform of state-of-the-art generative world foundation models, advanced tokenisers, guardrails and an accelerated video processing pipeline. Developers can use Omniverse to create 3D scenarios, then feed the outputs into Cosmos to generate controlled videos and variations. This can accelerate the development of physical AI systems such as autonomous vehicles and robots by rapidly generating exponentially more training data covering a variety of environments and interactions, NVIDIA explained.
The NVIDIA AI blueprint for retail shopping assistants was also announced in January. This generative AI reference workflow is designed to transform shopping experiences online and in stores.
Built on the NVIDIA AI Enterprise and NVIDIA Omniverse platforms, this blueprint helps developers create AI-powered digital assistants that work alongside and support human workers. Using NVIDIA NeMo microservices provided within the blueprint, these shopping assistants can understand text- and image-based prompts, search for multiple items simultaneously, complete complicated tasks such as creating a travel wardrobe, and answer contextual questions like whether a product is waterproof.
Developers can even use the Omniverse platform in conjunction with a spatial-scanning solution to enable AI agents to present products in physically accurate virtual environments. For example, customers looking to buy a couch could preview how the furniture would look in their own living room, NVIDIA said.
In the field of healthcare, NVIDIA has partnered with IQVIA, a provider of clinical research services, commercial insights and healthcare intelligence operating in over 100 countries, to build custom foundation models and agentic AI workflows that can accelerate research, clinical development and access to new treatments.
AI applications trained on the organisation’s healthcare-specific information and guided by its deep domain expertise will help the industry boost the efficiency of clinical trials and optimise planning for launches of therapies and medical devices.