
Tuesday, 15 July 2014

NVIDIA processors power tomorrow's real-life applications

There’s been a quiet revolution in the high performance computing sector in the past few years, and it may well extend to enterprises soon. NVIDIA®, a pioneer in the art and science of visual computing, has seen a growing number of its processors adopted by supercomputers on the TOP500 Project’s list of the world’s 500 most powerful systems.

The latest TOP500 list, issued in June 2014, notes that the No. 2 system, Titan, and the No. 6 system, Piz Daint, use NVIDIA graphics processing units (GPUs) to accelerate computation, while 44 of the 62 systems on the list that use accelerator/co-processor technology use NVIDIA chips. On the Green500 list, also released in June 2014, all of the top 15 most energy-efficient supercomputers are accelerated with NVIDIA Kepler K20 GPUs.

Wait, you might say. Graphics processors accelerate computation? That’s right. Companies such as Adobe, Autodesk, MathWorks and Wolfram Research are now making use of the GPU for general-purpose scientific and engineering computing across a range of platforms.

It turns out that the central processing unit (CPU), traditionally recognised as the brain of the computer, is optimised for single-threaded performance: instructions processed one after another. The GPU, on the other hand, has the advantage of processing many instructions simultaneously, in parallel. Software that is specially written for GPUs can run the sequential parts of its workload on the CPU while handing the parallel parts to the GPU, as the sketch below illustrates.
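To make that division of labour concrete, here is a minimal CUDA C++ sketch (our own illustration, not code from NVIDIA; the kernel name addVectors and the sizes are arbitrary). Everything in main() runs sequentially on the CPU, while the function marked __global__ runs across thousands of GPU threads at once, each adding a single pair of numbers:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Runs on the GPU: each thread adds one pair of elements, in parallel.
__global__ void addVectors(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                    // a million elements
    const size_t bytes = n * sizeof(float);

    // Sequential setup on the CPU: allocate GPU memory.
    float *a, *b, *c;
    cudaMalloc((void **)&a, bytes);
    cudaMalloc((void **)&b, bytes);
    cudaMalloc((void **)&c, bytes);
    // (A real program would fill a and b via cudaMemcpy here.)

    // Parallel work on the GPU: one thread per element, 256 per block.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    addVectors<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();                  // wait for the GPU to finish

    cudaFree(a); cudaFree(b); cudaFree(c);
    printf("Launched %d blocks of %d threads\n", blocks, threads);
    return 0;
}
```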

While GPUs still render graphics ever more realistically over time, they have also grown in number-crunching capability. A GPU from NVIDIA can handle a teraflop, or a million million (10¹²) floating-point operations per second (flops), and instead of handling graphical pixels for those operations, it can process dollars in finance, or chemical interactions in science; anything, really.
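To see where such a figure comes from, a back-of-the-envelope calculation (the Kepler K20X specifications are published; the arithmetic is ours): peak throughput is roughly cores × clock speed × operations per cycle. A Kepler K20X has 2,688 CUDA cores running at 732 MHz, and a fused multiply-add counts as two floating-point operations, so 2,688 × 0.732 GHz × 2 ≈ 3,935 gigaflops, or about 3.9 teraflops of single-precision performance.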

“While CPUs have continued to increase in performance, so have GPUs. We continue to push the technical limits of where the GPU is going in the future,” said Marc Hamilton, VP of Solution Architecture & Engineering at NVIDIA, during NVIDIA’s GPU Technology Conference South East Asia Workshop 2014 on 10 July in Singapore. “Traditional CPU power simply hasn't scaled. Today, top-end GPUs can run more than 3,000 CUDA cores in parallel.”



CUDA, or Compute Unified Device Architecture, is NVIDIA’s parallel computing platform and programming model. It has been included in NVIDIA GPUs for the past six to seven years, and is how applications take advantage of the GPU’s parallel power. Companies with huge volumes of cloud data are likely to need such computing power, Hamilton said.
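In practice, a CUDA kernel looks like an ordinary C function marked __global__, and each thread uses its block and thread indices to pick out its share of the data. The sketch below (a hedged illustration of the programming model, not NVIDIA sample code; scaleArray is our own name) uses the common grid-stride idiom, so the same kernel scales to however many cores a given GPU offers:

```cuda
#include <cuda_runtime.h>

// Grid-stride loop: each thread starts at its own global index and hops
// forward by the total number of threads, so an array of any size can be
// spread across however many cores the GPU actually has.
__global__ void scaleArray(float *data, float factor, int n) {
    int stride = blockDim.x * gridDim.x;
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n; i += stride) {
        data[i] *= factor;
    }
}

int main() {
    const int n = 1 << 22;                    // four million elements
    float *data;
    cudaMalloc((void **)&data, n * sizeof(float));
    scaleArray<<<128, 256>>>(data, 2.0f, n);  // 32,768 threads share the work
    cudaDeviceSynchronize();
    cudaFree(data);
    return 0;
}
```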



Hamilton noted that a 2012 experiment by Google to teach a type of computer model called a neural network to learn by watching YouTube videos used 1,000 CPU-based servers with 2,000 CPUs, which provided 16,000 cores. One node of that network learned to recognise cats, all by itself. The Stanford AI Lab recreated the experiment in 2013 with NVIDIA GPUs, and the difference in numbers is staggering: just three GPU-accelerated servers with 12 GPUs provided 18,432 cores, powering a neural network with 9 million nodes. The costs are just as starkly different. Google’s hardware cost US$5 million and required 600 kilowatts of electricity; Stanford’s cost US$33,000 and used 4 kilowatts. That is roughly 150 times cheaper, drawing 150 times less power.

GPU architecture continues to evolve. In March 2014, NVIDIA announced NVIDIA NVLink, a faster way to exchange information between chips that will let GPUs and CPUs share data five to 12 times faster than they can today. The innovation paves the way for systems which are 50 to 100 times faster than today's most powerful hardware. The company will also ship the Pascal chip in 2016 to provide more memory in the same space. And the latest incarnation of CUDA, CUDA 6, has been designed with more ways to accelerate computing, including unified memory, a way for the GPU and the CPU to share a single pool of data. In the past, a copy of the same data had to be stored once for the GPU and once for the CPU.
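The difference unified memory makes can be seen in a few lines. In this minimal sketch (error handling omitted; the increment kernel is our own example), a single cudaMallocManaged allocation, which CUDA 6 introduced, is visible to both the CPU and the GPU, so the explicit copies that older CUDA versions required simply disappear:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

__global__ void increment(int *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1;
}

int main() {
    const int n = 1024;
    int *data;
    // One allocation, one pointer, shared by CPU and GPU: no cudaMemcpy.
    cudaMallocManaged(&data, n * sizeof(int));

    for (int i = 0; i < n; ++i) data[i] = i;  // the CPU writes the input...
    increment<<<4, 256>>>(data, n);           // ...the GPU updates it in place...
    cudaDeviceSynchronize();
    printf("data[0] = %d\n", data[0]);        // ...and the CPU reads the result.

    cudaFree(data);
    return 0;
}
```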

Such high-end applications have nothing to do with real life, surely? As it happens, NVIDIA chips are found in Tesla’s cars as well, and can be used for automated driving applications such as detecting obstacles on the road, driverless operation, and automatic parking.
 

The NVIDIA Tegra K1 mobile processor has 192 supercomputer-class GPU cores and already runs Unreal Engine 4, Epic's game engine for powering next-generation games. “You would need eight NVIDIA Tegra K1s to drive a car today, but by the end of the decade we would have it down to a single mobile processor. Think about being able to get out of your car, pushing a button on your smartphone and watching it park,” Hamilton said. “Over 5.1 million cars on the road have an NVIDIA processor in the car today.”

And for the cloud, NVIDIA has developed NVIDIA GRID, which delivers GPU-accelerated games and applications through the cloud to any user, with higher-quality graphics and more swiftly than was previously possible. At the Singapore event, the company announced that a GRID Test Drive is now available for the Southeast Asia and Australia markets.

 
One is a photograph; but which? NVIDIA technology can render pictures so photorealistically, you can't tell.

But what about using NVIDIA for graphics? NVIDIA continues to innovate for professional graphics applications, too. With iray®, NVIDIA's GPU-accelerated, physically correct, photorealistic rendering solution, it is hard today to tell which image is a photograph and which is a rendered graphic. An iray demonstration at the Workshop showed how different combinations of hardware affect the quality of rendering, and all of the video was delivered over NVIDIA GRID without appreciable delay.




When more numbers can be crunched more quickly, all sorts of new applications become possible. Some will be scientific and will require supercomputers. But others will simply make our lives easier. Look out for the NVIDIA name under the lid of the technology. You’ll see it more often in the coming years.
