
06 April, 2017

Q&A: The state of artificial intelligence in the Asia Pacific region

Dr Simon See.
TechTrade Asia (TTA) caught up with NVIDIA's Dr Simon See (SS), Director and Chief Solution Architect for the NVIDIA AI Tech Center and Professor at Shanghai Jiaotong University (SJTU) and the King Mongkut's University of Technology Thonburi (KMUTT), after his keynote at IoT Asia 2017 to discuss the state of artificial intelligence (AI) in the Asia Pacific region, and NVIDIA's role in furthering progress in the field. The company currently has over 1,000 people in China and more than 2,000 in India involved in research and development, and is boosting its research resources in Japan.

TTA: What's happening in the field of AI in the Asia Pacific region?

SS: It's been very exciting. In March, Fujitsu announced that it is using 24 NVIDIA DGX-1 AI systems to help build a supercomputer for RIKEN, Japan’s largest comprehensive research institution.

For AI, the leading country is China, followed by Japan. For computer vision and natural language processing, we're working with Baidu, WeChat, Tencent, Alibaba, iFlytek, and a lot of other companies in China. Some of these are doing natural language processing, some are doing computer vision. Tencent has an AI lab that is doing a lot of different projects.

There are also (related technologies) like robotics. China and Japan are into robotics big time. In China, a lot of companies are doing drones.

TTA: Computer vision, natural language processing, robotics and drones. How are these technologies applied in the real world?

SS: One technology which is pretty pervasive is intelligent video analytics – for surveillance. A lot of cities need to have cameras for surveillance, traffic control, and crowd control. One of the challenges is (monitoring the) video. A human can only monitor a few cameras at the same time. If you have cameras in 10,000 HDB flats, as in Singapore, there is no way for a human to watch them 24x7.

What companies are developing is technology, using AI, to ingest and digest the video and understand what it is telling them. If I have a video running right now, (and) I start to take out a bomb and start meddling with it, the AI will alert the head of police and say 'this guy is acting suspiciously'.

There's crowd control and people-counting. How do you control the crowd? These AIs use our technology. Hikvision, Cisco, Fujitsu and Huawei are all developing video surveillance applications based on NVIDIA technology.

TTA: What does it take to create an AI?

SS: If you want to do simple face recognition, you can make do with a workstation and a couple of GPUs; it has enough horsepower. But if you want to train a neural network (an AI), especially one that is very complex and intelligent enough to do very complex (work), you need to have a very big data centre.

You also need to have specialised systems optimised to develop AI. There are deep learning frameworks to develop AI - Japan tends to use software called Chainer; in China it's Caffe and TensorFlow; and Singapore prefers Microsoft CNTK (Cognitive Toolkit) and TensorFlow. In order to run these effectively you need an optimised platform: a lot of servers that consume a lot of power, shrunk into a reasonable amount of space. That's why NVIDIA is building the DGX-1 to target AI training. That's just the training part.
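What "training a neural network" means can be sketched in miniature. The following is not how Chainer, Caffe, TensorFlow or CNTK are implemented; it is a toy NumPy illustration, under the assumption that fitting a tiny two-layer network to the XOR function by gradient descent captures the basic idea of training. Real workloads repeat this loop over millions of parameters and examples, which is why the GPU data centres described above are needed.

```python
import numpy as np

# Toy "training" run: fit a tiny network to XOR by gradient descent.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialised weights for a 2 -> 8 -> 1 network.
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = np.tanh(X @ W1 + b1)        # hidden layer
    return h, sigmoid(h @ W2 + b2)  # output layer

_, p0 = forward(X)
initial_loss = float(np.mean((p0 - y) ** 2))

for step in range(5000):
    h, p = forward(X)
    dp = (p - y) * p * (1 - p)       # gradient through MSE + sigmoid
    dW2 = h.T @ dp; db2 = dp.sum(0)
    dh = (dp @ W2.T) * (1 - h ** 2)  # gradient through tanh
    dW1 = X.T @ dh; db1 = dh.sum(0)
    W1 -= 0.5 * dW1; b1 -= 0.5 * db1  # gradient descent step
    W2 -= 0.5 * dW2; b2 -= 0.5 * db2

_, p = forward(X)
final_loss = float(np.mean((p - y) ** 2))
print(final_loss < initial_loss)  # training reduces the error
```

The expensive part is that every step touches every weight; scaling the same loop up to deep networks is what training accelerators parallelise.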

TTA: What comes after the AI is trained?

SS: The inference part comes after that. When you send a picture to be (analysed), that is not training, that is prediction. They'll have an inference engine that uses different types of technologies. It uses GPUs, but doesn't need the same kind of horsepower as with training. That's why we created the Tesla P4 and P40 deep learning inference accelerators. These are low-energy and have pretty high performance. You can put a lot of GPUs into one small server so a lot of searches can be done.
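The distinction above can be made concrete: inference is a single forward pass through already-trained weights, with no gradients to compute. The sketch below uses random placeholder weights (no real trained model) and casts them to float16 only to mimic the reduced-precision arithmetic that inference accelerators like the P4/P40 exploit; the actual hardware uses its own precision modes.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder weights standing in for a model trained on a big system.
W = rng.normal(0, 0.1, (4, 3)).astype(np.float16)
b = np.zeros(3, dtype=np.float16)

def predict(x):
    """One forward pass: no gradients, so far cheaper than training."""
    scores = x.astype(np.float16) @ W + b
    e = np.exp(scores - scores.max())  # softmax, numerically stabilised
    return e / e.sum()

probs = predict(np.array([0.2, 0.5, 0.1, 0.9]))
print(probs)  # class probabilities for one input
```

Because each prediction is independent and cheap, many such calls can be batched onto one small server, which is the "a lot of searches" point above.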

Once we train a neural network on a big system we can reduce it in size and put it into robots. The NVIDIA Jetson TX2 is made for embedded systems. It is extremely low-power and can be mounted in robots. You don't want to consume too much power as you want the robots to last for hours.

TTA: What other use cases do you see as appropriate for robots?

SS: Robots might construct a building or build a bridge. Waste management - how do you manage your waste using AIs? Humans don't want to do that work, and we could use robots.

TTA: It's expensive, though.

SS: As time goes by the price will come down, as it did for the mobile phone. Our phones are now more powerful than a Cray of 20 years ago. As more people adopt it, the price will come down and it'll become more pervasive.

TTA: What could hinder wider adoption of AI? Is there a skills shortage for instance?

SS: That's why PhD graduates in AI command US$150,000 to US$200,000 a year. There is huge demand and not many people who know about it. A lot of countries are racing to produce engineers and scientists, like Singapore which just announced that it will train 10,000 public sector personnel on data science.

A lot of things are still a black art. There is a lot of guesswork but it is getting more mature. Think of what it was like in the early days of the Internet for people who knew how to do web pages and Java programming; AI is now at that stage. The question right now is: are we doing enough to train people?

TTA: Is NVIDIA involved in providing more people with the right skills?

SS: NVIDIA has the Deep Learning Institute (DLI), which offers lab work. Students have instructor-led classes online, from how to develop an application to developing a neural network. DLI has been rolling out in Taiwan, Korea, Japan, China, and Australia. In Malaysia we are working with the Malaysia Digital Economy Corporation (MDEC) for DLI and there'll be more announcements coming out soon. We have aggressive targets to train thousands of people so that they're capable of doing deep learning development. We offer basic and intermediate courses now that allow students to make simple apps, after which they can move on to developing some of the AI apps.

At the same time, we're working with the Singapore Economic Development Board to develop new engineers and scientists in this area. We will soon have something with the Government Technology Agency of Singapore (GovTech).

TTA: There are some fears that AIs can replace jobs. What are your thoughts on this?

SS: When we replace something, we will create something. When we created cars, horses were no longer needed, but we created jobs for people to make cars. It is disruptive. The important thing for policy makers is not to think, 'I need to protect these jobs' but to train people to take up new skills. An AI may be your assistant in the future. It may be able to advise you, to give suggestions. It'll be more of augmentation than replacement.

For example, it's not going to replace lawyers. (US universities have developed an AI) to predict case outcomes – if you sue someone, what is the probability of winning? There are so many factors behind it. We can now submit a case and the machine will return the probability of winning the litigation. In future it may be able to give suggestions on what you should do; the lawyer has an assistant lawyer.

TTA: Where next for AI?

SS: AI is command-based or predictive today. For example, air conditioning units can automatically adjust room temperature according to the number of people inside the room. What you want is suggestive AI – a computer that notices that you are down and suggests soothing music, or if you are very tired, suggests dimming the lights and turning on the TV.

There's also the whole area of 'explainable' AI. We take a picture right now, and with natural language processing the AI is able to say 'this girl is jumping over a bar'. We are not able to explain why it is concluding this. It is a black box, and that is unacceptable in certain industries - for example, you must be able to explain to me that you see a girl because she is wearing a dress. This is a whole area of research. If a car gets into an accident, we would be able to ask 'why did you make this decision?' and the car would be able to answer, 'There was a cat on the left, and a rock on the right, and unfortunately I banged into the rock'.

Another concept is called transfer learning, and this is in its infancy. In my keynote at IoT Asia you saw multiple robots learning different things, which they will ultimately transfer to one another to improve their intelligence. We are still in the very early stages of transfer learning, and not at the production stage yet. This is something new.
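The idea of transfer learning described above can be reduced to its simplest form: knowledge learned by one model becomes the starting point for another. The robots, tasks and weight values below are entirely hypothetical placeholders; copying a learned weight matrix is only a stand-in for the much richer transfer Dr See describes.

```python
import numpy as np

rng = np.random.default_rng(2)

# "Robot A" learns a feature extractor on its task. Random values stand in
# here for weights that would really come from training on experience.
W_robot_a = rng.normal(0, 1, (10, 4))

# "Robot B" transfers those weights as its starting point for a related
# task, instead of beginning from a random initialisation.
W_robot_b = W_robot_a.copy()

x = rng.normal(0, 1, 10)
features = x @ W_robot_b  # Robot B immediately has Robot A's features
print(features.shape)
```

In practice Robot B would then continue training from this starting point, which is why transfer can shorten learning rather than eliminate it.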

TTA: What's NVIDIA doing to get AI to the next level?

SS: A single organisation doesn't have the capability to do it. We're collaborating with a lot of research institutes. The Cancer Distributed Learning Environment (CANDLE) project is with several organisations - the National Cancer Institute, the US Department of Energy and several US laboratories. We can only do one part of it. We need to have a collaborative environment.

We are building systems and software that allow the open source community to work together. NVIDIA works closely with OpenAI, (editor's note: a non-profit AI research company, associated with Elon Musk) in terms of developing software to help them. Geoffrey Hinton is Chief Scientific Adviser of the Vector Institute (editor's note: another non-profit research institution dedicated to the field of AI). There is no centralised effort. Everyone is taking a different approach and we're still learning.

We are only scratching the surface. In two years' time the whole algorithm scene will have changed, and we are going to design chips and infrastructure that are able to support those new algorithms.

TTA: Could the end of Moore's Law* be an obstacle to this?

SS: We have slowed down. It is difficult to squeeze more juice out of a single core. NVIDIA's latest board has 3,800 cores. We don't intend to increase the frequency, but we are increasing the number of cores and designing new semiconductor technology so we can increase cores without increasing the power consumption. The algorithms run in a massively parallel manner. Parallelism is the way. All the chip designers right now are going into multicore technology.
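The data parallelism described above can be illustrated with a toy example: split independent pieces of work across many workers and combine the results. Here a handful of Python threads stand in for the thousands of GPU cores Dr See mentions; CPU threads will not show GPU-scale speedups, so this only shows the pattern, not the performance.

```python
from concurrent.futures import ThreadPoolExecutor

def dot_chunk(args):
    """Each worker computes the dot product of one independent chunk."""
    a, b = args
    return sum(x * y for x, y in zip(a, b))

def parallel_dot(a, b, workers=4):
    """Split the vectors into chunks, process them concurrently, combine."""
    step = max(1, len(a) // workers)
    chunks = [(a[i:i + step], b[i:i + step]) for i in range(0, len(a), step)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(dot_chunk, chunks))

a = list(range(1000))
b = list(range(1000))
result = parallel_dot(a, b)
print(result == sum(x * y for x, y in zip(a, b)))  # same answer as sequential
```

Neural network workloads are dominated by exactly this kind of independent multiply-accumulate work, which is why adding cores helps more than raising clock frequency.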

TTA: Where do you see quantum computing in the AI game?

SS: The newest quantum computer from D-Wave (costs) US$10 million for one machine. On paper it is extremely good - the question right now is whether we can bring the cost down so it can be used as a pervasive platform. It may be more powerful, but a GPU costs you a few hundred dollars, and you can stack them up. For US$10 million you can build a lot of GPU clusters.


*Moore's Law, named after Intel co-founder Gordon Moore, states that the number of transistors on an integrated circuit doubles approximately every two years.
