
28 February, 2022

Meta announces advances in AI initiatives

Source: Meta live stream on Facebook. Mark Zuckerberg provides updates on Meta's work on AI.


Mark Zuckerberg, founder and CEO of Meta, has announced his vision for how artificial intelligence (AI) will help build use cases for the metaverse. According to Zuckerberg, AI will be needed to help people navigate both virtual and physical worlds, and innovations such as teaching AI to learn in context, the way humans do, will be key to achieving this.

Much of the research was initiated years ago, he noted. "We're building technology to help people feel closer in all kinds of new ways, that’s our DNA," he said. "We build new technology so you can interact with the people you care about."

At the Meta AI: Inside the Lab event, Zuckerberg demonstrated projects such as Builder Bot, which allows users to "create nuanced worlds to explore and share the experience with others with just your voice", whether it is having a picnic, adding islands, or playing tropical music on speakers.

He also introduced multimodal AI, which extends the work on text, such as predicting how a sentence might end, to images, such as predicting what an image shows from a scattering of pixels. Zuckerberg also described self-supervised learning (SSL), in which an AI is given raw data and learns from it as it goes, in contrast to supervised learning, where it is trained on curated, labelled data in a formal database.

AI systems that are built on SSL can typically manage very different tasks, as opposed to supervised learning-based systems, which specialise.
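As a rough illustration of that contrast, the toy sketch below (a minimal PyTorch example, not Meta's training code) trains one model on curated, human-labelled examples and another purely by predicting tokens that were masked out of raw sequences, so the targets come from the data itself. The vocabulary size, model dimensions and data are all made-up assumptions for the example.

```python
import torch
import torch.nn as nn

VOCAB, DIM, SEQ = 100, 32, 8

# Supervised learning: the model is trained on curated examples paired
# with human-provided labels.
classifier = nn.Sequential(
    nn.Embedding(VOCAB, DIM), nn.Flatten(), nn.Linear(DIM * SEQ, 2)
)
inputs = torch.randint(1, VOCAB, (4, SEQ))    # curated examples
labels = torch.tensor([0, 1, 1, 0])           # labels from a formal dataset
supervised_loss = nn.functional.cross_entropy(classifier(inputs), labels)

# Self-supervised learning: the model only sees raw sequences and learns
# by predicting tokens that were hidden from it, so the data labels itself.
encoder = nn.Embedding(VOCAB, DIM)
decoder = nn.Linear(DIM, VOCAB)
raw = torch.randint(1, VOCAB, (4, SEQ))       # raw, unlabelled data
masked = raw.clone()
mask = torch.zeros_like(raw, dtype=torch.bool)
mask[:, ::4] = True                           # hide every fourth token
masked[mask] = 0                              # token id 0 acts as a [MASK] token
logits = decoder(encoder(masked))             # (batch, seq, vocab)
ssl_loss = nn.functional.cross_entropy(logits[mask], raw[mask])

print(float(supervised_loss), float(ssl_loss))
```

The self-supervised objective never needs a labelled dataset, which is why models trained this way can be exposed to far more data and reused across very different tasks.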

"SSL now outperforms many other methods for images and video," he said. "We think it’s going to be an important tool for the metaverse."

At the Meta AI: Inside the Lab event last week, Meta announced:

Driving inclusion through the power of speech and translation

No Language Left Behind is working to create a single system capable of translating between all written languages, breaking down barriers for nearly half the world’s population whose languages are not available online. The aim is to support hundreds of languages.

Zuckerberg explained that many AI models are trained in English, which can add noise and imprecision to translations. This open source AI model translates many languages without having to go through English as an intermediary.
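To illustrate what translating directly, rather than pivoting through English, looks like in practice, the sketch below uses Meta's earlier openly released many-to-many model, M2M-100, via the Hugging Face transformers library; the checkpoint name and the Hindi-to-French example follow that library's documentation and are stand-ins here, not part of No Language Left Behind itself.

```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

# M2M-100 already translates directly between non-English language pairs;
# it is used here only as an illustration of translation without an
# English pivot. NLLB itself is not shown.
model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")

hindi_text = "जीवन एक चॉकलेट बॉक्स की तरह है।"   # "Life is like a box of chocolates."

tokenizer.src_lang = "hi"                        # source language: Hindi
encoded = tokenizer(hindi_text, return_tensors="pt")
generated = model.generate(
    **encoded,
    forced_bos_token_id=tokenizer.get_lang_id("fr"),   # target language: French
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```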

"We have the chance to improve the Internet and set a new standard on communicating with each other," he said.

Meta also aims to create a Universal Speech Translator, an AI system that provides instant speech-to-speech translation across all languages, even those that are primarily spoken rather than written.

"The ability to communicate in any language, that’s a superpower that people have dreamed of for forever," Zuckerberg commented.

A next-generation AI model for chatting with virtual assistants

Project CAIRaoke is a new approach to conversational AI, the technology that powers chatbots and assistants. The work could one day enable people to enjoy more comprehensive assistance and have more natural conversations and interactions with their devices.

A new resource for understanding how AI systems work

This prototype AI system card tool outlines the many AI models that make up an AI system and can help people better understand how these systems operate.

New ways to bring diverse talent into AI

The AI Learning Alliance is making coursework on machine learning topics open to everyone and creating a consortium of professors at universities with large populations of students from under-represented groups; these professors will teach the curriculum.

Open-sourcing high-performance AI for recommendations

TorchRec is Meta's library for building state-of-the-art recommendation systems for the open source PyTorch machine learning framework. These recommendation systems power personalisation across many of Meta's products.
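As a minimal sketch of what the library provides, the example below defines a pooled embedding-table collection for two sparse (categorical) features and looks up embeddings for a small batch; the table names, sizes and feature values are illustrative assumptions, not anything from Meta's production systems.

```python
import torch
import torchrec

# Two pooled embedding tables, one per sparse feature. Sparse embedding
# lookups like these are the core building block of recommendation models.
ebc = torchrec.EmbeddingBagCollection(
    device="cpu",
    tables=[
        torchrec.EmbeddingBagConfig(
            name="product_table", embedding_dim=16,
            num_embeddings=4096, feature_names=["product"],
        ),
        torchrec.EmbeddingBagConfig(
            name="user_table", embedding_dim=16,
            num_embeddings=4096, feature_names=["user"],
        ),
    ],
)

# A batch of two examples encoded as a jagged tensor: example 0 has
# products [11, 12] and user [7]; example 1 has product [13] and users [8, 9].
features = torchrec.KeyedJaggedTensor(
    keys=["product", "user"],
    values=torch.tensor([11, 12, 13, 7, 8, 9]),
    lengths=torch.tensor([2, 1, 1, 2]),
)

pooled = ebc(features)                      # pooled embedding per feature
print(pooled.to_dict()["product"].shape)    # (batch_size, embedding_dim)
```

In a full recommendation model, these pooled embeddings would be combined with dense features and fed into further layers, and TorchRec's sharding utilities spread the large embedding tables across devices for training at scale.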
