
Saturday, 19 May 2018

Microsoft adds to AI and edge development capabilities on Azure

At Microsoft Build 2018, Microsoft’s annual developer conference that was held earlier this month, Microsoft leaders showcased new technologies to help every developer be an artificial intelligence (AI) developer, on Microsoft Azure, Microsoft 365 and across any platform.

“The era of the intelligent cloud and intelligent edge is upon us,” said Satya Nadella, CEO, Microsoft. “These advancements create incredible developer opportunity and also come with a responsibility to ensure the technology we build is trusted and benefits all.”

As part of Microsoft’s commitment to trusted, responsible AI products and practices, the company announced AI for Accessibility, a US$25 million, five-year programme aimed at harnessing the power of AI to amplify human capabilities for the more than 1 billion people around the world with disabilities. The programme comprises grants, technology investments and expertise, and will also incorporate AI for Accessibility innovations into Microsoft Cloud services. It builds on the success of the similar AI for Earth initiative.

Microsoft further announced new capabilities for developers to extend AI to the edge:

Microsoft is open sourcing the Azure IoT Edge Runtime, allowing customers to modify and debug edge applications, and giving them greater transparency and control over how those applications run.

Custom Vision will now run on Azure IoT Edge, enabling devices such as drones and industrial equipment to take critical action quickly without requiring cloud connectivity. This is the first Azure cognitive service to support edge deployment, with more coming to Azure IoT Edge over the next several months.
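Deployments like this are declared in a JSON manifest that tells the Azure IoT Edge runtime which container modules to run on a device. A trimmed sketch is below; the custom module name and registry image are hypothetical placeholders (e.g. an exported Custom Vision classifier container), not official Microsoft artifacts:

```json
{
  "modulesContent": {
    "$edgeAgent": {
      "properties.desired": {
        "schemaVersion": "1.0",
        "runtime": { "type": "docker", "settings": { "minDockerVersion": "v1.25" } },
        "systemModules": {
          "edgeAgent": { "type": "docker", "settings": { "image": "microsoft/azureiotedge-agent:1.0" } },
          "edgeHub":   { "type": "docker", "settings": { "image": "microsoft/azureiotedge-hub:1.0" } }
        },
        "modules": {
          "customvision": {
            "type": "docker",
            "status": "running",
            "restartPolicy": "always",
            "settings": { "image": "myregistry.azurecr.io/customvision-classifier:latest" }
          }
        }
      }
    }
  }
}
```

The `$edgeAgent` desired properties pin the system modules and any user modules; pushing this manifest to a device causes the runtime to pull and run the listed containers locally, which is what lets the vision model keep classifying without cloud connectivity.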

DJI, the world’s biggest drone company, is partnering with Microsoft to create a new software development kit (SDK) for Windows 10 PCs, and it has also selected Azure as its preferred cloud provider to further its commercial drone and software-as-a-service (SaaS) solutions. The SDK will bring full flight control and real-time data transfer capabilities to nearly 700 million Windows 10 connected devices globally.

As part of the commercial partnership, DJI and Microsoft will co-develop solutions leveraging Azure IoT Edge and Microsoft’s AI services to enable new scenarios across agriculture, construction, public safety and more.

Microsoft announced a joint effort with Qualcomm Technologies to create a vision AI developer kit running Azure IoT Edge. This solution makes available the key hardware and software required to develop camera-based IoT solutions.

Developers can create solutions that use Azure Machine Learning services and take advantage of the hardware acceleration available via the Qualcomm Vision Intelligence Platform and Qualcomm AI Engine. The camera can also power advanced Azure services, such as machine learning, stream analytics and cognitive services, that can be downloaded from the cloud to run locally on the edge.

Microsoft additionally announced Project Kinect for Azure, a package of sensors, including its new depth camera, with onboard compute designed for AI on the edge. Building on Kinect’s legacy that has lived on through HoloLens, Project Kinect for Azure empowers new scenarios for developers working with ambient intelligence.

Combining Microsoft’s Time-of-Flight sensor with additional sensors in a small, power-efficient form factor, Project Kinect for Azure will leverage the richness of Azure AI to dramatically improve insights and operations. It supports fully articulated hand tracking and high-fidelity spatial mapping, enabling a new level of precision in solutions.

A Speech Devices SDK was also announced, delivering superior audio processing from multichannel sources for more accurate speech recognition, including noise cancellation, far-field voice and more. With this technology, developers can build a variety of voice-enabled scenarios like drive-through ordering systems, in-car or in-home assistants, smart speakers, and other digital assistants.
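Although the Speech Devices SDK targets dedicated multichannel hardware, the basic shape of a recognition call against the speech service can be sketched with the companion Speech SDK for Python. This is an illustrative sketch, not a Devices SDK sample: it assumes the azure-cognitiveservices-speech package is installed, and the subscription key and region are placeholders:

```python
# Sketch only: requires the azure-cognitiveservices-speech package
# and a valid Azure Speech resource key/region.
import azure.cognitiveservices.speech as speechsdk

# Placeholder credentials -- substitute your own Speech resource values.
speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")

# With no audio config supplied, the default microphone is the audio source.
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)

# Blocks until a single utterance has been recognised.
result = recognizer.recognize_once()
if result.reason == speechsdk.ResultReason.RecognizedSpeech:
    print("Recognised:", result.text)
```

The Devices SDK layers the multichannel audio processing (noise cancellation, far-field capture) in front of this same recognition pipeline.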

Azure Cosmos DB updates include new multi-master capabilities at global scale, designed to support both the cloud and the edge, along with the general availability of virtual network (VNET) support for increased security. With these developments, the Cosmos DB database service can deliver greater cost-effectiveness and global scale.

A preview of Project Brainwave, an architecture for deep neural net processing, is now available on Azure and on the edge. Project Brainwave makes Azure the fastest cloud to run real-time AI and is now fully integrated with Azure Machine Learning. It also supports Intel FPGA hardware and ResNet50-based neural networks.

New Azure Cognitive Services updates include a unified speech service with improved speech recognition and text-to-speech, including support for customised voice models and translation. Along with Custom Vision, these updates make it easier for any developer to add intelligence to their applications.

Microsoft is making Azure the best place to develop conversational AI experiences integrated with any agent. New updates to Bot Framework and Cognitive Services will enable the next generation of conversational bots to work with richer dialogues, plus full personality and voice customisation to match a company’s brand identity.

A preview of Azure Search with Cognitive Services integration is now available. This new feature combines AI with indexing technologies so it is possible to quickly find information and insights, whether via text or images.

Microsoft also demonstrated mixed-reality capabilities to enable richer experiences that understand the context surrounding people, the things they use, their activities and relationships:

In addition to Project Kinect for Azure, Microsoft introduced Microsoft Remote Assist, where customers can collaborate remotely with heads-up, hands-free video calling, image sharing, and mixed-reality annotations. For example, field workers can share what they see with any expert on Microsoft Teams, while staying hands-free to solve problems and complete tasks together, faster.

With Microsoft Layout, customers can design spaces in context with mixed reality. One scenario is importing 3D models to create room layouts in real-world scale, experiencing designs as high-quality holograms in physical space or in virtual reality, and sharing and editing with stakeholders in real time.

With Azure Kubernetes Service (AKS), developers can drastically simplify how they build and run container-based solutions without deep Kubernetes experience. Generally available in the coming weeks, AKS integrates with developer tools and workspaces, DevOps capabilities, networking, monitoring tools, and more in the Azure portal, so developers can write code, not stitch services together. In addition, Microsoft is now offering Kubernetes support for Azure IoT Edge devices.

Visual Studio IntelliCode is a new capability that enhances everyday software development with the power of AI. IntelliCode provides intelligent suggestions to improve code quality and productivity and is available in preview today in Visual Studio.

Visual Studio Live Share, also in preview, lets developers easily and securely collaborate in real time with team members who can edit and debug directly from their existing tools like Visual Studio 2017 and VS Code. Developers can use Live Share with any language for any scenario, including serverless, cloud-native and IoT development.

At Build, Microsoft also announced a partnership with GitHub that brings the power of Azure DevOps services to GitHub customers. An integration of Visual Studio App Center and GitHub was released, allowing GitHub developers building apps for iOS and Android devices to seamlessly automate DevOps processes right from within the GitHub experience.

For blockchain developers, the new Microsoft Azure Blockchain Workbench stitches together an Azure-supported blockchain network with cloud services such as Azure Active Directory, Key Vault and SQL Database, significantly reducing proof-of-concept development time.
