Press "Enter" to skip to content

Microsoft Azure: This new developer kit helps to bust the myth that AI is hard

Microsoft’s new Azure Percept Developer Kit aims to make computer vision cheaper and easier, bringing AI to more businesses.

The Azure Percept Development Kit (DK) comprises vision (right), optional audio (left) and developer board (middle) modules.

Image: Microsoft

More and more sensors are being added at the edge of the network, using tools like Azure IoT Hub to connect them to cloud services, where the data they generate can be put to best use. But too many of those devices are custom, requiring significant development to get that data into the right format, in the right place. 
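To make that pattern concrete, here's a minimal sketch of a sensor pushing readings to Azure IoT Hub using the azure-iot-device Python SDK. The connection string and sensor values are placeholders; a real device would get its credentials from its IoT Hub registration.

```python
# pip install azure-iot-device
import json
import time

from azure.iot.device import IoTHubDeviceClient, Message

# Placeholder connection string; in practice this comes from the
# device's registration in your IoT Hub.
CONNECTION_STRING = "HostName=<your-hub>.azure-devices.net;DeviceId=<device-id>;SharedAccessKey=<key>"

def main():
    client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
    client.connect()
    try:
        for reading in range(3):
            # Package a (made-up) sensor reading as a JSON device-to-cloud message.
            payload = json.dumps({"sensor": "temp-01", "celsius": 21.5 + reading})
            client.send_message(Message(payload))
            time.sleep(1)
    finally:
        client.disconnect()

if __name__ == "__main__":
    main()
```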


Exploiting this growing industrial IoT to best effect requires software and hardware engineers to work with device firmware, learn new real-time operating systems, and think about security at a very low level. It’s a complex field, where it can be hard to get benefits unless you have the resources to set up a dedicated development team. 


But what if the industrial IoT was truly industrial, built using standards, with the ability to fit together like rugged Lego bricks? And what if there was a common development environment that helped you connect APIs and services to build your own machine-learning powered IoT applications? 

Unveiling the Azure Percept DK 

That’s the idea behind Azure Percept, which Microsoft recently launched. Like Microsoft’s other IoT services, it mixes hardware, software, and the Azure cloud. At the heart of the platform is a set of reference designs for edge hardware that takes advantage of machine learning. Hardware developers will be able to take those designs and build their own devices, while adding their own features, such as a custom camera module or different radios. Designs could also be tailored for different industries, with different systems in warehouses or on oil rigs. Azure Percept is intended to be a family of plug-and-play IoT hardware from multiple vendors, where different designs use the same software platform. 

While having reference designs is one part of the story, edge hardware needs software. So Microsoft is shipping an initial developer kit to kickstart the Percept ecosystem. Available from the Microsoft Store, it consists of a hub and a camera, with an optional audio sensor. The basic developer kit costs $349, with the audio sensor an extra $79. The modules are designed to fit the standard 80/20 mounting rails found in many industrial facilities, so they can be attached to existing rails or quickly installed in any space. 

The main Percept DK module is built around NXP’s i.MX 8M system-on-module processor board, with 4GB of RAM, 16GB of storage and a TPM for security. As well as four 64-bit Arm cores, it has additional acceleration for machine-learning workloads in the shape of an Intel Movidius Myriad X vision processing unit dedicated to neural network inference. This allows it to offload much of Percept’s on-board image processing, saving both CPU time and power. 

Connectivity comes via Ethernet, Wi-Fi or Bluetooth. It uses Microsoft’s own CBL-Mariner Linux distribution, with management and update services from Azure. The camera module connects to the main carrier board over USB-C, and Microsoft suggests that you can go from opening the box to getting images in under 10 minutes. 

Getting started with Percept 

You don’t need an 80/20 rail to get started, as the devices can be set up next to your development PCs, so you can quickly see how they work. All you need to do is plug in power, attach the antennas, and then connect the camera unit via USB. Once it’s powered up, you can start initial configuration over Wi-Fi. A set of web pages guides you through connecting to a Wi-Fi network, before configuring SSH. Once it’s ready, it connects to Azure, where it needs to be registered in your account, linking the Percept DK to an Azure IoT Hub (either creating a new instance or joining an existing one). You have to use a Standard tier instance, as Percept is not supported on Free or Basic instances. 
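That registration step can also be scripted. Here's a minimal sketch using the azure-iot-hub Python SDK to create a device identity on a hub; the service connection string and device ID are placeholders, and the SAS keys are generated locally for illustration.

```python
# pip install azure-iot-hub
import base64
import os

from azure.iot.hub import IoTHubRegistryManager

# Placeholder service connection string for your Standard tier IoT Hub.
IOTHUB_CONNECTION_STRING = "HostName=<your-hub>.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=<key>"
DEVICE_ID = "percept-dk-01"  # hypothetical device name

registry_manager = IoTHubRegistryManager.from_connection_string(IOTHUB_CONNECTION_STRING)

# Register a device identity using SAS authentication with locally
# generated symmetric keys.
device = registry_manager.create_device_with_sas(
    device_id=DEVICE_ID,
    primary_key=base64.b64encode(os.urandom(32)).decode(),
    secondary_key=base64.b64encode(os.urandom(32)).decode(),
    status="enabled",
)
print(f"Registered {device.device_id}, status: {device.status}")
```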

When the Percept DK connects to Azure for the first time, it will update and download its default software modules. You can then use Azure’s Percept Studio management and development environment to work with the hardware, initially testing streamed video from an AI vision recognition model that’s built into Percept. 
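Percept Studio gives you that view in the portal, but the inference results the device sends back are ordinary device-to-cloud messages, so you can also watch them arrive at the hub's built-in Event Hub-compatible endpoint. A minimal sketch, assuming the azure-eventhub Python SDK and a placeholder endpoint connection string:

```python
# pip install azure-eventhub
from azure.eventhub import EventHubConsumerClient

# Placeholder: the Event Hub-compatible endpoint of your IoT Hub,
# found under "Built-in endpoints" in the Azure portal.
EVENTHUB_CONNECTION_STRING = "Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=service;SharedAccessKey=<key>;EntityPath=<hub-name>"

def on_event(partition_context, event):
    # Each event is one message from a device, e.g. a Percept inference result.
    print(f"Partition {partition_context.partition_id}: {event.body_as_str()}")

client = EventHubConsumerClient.from_connection_string(
    EVENTHUB_CONNECTION_STRING, consumer_group="$Default"
)
with client:
    # Read from the start of each partition; blocks until interrupted.
    client.receive(on_event=on_event, starting_position="-1")
```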


Getting started quickly is a definite benefit, as you can show results fast. To help you go beyond the basic recogniser, there are sample vision models based on common business problems. You can quickly deploy tools for detecting people or identifying empty shelves, for example, without writing a line of code. 

That low- and no-code approach to practical AI vision is key to Percept; what’s important here is what you can do with machine learning and computer vision (and audio) in your business. Once you’ve connected your Percept system to Azure IoT Hub, you can use the Azure-hosted Percept Studio development tools to build your own applications, connecting together various APIs and delivering code modules to your devices. 

Azure Percept Studio has a number of sample AI models, such as these for computer vision.
Image: Microsoft

Building your first Percept applications 

Starting with Percept Studio is like working with any Azure tool in that everything you create needs to be assigned to a resource group and a pricing tier, in this case for Azure’s Cognitive Services, which provides the machine-learning APIs used by Percept. Once you’ve done the basic resource setup, you can quickly configure a vision solution. Start by choosing whether you’re detecting or classifying objects. You don’t need to choose a target device type, as Percept Studio handles that for you automatically. 
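Percept Studio drives this from the portal, but the underlying Custom Vision service also has a public Python SDK, so the same setup can be sketched in code. Everything here (endpoint, key, project name) is a placeholder; the domain choice mirrors the detect-or-classify decision in the portal, using a compact classification domain that can later be exported for edge use.

```python
# pip install azure-cognitiveservices-vision-customvision
from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient
from msrest.authentication import ApiKeyCredentials

# Placeholders: your Custom Vision training endpoint and key from Azure.
ENDPOINT = "https://<region>.api.cognitive.microsoft.com"
TRAINING_KEY = "<training-key>"

credentials = ApiKeyCredentials(in_headers={"Training-key": TRAINING_KEY})
trainer = CustomVisionTrainingClient(ENDPOINT, credentials)

# Pick a compact (edge-exportable) classification domain; "ObjectDetection"
# domains are the other option, matching the choice Percept Studio offers.
domain = next(
    d for d in trainer.get_domains()
    if d.type == "Classification" and d.name == "General (compact)"
)
project = trainer.create_project("shelf-monitor", domain_id=domain.id)
print(f"Created project {project.name} ({project.id})")
```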

Next, start to train your model, with at least 30 images captured from the Percept camera module stream. You can automate this capture process, for example if you’re building an app intended to monitor a space. Once images have been captured and uploaded to Percept Studio, you can start to tag them. Labels are essential for machine learning, as they mark up elements of an image, ensuring that your app can identify specific objects or occurrences. Manually tagging a set of images and then running them through a training cycle is probably the most time-consuming part of building a basic ML application. 
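Continuing the hypothetical Custom Vision sketch above (reusing the trainer and project objects), tagging, uploading and training look something like this. The tag names and image paths are made up, and a detection project would additionally need bounding-box regions on each image.

```python
import time

from azure.cognitiveservices.vision.customvision.training.models import (
    ImageFileCreateBatch,
    ImageFileCreateEntry,
)

# Hypothetical labels; Percept Studio's tagging UI does the same job visually.
# Custom Vision classification needs at least two tags, with examples of each.
tags = {
    "empty-shelf": trainer.create_tag(project.id, "empty-shelf"),
    "stocked-shelf": trainer.create_tag(project.id, "stocked-shelf"),
}

# Upload locally captured frames, assumed to live at frames/<tag-name>/NNN.jpg.
entries = []
for tag_name, tag in tags.items():
    for i in range(15):  # at least ~30 images overall, split across the tags
        path = f"frames/{tag_name}/{i:03d}.jpg"
        with open(path, "rb") as image_file:
            entries.append(
                ImageFileCreateEntry(
                    name=path,
                    contents=image_file.read(),
                    tag_ids=[tag.id],
                )
            )

upload = trainer.create_images_from_files(project.id, ImageFileCreateBatch(images=entries))
if not upload.is_batch_successful:
    raise RuntimeError("Image upload failed")

# Kick off a training run and poll until the iteration completes.
iteration = trainer.train_project(project.id)
while iteration.status != "Completed":
    time.sleep(5)
    iteration = trainer.get_iteration(project.id, iteration.id)
print(f"Training finished: {iteration.status}")
```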

Percept Studio provides tools for testing a model and retraining where necessary. Don’t expect to get things right first time; your model gets better the more examples it has to work with. Once you’re happy with the results, you can deploy your model to your Percept devices and start it running. 
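In the same hypothetical sketch, a quick test and publish step might look like this: quick_test_image runs a single held-out image through the trained iteration, and the prediction resource ID is a placeholder for your own Azure resource.

```python
# Quick-test a held-out image against the newly trained iteration.
with open("frames/test_shelf.jpg", "rb") as test_file:
    result = trainer.quick_test_image(project.id, test_file.read(), iteration_id=iteration.id)
for prediction in result.predictions:
    print(f"{prediction.tag_name}: {prediction.probability:.2%}")

# Happy with the results? Publish the iteration so it can serve predictions.
trainer.publish_iteration(
    project.id,
    iteration.id,
    publish_name="shelf-monitor-v1",
    prediction_id="/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.CognitiveServices/accounts/<prediction-resource>",
)
```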

Percept is capable of a lot, as it’s built on top of Azure’s Custom Vision tools, which are part of its Cognitive Services machine-learning suite. There’s an additional pack of development tools that can be downloaded to help build more complex solutions, with a GitHub repository to help you get started. This gives you access to the software used to run the AI modules, as well as tools to help you train and deploy your own neural networks. 
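If you do want to take a trained model into your own pipeline, compact Custom Vision domains support export. Continuing the sketch, requesting an ONNX export and polling for the download link might look like this (the platform choice here is an assumption; the service offers other formats too):

```python
import time

# Request an export of the trained iteration in ONNX format.
trainer.export_iteration(project.id, iteration.id, platform="ONNX")

# Poll until the export is ready, then grab its download URI.
while True:
    export = trainer.get_exports(project.id, iteration.id)[0]
    if export.status == "Done":
        print(f"Download the model from: {export.download_uri}")
        break
    if export.status == "Failed":
        raise RuntimeError("Model export failed")
    time.sleep(5)
```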

Microsoft is attempting something quite ambitious with Percept: providing a reference design for AI sensor hardware and the tools to build applications around it. There’s a myth that AI is hard, and it’s one the Percept team clearly aims to help you bust. No-code solutions get you started quickly, ready to deploy on relatively cheap hardware, while you can build more complex, custom neural networks on your own hardware. It’s an effective mix that should grow with you as you gain experience with both computer vision and audio processing. 

