
AI at the edge: 5 trends to watch

Edge AI offers opportunities for multiple applications. See what organizations are doing to incorporate it today and going forward.


AI at the edge continues to develop. Edge AI applications are myriad: Autonomous vehicles, art, health care, personalized advertising and customer service could all make use of it. Ideally, edge architecture delivers low latency because computation happens closer to where requests originate.

SEE: Don’t curb your enthusiasm: Trends and challenges in edge computing (TechRepublic) 

Astute Analytica predicts the edge AI market will grow from $1.4 billion in 2021 to $8 billion by 2027, a CAGR of 29.8%. The firm expects this growth to come largely from AI for the Internet of Things, wearable consumer devices and the need for faster computing in 5G networks, among other factors. These drivers bring both opportunity and risk, because edge AI’s real-time data is vulnerable to cyberattacks.
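For readers who want to check figures like these, the compound annual growth rate is a simple formula: the constant yearly rate that carries a start value to an end value over a given number of years. The sketch below uses the report's endpoints; note that the exact percentage depends on which years a report treats as the start and end of the period, so a published CAGR may differ slightly from a naive endpoint calculation.

```python
# Compound annual growth rate (CAGR): the constant yearly growth
# rate that takes `start` to `end` over `years` years.
def cagr(start: float, end: float, years: int) -> float:
    return (end / start) ** (1 / years) - 1

# Astute Analytica's endpoints: $1.4B in 2021 to $8B by 2027.
rate = cagr(1.4, 8.0, 2027 - 2021)

# Sanity check: growing the start value at this rate recovers the end value.
assert abs(1.4 * (1 + rate) ** 6 - 8.0) < 1e-9
```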

Take a look at five trends likely to shape the field of edge AI in the next year.

Top 5 edge AI trends

Separating AI from the cloud

One of today’s sea changes is the ability to run AI processing without a cloud connection. Arm recently released two new chip designs that can bring processing power to the edge for IoT devices, without relying on a remote server or the cloud. Its current Cortex-M processor can handle object recognition; other abilities, such as gesture or speech recognition, come into play with the addition of Arm’s Ethos-U55. Google’s Coral, a toolkit for building products with local AI, also promises hefty AI processing offline.
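The pattern described here can be sketched as a local-first inference fallback: run the model on-device when possible and reach for a cloud endpoint only when the local path is unavailable. All names below are illustrative stand-ins, not a real Arm or Coral API.

```python
# Local-first inference sketch for an edge device (hypothetical API).
from typing import Callable, Optional

def classify(frame: bytes,
             local_model: Optional[Callable[[bytes], str]],
             cloud_model: Callable[[bytes], str]) -> str:
    """Prefer on-device inference; use the cloud only as a fallback."""
    if local_model is not None:
        try:
            return local_model(frame)   # no network round-trip: low latency
        except RuntimeError:
            pass                        # e.g. accelerator busy or model not loaded
    return cloud_model(frame)           # higher latency, needs connectivity

# Stub models standing in for real accelerated inference.
result = classify(b"\x00" * 16,
                  local_model=lambda f: "object:cup",
                  cloud_model=lambda f: "object:cup")
```

The design keeps latency-sensitive work on the device while degrading gracefully when local hardware cannot serve a request.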

Machine learning ops

NVIDIA predicts that best practices in machine learning operations (MLOps) will prove a valuable business process for edge AI. Edge AI calls for a new production lifecycle for IT — or at least that’s the speculation as MLOps develops. MLOps could help organize and push the flow of data to the edge, and a continuous cycle of updates may prove effective as more organizations find out what works best for them when it comes to edge AI.

Data scientists who design algorithms, choose model architectures, and deploy and monitor ML on a day-to-day basis may benefit from simplified, automated ML methods.

With automated machine learning, “it’s possible for neural nets to design neural nets,” as Google CEO Sundar Pichai has put it.

AutoML requires a lot of memory and computational power, so deploying it at the edge goes hand in hand with other ongoing processing considerations.
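The core AutoML idea — a search procedure proposes candidate model configurations and keeps the best one, rather than a human hand-tuning the architecture — can be illustrated with a toy random search. Real AutoML and neural-architecture-search systems are far more sophisticated; the scoring function below is a stand-in for training and validating each candidate.

```python
# Toy random search over model configurations (illustrative only).
import random

def evaluate(config: dict) -> float:
    # Stand-in for train + validate; this toy score happens to
    # prefer mid-sized models (a hypothetical accuracy proxy).
    return -abs(config["layers"] - 4) - abs(config["width"] - 64) / 64

def random_search(trials: int, seed: int = 0) -> dict:
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(trials):
        config = {"layers": rng.randint(1, 8),
                  "width": rng.choice([16, 32, 64, 128])}
        score = evaluate(config)
        if score > best_score:
            best, best_score = config, score
    return best

best = random_search(trials=50)
```

Each trial is a full train-and-evaluate cycle in practice, which is exactly why AutoML is memory- and compute-hungry and why running it at the edge is non-trivial.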

Specialized chips

To do more processing at the edge, companies need custom chips that deliver sufficient power. Last year, startup DeepVision made headlines with a $35 million series B financing round for its video analytics and natural language processing chip for the edge.

“We expect 1.9 billion edge devices to ship with deep learning accelerators in 2025,” Linley Group principal analyst Linley Gwennap explained.

DeepVision’s AI accelerator chip pairs with a software suite that essentially transforms AI models into computation graphs. IBM released its first accelerator hardware in 2021, intended to battle fraud.

New use cases and capabilities for computer vision

Computer vision remains one of the prominent uses for edge AI. NVIDIA’s partner network for its application framework and set of developer tools includes over 1,000 members today.

A major development in this area is multimodal AI, which pulls from multiple data sources to go beyond natural language understanding into analyzing poses and performing inspection and visualization. This could come in handy for AI that interacts seamlessly with people, such as shopping assistants.

Higher-order vision algorithms can now classify objects using more granular features. Instead of just recognizing a car, they can go deeper to pinpoint the make and model.

Training a model to recognize which granular features are unique to each object can be difficult. However, approaches such as feature representations with fine-grained information, segmentation to extract specific features, algorithms that normalize the pose of an object and multiple-layer convolutional neural networks are all current ways to enable this.
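One common shape for this kind of fine-grained recognition is a coarse-to-fine pipeline: a coarse stage picks the broad category ("car"), then a category-specific stage picks the granular label (make and model). The sketch below stubs both stages as simple callables; in practice each would be a trained vision model, and the labels are hypothetical.

```python
# Coarse-to-fine classification sketch; classifiers are stubbed callables.
from typing import Callable, Dict

def make_pipeline(coarse: Callable[[bytes], str],
                  fine: Dict[str, Callable[[bytes], str]]) -> Callable[[bytes], str]:
    def classify(image: bytes) -> str:
        category = coarse(image)          # broad class, e.g. "car"
        refiner = fine.get(category)      # category-specific model, if any
        return refiner(image) if refiner else category
    return classify

pipeline = make_pipeline(
    coarse=lambda img: "car",
    fine={"car": lambda img: "car/able-motors-roadster"},  # hypothetical label
)
label = pipeline(b"...image bytes...")
```

Splitting the problem this way lets each fine-grained model specialize on the granular features of one category, mirroring the segmentation and feature-representation approaches described above.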

Enterprise use cases in their infancy include quality control, live supply chain tracking, identifying an interior location from a snapshot and detecting deepfakes.

Increased growth of AI on 5G

5G and beyond are almost here. Satellite networks and 6G wait on the horizon for telecommunications providers. For the rest of us, there’s still a transition period in which 4G core networks carry some 5G services before the full jump to the next generation.

Where does this touch edge AI? AI on 5G could lend greater performance and security to AI applications. It could provide some of the low latency edge AI requires, as well as open up new applications such as factory automation, tolling and vehicle telemetry, and smart supply chain projects. Mavenir introduced edge AI with 5G for video analytics in November 2021.

There are more emerging trends in edge AI than we can fit on one list. In particular, its proliferation might require some change on the human side as well. NVIDIA predicts edge AI management will become a job for IT, likely using Kubernetes. Using IT resources instead of having the line of business manage edge solutions can optimize costs, Gartner reported.

Source: TechRepublic