Network Effects: In 2019 IoT And 5G Will Push AI To The Very Edge
Almost 30 years ago, when the internet was launched onto an unsuspecting world, even inventor Tim Berners-Lee and his colleagues at CERN could not have predicted the upheaval that would follow. It has been the greatest technology revolution since the Industrial Revolution itself. And now it’s happening all over again. The combination of cloud, IoT, and artificial intelligence (AI) is driving opportunity and threat in equal measure. Decisions made within organizations now will have an impact for years to come. The way in which the IoT edge links to the broader cloud backend, and the way in which AI integrates across the full processing chain, will be the key to unlocking material innovation and value.
After many years of rationalization and stretched infrastructure investments, the IoT represents a tipping point for telcos, the operators of the cellular networks on whose backbones the new IoT offerings will be delivered. For some time now, innovation in the sector has been driven by leading web services and platform providers, by global Over The Top (OTT) players who offer communications and streaming services across IP networks, and by the more innovative hardware and application developers. Broadly speaking, telcos have not yet monetized the data services traversing their networks to anything like the extent they would have hoped. Playing the link role between platforms and IoT devices has the potential to change all that, if they execute quickly and intelligently. And for many, that will mean moving past traditional bureaucracy and behaving more like the OTT players themselves.
With the emergence of 5G, telcos can embed networking across the entire processing chain, delivering the fusion of cloud and edge. This will require collaboration with back-end storage and analytics platforms and with edge silicon and edge hardware developers. If successful, the networks will position themselves as a center of innovation. IoT devices tend not to be churned for shinier models. And if those IoT devices deliver services architected by the networks, linking to cloud analytics and storage platforms, then the status quo, so skewed in favor of Apple, Google, and others, might rebalance.
Cloud vs. edge
It doesn’t seem so long ago that the cloud itself was the innovation. Now the Cisco Global Cloud Index predicts cloud processing will reach 94% market share by 2021, leaving traditional data centers far behind. Cloud IP traffic is set to grow at a CAGR of 27% from 2016 through 2021. Overall, Gartner forecasts that cloud will account for 28% of enterprise IT spending in primary segments by 2023, up from 19% this year, reaching some $1.3 trillion. Microsoft’s Azure business unit, second only to market leader AWS, has now posted twelve straight quarters of revenue growth, usually 90% or better, albeit “only” 76% in its most recent period.
As fast as cloud computing and cloud-based AI have centralized and virtualized, IoT is now set to distribute and fragment as it races to network billions of physical devices to make life easier and more automated. IHS Markit forecasts 125 billion devices by 2030, up from 27 billion last year. And Intel predicts the value of IoT technology to be as much as $6 trillion by 2025. Accenture estimates this impact on industry could add $14 trillion to the global economy by 2030. It describes the manufacturing and industrial processes share of IoT, the Industrial Internet of Things, as “the biggest driver of productivity and growth in the next decade … accelerating the reinvention of sectors that account for almost two-thirds of world output.” IoT in all its guises will drive the annual growth in data transmission from 25% to 50%. It will also shift processing from the cloud to the edge: there is too much data, it’s too indiscriminate, too centralized, and it takes too long to access. According to Accenture’s 2018 Technology Vision Report, this “internet of thinking” extends intelligence from cloud to edge. “To fully enable real-time intelligence, businesses must shift event-driven analysis and decision processing closer to points of interaction and data generation. Delivering intelligence in the physical world means moving closer to the edge of networks.”
Above all, though, the cloud vs. edge debate will be shaped by the imperatives of AI. The shift to cloud computing broadened the reach of big data, but its real legacy is the first phase of AI: smarter search, machine learning, natural language processing. Professor Stephen Hawking said of AI that “every aspect of our lives will be transformed. In short, success in creating AI could be the biggest event in the history of our civilization.” There is no doubt that AI will have a vast economic impact on the world. According to Accenture: “AI could double [baseline] annual economic growth rates [by] 2035, changing the nature of work and creating a new relationship between man and machine.”
So, there is clearly an emerging shift from cloud to edge; but think of this as a redistribution rather than a rethink. The AI-driven Cloud-Edge architecture is finding its balance. No device or application at the edge will ever be able to compete with the cloud’s capacity for data storage and processing. But AI feeds on data. Ever more data means ever more latency. And it is latency and connection resilience as much as bandwidth usage that is driving the shift to edge processing. “Computing will become an increasingly movable feast,” suggested an Economist article earlier this year. “Processing will occur wherever it is best placed for any given application.” For AI to thrive it needs both the cloud and intelligent edge computing in the form of smart IoT devices. And this will drive the fusion of cloud and edge, rather than reliance on one over the other.
Cloud vs. edge becomes the fusion of cloud and edge
If the fusion of cloud and edge IoT is the solution, what happens when the complexity of real-time analysis is beyond the limited processing power of an edge IoT device, but there is too much data to upload to the cloud? How can networks of sensors work together if the data being analyzed cannot be instantly shared as required? How can people make real-time decisions without access to the actual data on which machine inferences have been drawn? And what about the inevitable variation in edge capability by type of device? These, in a nutshell, are the real-world challenges of applying AI to the analysis of live video with low latency. And video is the defining example with which to explore the architecture of cloud to edge AI.
Video analytics underpins many of the headline applications of AI: autonomous vehicles, robotics, smart cities, security, defense. Video, which already accounts for 90% of network traffic, mostly as downlinked entertainment, will seriously challenge network performance as cameras uplink content over cellular connections. This is particularly true of security, with real-time video surveillance. Here, the challenge is maintaining high resolution with low latency: high resolution for analytics, low latency for camera controls and live situational awareness. Security and surveillance video traffic will see a seven-fold increase over the next four years, fueled by IoT cameras, the shift to cloud video platforms, and the rapid growth in AI analytics, including facial recognition, object classification, and behavioral analytics.
In addition to bandwidth constraints, there is, quite literally, a limit to cloud storage. The video produced in a single year by the world’s currently deployed CCTV cameras, most of which are standard definition, exceeds the planet’s current data center storage capacity. We couldn’t stream and store all that data even if we wanted to. Not only does analytics provide immediate intelligence, it also filters the video that needs to be stored from the material that can be discarded.
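The scale of that storage claim is easy to sanity-check with a back-of-envelope calculation. The figures below are illustrative assumptions, not sourced data: a notional worldwide camera count and an average standard-definition bitrate.

```python
# Back-of-envelope estimate of annual CCTV video volume.
# CAMERAS and SD_BITRATE_MBPS are assumed, illustrative values.

CAMERAS = 770_000_000          # assumed installed CCTV cameras worldwide
SD_BITRATE_MBPS = 1.0          # assumed average SD stream, in megabits/sec
SECONDS_PER_YEAR = 365 * 24 * 3600

# Total raw video generated per year, converted to exabytes (1 EB = 1e18 bytes)
bits_per_year = CAMERAS * SD_BITRATE_MBPS * 1e6 * SECONDS_PER_YEAR
exabytes_per_year = bits_per_year / 8 / 1e18

print(f"Annual CCTV output: ~{exabytes_per_year:,.0f} EB")
```

Under these assumptions the answer lands in the thousands of exabytes per year, several times the installed data center capacity of the period, which is why edge filtering, rather than wholesale upload, is the only workable approach.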
The majority of the billions of IoT devices will be wireless, and a significant number of these will be mobile: biometric and environmental sensors, vehicles, drones, wearables, smart devices. Only cellular networks provide the reach, resilience, scale, and security to connect these devices, and the networks themselves are undergoing a major overhaul. Where NB-IoT provides major improvements in battery life and range, 5G delivers lower latency and higher bandwidth. Both deliver massive scale in terms of numbers of devices. But wireless bandwidth will always be a finite resource; only so many video-generating security cameras can be streamed in parallel with the millions of video-consuming smartphones. Good architecture and good software need to factor in wireless connectivity, which in the real world can’t always be guaranteed.
Edge-AI video analytics
The main complexity with video arises when transmitting the video itself, rather than the metadata or the inferences. If the video is analyzed on the device, then it’s manageable. But if the video needs to be networked as part of a real-time decision tree, then the picture changes. The urban autonomous car and the urban battlefield might use the same core video analytics, but under very different conditions.
By its nature, an autonomous car is a hive of connectivity, plugged into a cloud architecture of information and data processing: actual driving conditions; behaviors and learnings to be assessed and improved; mechanical health checks for predictive maintenance; journey routes and times. The largely, but not exclusively, video-based processing of the road and its surroundings, including traffic, pedestrians and fixed objects, has to be processed in the car itself. 5G brings advances in latency, bandwidth, and capacity that will enable many new applications, including automotive, but real-time, safety-critical applications will always need to be performed locally. An autonomous vehicle has an absolute imperative to complete its defined journey safely and efficiently. Any further processing of video and other data performed locally or in the cloud is of secondary importance.
On the military front, Battlefield 2.0 is very different. Immediate frontline decisions will win or lose ground in an arena of shifting and uncertain parameters. But one thing that is certain is that modern warfare will be increasingly networked and distributed. And battlefields, urban or remote, foreign or domestic, are not known for reliable connectivity. They are at the mercy of disrupted infrastructure, physical and cyber attack and subject to massive data overload. The Internet of Battlefield Things (IoBT) envisages unlimited numbers of machines capturing unlimited amounts of data, with AI taking lower-level decisions and filtering intelligence for wider cloud processing or higher-level human decision making. By volume, most of the data being transmitted will be video: overcoming bandwidth constraints and latency becomes a prerequisite in making IoBT work.
And on the urban front, advanced safe city surveillance sits somewhere in between. A great deal of steady-state analysis can remain at the edge, but there is the need, certainly with more advanced systems, to network actual video for further analysis and, of course, human review and higher-level decision making. In real-time response to events, in law enforcement, in public safety, the people in control rooms and the cloud processing that supports them need real-time access to key visual data. Dedicated edge processing can be overly simplified or fixed. As a system learns and develops, its outlying nodes should do the same. This also unlocks material value from clustering.
Intelligently distributed architecture
To deliver the full cloud-edge architecture for live video, there have to be limits on the amount of data that needs to move at low latency in real time. Systems at the edge should sit within a distributed architecture that can operate in either an online or offline mode. A distributed architecture is not simply a structure; it needs to be proofed against real-world conditions and adaptable. Data needs to be focused and sized for live transmission and processing. Edge and central systems need to work in lock-step. Available bandwidth needs managing efficiently.
One example of a distributed architecture is AI-based human detection in edge surveillance analytics. A material percentage of all security and surveillance-related analytics would benefit from improved human detection. Is there someone in the scene: hiding, running, crawling, climbing, walking? Is there a false alert caused by something else, or even environmental interference? Such systems can be deployed and then iteratively improved at the edge. But as analytics advances, the ‘so what’ will be asked more and more often. Do we know the person or persons? What are they doing? Are there any markers for bad behavior? Any anomalies? Some of this requires linkage to a wider system. Make the scene a crowded public space and the analytics becomes more complex. A processing chain, edge to center, is required to manage the workload. And then there’s clustering: where a system of multiple sensors is more capable than the sum of its parts. And this means intelligent connectivity.
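The edge-to-center chain described above can be sketched in a few lines. This is a minimal illustration, not a real product design: `detect_person` is a hypothetical stand-in for any on-device model, and the threshold, buffering, and upload behavior are assumptions for the sake of the example.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Frame:
    frame_id: int
    pixels: bytes

def detect_person(frame: Frame) -> float:
    """Hypothetical on-device detector returning a confidence score.
    A toy stand-in: flags every tenth frame as containing a person."""
    return 0.9 if frame.frame_id % 10 == 0 else 0.1

@dataclass
class EdgeNode:
    threshold: float = 0.5      # assumed confidence cutoff for forwarding
    online: bool = True
    buffer: List[Frame] = field(default_factory=list)
    uploaded: List[int] = field(default_factory=list)

    def process(self, frame: Frame) -> None:
        score = detect_person(frame)
        if score < self.threshold:
            return                                  # discard at the edge
        if self.online:
            self.uploaded.append(frame.frame_id)    # forward flagged frames only
        else:
            self.buffer.append(frame)               # degrade gracefully offline

    def reconnect(self) -> None:
        """Flush locally buffered detections once connectivity returns."""
        self.online = True
        self.uploaded.extend(f.frame_id for f in self.buffer)
        self.buffer.clear()

node = EdgeNode()
for i in range(30):
    node.process(Frame(frame_id=i, pixels=b""))
print(node.uploaded)
```

The design point is the ratio: of 30 frames processed, only the handful the edge flags ever cross the network, which is exactly the bandwidth and storage filtering the article argues for, while the offline buffer reflects the requirement that the architecture tolerate unreliable connectivity.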
For intelligent edge devices to leverage the power and scale of big data, live video requires intelligent connectivity, real-time, between and across networks. This is the real vision for Edge-AI. Not the rush to equip edge devices and sensors with new generations of AI silicon and dedicated GPUs with little thought to future evolution. Distributed processing, designed to balance efficient edge applications with a higher-capacity center, built to deliver low latency, enabling split-second decision making on real-time data. Many IoT video devices will also be mobile, with additional stress on video codecs given the frame to frame scene change from a moving sensor. 5G is imminent but not a panacea for the sheer scale of the networking challenge. Quality of service and universality of an offering will be front of mind. Solutions need to tolerate issues with congestion and coverage. Networking between cloud and edge has to be designed into the architecture of solutions.
Every cloud …
We are now at the very earliest stages of the development of IoT-cloud architectures that underpin AI applications. Most of these won’t touch video, and where they do it will be cut back, analyzed and processed. However, in applications that are real-time and unpredictable, and this includes security, defense, and public safety, there will be a need for adaptations of the core distributed architecture.
There’s a lot written about the realistic level of autonomy we can expect from AI in the next generation. What is certain is that the art of the possible relies on intelligent connectivity. Edge devices. Edge analytics. The cloud. Central applications and analytics. This is the architecture for the new wave of IoT business applications. Consequently, the telcos occupy a unique position in this ecosystem, which will rely on the quality and resilience of cellular networks at the same time as it respects their limitations.
Now it’s all down to execution. The sheer scale of the prize in IoT and AI will sift winners from losers at an unprecedented scale, and there will inevitably be rationalization and consolidation. But for those that come out on top, there’s the potential to put in place sticky revenue models that will persist for a generation. If the question is edge or cloud, the answer is yes. The immediate next question, though, is how.
This blog is provided for informational purposes only and may require additional research and substantiation by the end user. In addition, the information is provided “as is” without any warranty or condition of any kind, either express or implied. Use of this information is at the end user’s own risk. CenturyLink does not warrant that the information will meet the end user’s requirements or that the implementation or usage of this information will result in the desired outcome of the end user.