Edge Encoding for the Video Cloud
As the industry adapts to the ongoing growth in video streaming volumes and applications, cloud computing and video distribution companies need to reconsider where they expand video encoding capacity in their networks. Historically, video encoding was most often processed inside centralized data centers to consolidate operations and achieve economies of scale. However, the costs associated with video distribution, along with the low-latency encoding requirements of emerging interactive video applications, are motivating video distribution network architects to reevaluate their edge encoding options.
Video Encoding Essential for Streaming Services
Video streaming is usually delivered to end user devices using adaptive bitrate (ABR) technologies. For ABR to work, an encoding ladder needs to be prepared from each high-bitrate source video input. As a general benchmark, an encoding ladder is a set of roughly 6 to 10 renditions of the exact same video content at progressively lower bitrates and resolutions, and the total bitrate of the full ladder typically comes to ~2.5x that of the original high-bitrate source video input.
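The ladder arithmetic above can be sketched in a few lines. The rung resolutions and bitrates below are hypothetical examples, chosen so the totals line up with the article's ~2.5x benchmark; real ladders vary by service and codec.

```python
# Illustrative ABR encoding ladder (hypothetical rungs, not any real service's ladder).
# Each rung is (resolution label, bitrate in kbps) derived from one source input.

SOURCE_KBPS = 8_000  # hypothetical high-bitrate source feed

LADDER = [
    ("1080p", 6_000),
    ("1080p", 4_500),
    ("720p",  3_000),
    ("720p",  2_200),
    ("540p",  1_600),
    ("432p",  1_100),
    ("360p",    730),
    ("270p",    365),
]

total_kbps = sum(kbps for _, kbps in LADDER)
ratio = total_kbps / SOURCE_KBPS  # multiple of the source bitrate shipped downstream
print(f"{len(LADDER)} rungs, total {total_kbps} kbps, {ratio:.2f}x the source bitrate")
```

With these assumed numbers, the eight rungs sum to roughly 2.4x the source bitrate, which is the multiplier that drives the distribution-cost discussion below.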
Alternative Locations for Encoding in the Video Cloud
Centralized DC: Video encoding requires a great deal of computing hardware or specialized encoder equipment, especially for live video streaming, making centralized data centers (DCs) the natural choice for building encoding capacity with economies of scale. Operators can consolidate encoding operations into just a few centralized locations, leading to the lowest encoding infrastructure investment, but not necessarily the lowest overall operating costs. Meanwhile, ABR streaming servers are typically hosted closer to the users in regional data centers or points of presence (POPs) to minimize latency and maximize quality of experience for the concentration of video consumers in each region. The operational challenge is that distributing full encoding ladders from centralized encoding farms to the regional locations requires ~2.5x the bandwidth, and thus ongoing networking cost, of distributing only the original high-bitrate source video to each regional location.
Regional DC: In today’s core networks, central and regional data centers are often connected with dedicated, high-bandwidth point-to-point links which are expensive to operate, especially during peak video streaming periods. A regional encoding strategy would increase encoding processing CAPEX across a larger number of regional data center or POP locations, but the benefit is that only the original high-bitrate source video needs to be distributed to each region. A total cost of ownership (TCO) analysis over a multi-month planning period, accounting for both video distribution and encoding costs, would show that a regional encoding strategy can be more economical.
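A minimal TCO sketch makes the trade-off concrete. Every figure below (region count, link pricing, encoder CAPEX, planning period) is a hypothetical assumption for illustration, not vendor pricing; the structure is what matters: centralized encoding pays the 2.5x ladder multiplier on every core link, while regional encoding pays encoder CAPEX in every region but ships only the 1x source feed.

```python
# Back-of-the-envelope TCO comparison: centralized vs regional encoding.
# All figures are hypothetical assumptions chosen for illustration.

REGIONS = 10                        # regional DCs / POPs fed from the core
SOURCE_GBPS = 1.0                   # aggregate source bitrate of the channel lineup
LADDER_MULTIPLIER = 2.5             # full ABR ladders are ~2.5x the source bitrate
COST_PER_GBPS_MONTH = 3_000         # hypothetical $ per Gbps-month on core links
ENCODER_CAPEX_PER_SITE = 120_000    # hypothetical encoder cost per encoding site
MONTHS = 36                         # planning period

def centralized_tco():
    # One encoding farm; every region receives full ladders (2.5x bandwidth).
    transport = REGIONS * SOURCE_GBPS * LADDER_MULTIPLIER * COST_PER_GBPS_MONTH * MONTHS
    return ENCODER_CAPEX_PER_SITE + transport  # single encoding site

def regional_tco():
    # Encoders in every region; only the source feed (1x) crosses the core.
    transport = REGIONS * SOURCE_GBPS * COST_PER_GBPS_MONTH * MONTHS
    return REGIONS * ENCODER_CAPEX_PER_SITE + transport

print(f"centralized: ${centralized_tco():,.0f}")
print(f"regional:    ${regional_tco():,.0f}")
```

Under these assumptions the regional strategy comes out ahead; with cheaper core links or fewer months the balance can tip the other way, which is why the article frames this as a planning-period TCO question rather than a fixed rule.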
Edge DC: Many emerging interactive cloud-based video applications, such as cloud gaming, AR, VR, or 360° video, will require very low-latency feedback loops, on the order of 20 ms. Supporting these low-latency requirements from centralized data centers will be difficult due to network propagation delay. Instead, edge data centers will be the preferred hosting locations for these next-generation applications and their associated video encoding services. With the rollout of 5G services, mobile network operators are also deploying edge data centers close to 5G radio equipment to support the next generation of interactive or latency-sensitive mobile video applications and services.
Consider the low-latency processing required for a typical interactive video application: a game controller or VR headset movement command is sent into the cloud, the gaming engine renders the next frame for each player on an edge gaming server, that frame is encoded for each player, and the result is sent back to each player’s video display. These applications will place demanding performance requirements on video encoding, with solutions expected to generate UHD resolution for gaming or VR headset displays at ultra-low encoding latency.
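The round trip described above can be framed as a latency budget. The per-stage timings below are hypothetical allocations within the ~20 ms feedback-loop target mentioned in the text; the propagation figure, roughly 200 km per millisecond for light in fiber, shows why a distant centralized DC consumes the budget before any processing happens.

```python
# Rough latency budget for one interactive video round trip, assuming the
# ~20 ms feedback-loop target from the text. Stage timings are hypothetical.

BUDGET_MS = 20.0

stages = {
    "uplink (controller -> edge DC)": 4.0,
    "game engine / rendering":        6.0,
    "video encode (next frame)":      4.0,
    "downlink (edge DC -> player)":   4.0,
    "decode and display":             2.0,
}

total = sum(stages.values())
print(f"total: {total} ms of a {BUDGET_MS} ms budget")
assert total <= BUDGET_MS, "latency budget exceeded"

# Light in fiber covers roughly 200 km per millisecond, so a 1,000 km
# one-way path to a centralized DC costs ~5 ms each way before any
# rendering or encoding work even starts.
one_way_km = 1_000
prop_ms = one_way_km / 200
print(f"{one_way_km} km one way ~= {prop_ms} ms of pure propagation delay")
```

Note how the encode stage gets only a few milliseconds of the budget; this is the "ultra-low encoding latency" requirement that favors edge placement and hardware-assisted encoders.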
Technology Options for Video Encoding
Today, video encoding services are often deployed as software processes running on compute resources. Variations on this architecture include cloud-native software that can be deployed on virtualized or containerized instances in the cloud or in enterprise data centers. While software encoding on CPUs is your most flexible option, it is also likely your most expensive option, requiring the most servers, rack space, power, and cooling, making this encoding strategy undesirable, if not impractical, for edge data center environments. Other options to improve encoding density and reduce power consumption include software encoding solutions that run on graphics processing units (GPUs) or field programmable gate arrays (FPGAs). However, to achieve the density and low power targets of edge data center environments, application-specific integrated circuit (ASIC) encoding solutions are your best option.
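The density and power trade-off can be sketched numerically. The channels-per-rack-unit and watts-per-channel figures below are illustrative assumptions, not measured benchmarks for any product; they simply encode the ordering the text describes, with ASICs densest and most power-efficient and CPU software encoding at the other extreme.

```python
# Hypothetical density/power comparison for a fixed live-channel target.
# Figures are illustrative assumptions, not vendor benchmarks.

options = {
    # name            (1080p live channels per RU, watts per channel), assumed
    "CPU software":   (4,  150),
    "GPU":            (12,  60),
    "FPGA":           (16,  40),
    "ASIC":           (40,  15),
}

TARGET_CHANNELS = 200  # channels an edge site must encode

results = {}
for name, (per_ru, w_per_ch) in options.items():
    rack_units = -(-TARGET_CHANNELS // per_ru)  # ceiling division
    kw = TARGET_CHANNELS * w_per_ch / 1000
    results[name] = (rack_units, kw)
    print(f"{name:13s}: {rack_units:3d} RU, ~{kw:.1f} kW")
```

Under these assumptions, meeting the same channel count with CPU software encoding takes an order of magnitude more rack space and power than ASICs, which is why constrained edge environments push toward dedicated silicon.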
See how you can seamlessly deliver content with CenturyLink’s secure high-performance network.
This blog is provided for informational purposes only and may require additional research and substantiation by the end user. In addition, the information is provided “as is” without any warranty or condition of any kind, either express or implied. Use of this information is at the end user’s own risk. CenturyLink does not warrant that the information will meet the end user’s requirements or that the implementation or usage of this information will result in the desired outcome of the end user.