How Edge Computing Makes Streaming Flawless
It’s nighttime. You put on your diving suit and enter the cage, a mixture of fear and excitement coursing through your veins. It’s your first night as a deep-sea explorer, and as the cage lowers itself into the cold, dark sea, you can’t help but notice something isn’t quite right. As the cage descends 100 meters, you turn on your helmet’s flashlight and rotate your camera up, down, left and right to get a 360-degree view of your surroundings.
Just as your eyes adjust to the world beneath the waves, the one thing you hoped you wouldn’t see appears in front of you: a great white shark. It charges at full speed and rams into your cage without warning. You lose your balance. Your arm slips through the bars. The shark’s powerful jaws open wide and snap shut on your exposed forearm.
You scream. The cage is trembling violently, and the shark is still trying to clamp down on your arm. Miraculously, though, your real arm is fine, and the real you is far from that shark’s grasp. Everything is the same, except you’re not really in a cage in the water. You are in a 360-degree video game.
Welcome to the world of immersive, interactive media. These next-generation applications bring virtual experiences to life, and doing so demands substantial computing and network resources. This is where edge computing makes a significant impact. 360-degree video is a format that, unlike traditional video, captures content all around the camera. It allows viewers to explore an environment from every angle: front, back, left, right, up and down. Viewers can even explore parts of the scene that the videographer isn’t focusing on at that moment.
Technological advancements in media-creation tools, smart devices and the widespread availability of high-speed internet have facilitated the development of immersive, interactive technologies such as 360-degree videos, interactive multiview streaming, virtual view generation and view switching. These applications are delay-intolerant and require real-time response to maintain users’ quality of experience (QoE). They are also bandwidth-hungry, resulting in escalating bandwidth costs and energy consumption.
The Edge Computing Advantage
With video accounting for over 80% of all internet traffic and live streaming making up 60% of downstream internet traffic, the industry must address bandwidth utilization challenges and escalating bandwidth and energy costs.
Edge computing can help reduce latency and bandwidth costs by bringing processing and storage closer to users, which can result in better live-streaming performance.
Next, let’s get a high-level understanding of a possible live-streaming architecture.
Cameras and microphones send data to an encoder, which publishes the resulting stream to a streaming server (the origin server). Today, viewers typically fetch streams directly from this origin server. That exposes streams to internet congestion, which can increase latency and response times while degrading QoE.
With edge computing, streaming servers are positioned in locations around the world. They ingest streams from the origin server and connect users to the edge server closest to them. Leading cloud providers configure edge servers in last-mile data centers worldwide as part of their content delivery network (CDN) services. Content providers deliver streams to the edge servers closest to the user. Because data flows directly from the edge servers to the users, latency and response times can be reduced.
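The routing step described above can be sketched in a few lines. This is a minimal illustration, not a real CDN implementation; the region names and round-trip times below are hypothetical, and production systems use DNS- or anycast-based steering rather than explicit measurement like this.

```python
# Hypothetical edge regions with measured round-trip times from one viewer.
EDGE_SERVERS = {
    "eu-west": {"rtt_ms": 18},
    "us-east": {"rtt_ms": 95},
    "ap-south": {"rtt_ms": 140},
}

def nearest_edge(measured_rtts: dict) -> str:
    """Return the edge region with the lowest round-trip time."""
    return min(measured_rtts, key=lambda region: measured_rtts[region]["rtt_ms"])

print(nearest_edge(EDGE_SERVERS))  # -> eu-west
```

The viewer connects to whichever edge wins this comparison; the origin only has to feed each edge once.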
Let’s look at a few use cases of how edge computing helps with interactive media and video streaming.
Multiview video consists of multiple video streams captured simultaneously by cameras at different viewpoints. It enables the viewer to watch a video or live event from different angles.
Multiview videos can be transcoded in two ways:
- As a single video, using multiview video coding (MVC), so the user can switch to any available view.
- As individually encoded streams, so the user can switch to a particular view or a subset of the available views. This type of transcoding is called interactive multiview video streaming (IMVS).
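The bandwidth trade-off between the two options can be shown with back-of-envelope arithmetic. The bitrate and view count below are illustrative assumptions, not measured values.

```python
# Illustrative numbers: 6 camera views, each encoded at 8 Mbps.
VIEW_BITRATE_MBPS = 8
TOTAL_VIEWS = 6

# MVC: all views travel together in a single combined stream.
mvc_bandwidth = VIEW_BITRATE_MBPS * TOTAL_VIEWS

# IMVS: only the views the user actually requests are sent.
def imvs_bandwidth(requested_views: int) -> int:
    return VIEW_BITRATE_MBPS * requested_views

print(mvc_bandwidth)      # 48 Mbps, regardless of what the user watches
print(imvs_bandwidth(1))  # 8 Mbps for a single requested view
```

The gap widens with every extra camera angle, which is why MVC strains battery-powered devices while IMVS stays lean.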
With MVC, the user does not experience any switching delay because all views are encoded in a single video. This does increase the size of the videos, however, resulting in higher bandwidth usage for high-quality streaming. Resource-constrained smartphones and other battery-powered devices are ill-equipped to handle these requirements.
In IMVS, each view is individually encoded and streamed, so only the requested views are sent to the user, a distinct bandwidth advantage over MVC. However, the streams are fetched from the content provider’s origin server, which increases latency due to the greater number of hops between the end user and the origin. This can be a deal-breaker for live video.
Moreover, if the origin stream’s bitrate exceeds the network throughput available at the user’s end, quality will suffer, especially for live video.
Providers can use edge technologies to help resolve issues with high bandwidth usage, switching delay and specific bitrate encoding.
Edge Computing and Media Streaming
Since the edge network is generally fewer hops away from the user, requested views can be streamed with minimal delay from edge servers. This can reduce latency compared to streaming directly from the content provider.
Furthermore, if the streaming content at the edge has a higher bitrate than the user can handle due to limited bandwidth or CPU capacity, the edge server can transcode that view to a lower bitrate on the fly, improving the user’s quality of experience.
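The edge-side decision amounts to picking the highest rendition the client can sustain. A minimal sketch follows; the bitrate ladder is an assumption for illustration, since real CDNs configure their own renditions.

```python
# Assumed bitrate ladder (kbps), highest first. Real deployments define their own.
BITRATE_LADDER_KBPS = [8000, 4500, 2500, 1200, 600]

def target_bitrate(client_throughput_kbps: int) -> int:
    """Pick the highest rendition the client's measured throughput can sustain."""
    for bitrate in BITRATE_LADDER_KBPS:
        if bitrate <= client_throughput_kbps:
            return bitrate
    return BITRATE_LADDER_KBPS[-1]  # fall back to the lowest rendition

print(target_bitrate(3000))  # -> 2500
```

Because this check runs at the edge rather than the origin, the downgrade happens one hop from the viewer and takes effect almost immediately.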
A single stream from the edge can also serve multiple users, reducing internet bandwidth costs associated with streaming to users from servers that may be multiple hops away.
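The fan-out savings are easy to quantify. The stream bitrate and audience size below are made-up numbers purely to illustrate the arithmetic.

```python
# Illustrative values: one 5 Mbps live stream, 200 viewers behind one edge.
stream_mbps = 5
viewers_on_edge = 200

# Without an edge, every viewer pulls the stream from the origin.
origin_egress_without_edge = stream_mbps * viewers_on_edge

# With an edge, the origin sends the stream once; the edge fans it out locally.
origin_egress_with_edge = stream_mbps * 1

print(origin_egress_without_edge)  # 1000 Mbps of origin egress
print(origin_egress_with_edge)     # 5 Mbps of origin egress
```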
Free-view videos are generated from MVC videos and allow users to change the view and viewing direction interactively. Each view can be a real view captured by a camera or a virtual view generated from adjacent views and related information. Free-view streams can be generated for 2D and 3D videos, and view generation is typically performed on the user’s device.
However, the increased bandwidth requirements, combined with the additional resources required to generate virtual views, can strain lightweight, battery-powered user devices. This can cause view generation delays, negatively affecting end-user QoE.
With edge computing, providers can offload virtual view generation to edge servers. Virtual view generation can also be made adaptive based on bandwidth and resources at the edge or the client side.
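The adaptive placement described above boils down to a decision rule. The sketch below is hypothetical: the CPU and bandwidth thresholds are assumptions for illustration, and a real system would weigh battery level, codec support and edge load as well.

```python
# Hypothetical rule: synthesize the virtual view on the client only when it has
# spare CPU and enough bandwidth to fetch the adjacent source views; otherwise
# offload synthesis to the edge and stream just the finished view.
def view_generation_site(client_cpu_free: float, client_bandwidth_mbps: float) -> str:
    if client_cpu_free >= 0.5 and client_bandwidth_mbps >= 20:
        return "client"  # device synthesizes the view from adjacent streams
    return "edge"        # edge synthesizes; client receives a single view

print(view_generation_site(0.7, 50))  # -> client
print(view_generation_site(0.2, 50))  # -> edge
```

Either way, the lightweight device never renders more than it can afford, which is the point of making view generation adaptive.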
With the increasing number of live video streams over the internet and the performance benefits that edge computing brings to an interactive media and video streaming platform, jumping on the edge computing bandwagon can be a logical choice for content and CDN providers.