Jérôme Vieron, PhD – Director of Research & Innovation for ATEME
Global viewing of streaming OTT video content more than doubled last year and industry commentators predict continued accelerated growth across 2018. However, with this kind of success comes increased competition as more and more businesses look to take advantage of the buoyant market.
The number one differentiator will always be the content provided. Whereas before we learnt to love the only content on offer, now we can pick and choose, and as a result the competition to provide the most compelling movies and box-sets is fierce. There is a further game-changer too: viewers also have high expectations of the quality of service provided, and these have been raised even further by new formats including 4K/UHD, HDR and HFR.
With this in mind, it’s not surprising that operators and technology vendors alike are focusing on ways to further enhance content delivery. This demand is powering a continuous string of innovations, especially in streaming.
The most recent of these developments embraces artificial intelligence (AI). Once the subject of science fiction, AI is now being used by global tech giants such as Amazon and Google to predict the behaviour of their users, with health organisations like the NHS also exploring the technology to help alleviate pressure on their doctors and nurses.
The broadcast industry is also seeing more widespread adoption of AI, which is now being used to analyse thousands of assets as part of the streaming process. In doing so, AI has been shown to save operators around 30% of content delivery costs, while also improving the quality of that delivery.
Most operators now find that traditional streaming can result in buffering and other delays. Research by Conviva shows that, while watching a half-hour show, the average viewer spends less than 18 seconds waiting for video to re-buffer; yet even this short time is too long when consumer expectations are high and the market so competitive.
The industry’s current answer is adaptive streaming and its successor, content adaptive streaming. Adaptive streaming works by detecting a user’s bandwidth and CPU capacity in real time and adjusting the quality of a video stream accordingly. Although widely used, this approach means that for roughly half the content the bitrate will be too high, and for the other half too low. If it’s too high, the stream may stall; if it’s too low, quality falls short of what the connection could support. Either way, the content is never fully optimised.
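The bandwidth-driven half of this logic can be sketched in a few lines. The bitrate ladder and the safety margin below are illustrative assumptions, not any real player’s configuration:

```python
# Minimal sketch of client-side adaptive bitrate selection.
# The profile ladder and 80% safety margin are hypothetical values.
PROFILES_KBPS = [400, 800, 1500, 3000, 6000]

def pick_profile(measured_bandwidth_kbps, margin=0.8):
    """Choose the highest profile fitting within a fraction of the
    measured bandwidth; fall back to the lowest profile otherwise."""
    budget = measured_bandwidth_kbps * margin
    candidates = [p for p in PROFILES_KBPS if p <= budget]
    return max(candidates) if candidates else PROFILES_KBPS[0]
```

Note that nothing here looks at the video itself – the same ladder is applied whether the scene is a static interview or a fast action sequence, which is exactly the shortfall described above.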
As a result, industry pioneers such as Netflix have been working on remedying this shortfall. Netflix has been leading the way with per-title encoding and even recently announced per-shot encoding, but these are proprietary technologies and not available to other operators.
Recognising this gap, other developers have been working on technology that adjusts bitrates based on the complexity of the content rather than just the internet connection. The result is content adaptive streaming, which uses AI to compute all the necessary information, such as motion estimation, to make intelligent allocation decisions. Using a variable bitrate to reach constant quality allows bits to be saved when complexity drops on slow scenes – using fewer profiles on easier content.
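The allocation idea can be illustrated with a toy mapping from scene complexity to bitrate. The 0-to-1 complexity score and the linear mapping are assumptions for the sake of the sketch; a real encoder would derive complexity from motion estimation and other analysis:

```python
# Illustrative sketch: spend fewer bits on easy scenes and more on
# complex ones, at the same target quality. The score range and the
# linear mapping are hypothetical simplifications.
def allocate_bitrate(complexity, min_kbps=500, max_kbps=6000):
    """Map a 0..1 scene-complexity score to a bitrate, clamped to range."""
    complexity = max(0.0, min(1.0, complexity))
    return min_kbps + complexity * (max_kbps - min_kbps)
```

A slow dialogue scene scoring near zero would be encoded close to the floor, while a high-motion scene would claim the full budget – variable bitrate in service of constant quality.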
The traditional approach is to keep chunks at fixed lengths. The ecosystem usually requires each chunk to start with an I-frame so that profile switches can occur between chunks, but with fixed-size chunks this forces arbitrary I-frame placement. A scene cut just before a chunking point therefore results in major compression inefficiency, as the new image is encoded from scratch twice in quick succession – once at the cut and again at the chunk boundary.
Content adaptive streaming combines a scene cut detection algorithm in the video encoder with rules to keep chunk size reasonable and minimise drift, in order to prepare the asset for more efficient packaging. This not only brings cost saving benefits due to reduced traffic, storage and other overheads, but also improves the quality of experience for the consumer.
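The chunking rules described above can be sketched as a simple boundary-placement pass. The minimum and maximum chunk durations, and the list of scene-cut timestamps, are hypothetical inputs; a production packager would work from the encoder’s actual scene-cut detection:

```python
# Toy sketch of dynamic chunking: align chunk boundaries with scene
# cuts where possible, while keeping each chunk's duration between
# min_len and max_len so segment sizes stay reasonable.
def chunk_boundaries(scene_cuts, duration, min_len=2.0, max_len=6.0):
    """Return chunk boundary timestamps (seconds) for an asset."""
    boundaries, last = [], 0.0
    for cut in sorted(c for c in scene_cuts if 0 < c < duration):
        if cut - last < min_len:
            continue  # cut too close to previous boundary: merge chunks
        while cut - last > max_len:
            last += max_len           # force a boundary mid-scene so no
            boundaries.append(last)   # chunk exceeds the maximum length
        boundaries.append(cut)        # boundary lands on the scene cut,
        last = cut                    # so the I-frame is not duplicated
    while duration - last > max_len:
        last += max_len
        boundaries.append(last)
    return boundaries
```

Because boundaries snap to scene cuts, the I-frame that the cut requires anyway doubles as the chunk’s entry point, avoiding the double-encoding penalty of fixed-size chunking.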
Content adaptive streaming solutions have been developed with interoperability in mind, so individual parameters such as dynamic chunking can be turned on and off. Operators also have the option to use the specific resolutions they want, even if these appear to be suboptimal to the system.
Any development that enhances quality while cutting overheads demands further investigation. It represents a win for the operator and a win for the viewer too – which all suggests that content adaptive streaming could be the future method of choice.