Between UHDTVs, smartphones, tablets and a plethora of other screens, devices and services through which to consume media, delivering content has become an uphill battle. Consumers increasingly demand a wider variety of content across progressively diverse delivery platforms, putting growing pressure on content owners and broadcasters to re-version, repackage and repurpose media. However, through optimal implementation of open technologies and IT best practice, broadcasters and content owners can not only respond to this demand but also add greater flexibility, efficiency and quality to their workflows and outputs.
Media is transcoded at a number of touch points in the production and distribution process, potentially degrading the source quality with each iteration. The problem is that content is typically encoded and decoded more times than most codecs commonly used by broadcasters today are designed to withstand. From content origination to its eventual destination, a piece of media may now be transcoded as many as twenty times.
These figures reflect the complexity of the broadcast business today. Companies that shoot or produce content aren’t necessarily those who will aggregate it, and those who aggregate content are not always the same as those who create the various accompanying media assets (trailers, promos, etc.). At every step, the file will be encoded, decoded and re-encoded several times. Content destined for overseas distribution, or incoming from foreign producers and broadcasters, may have to undergo yet more transcode steps in preparation for final delivery.
The fact is, media takes a bit of a beating between acquisition and its various outputs, with a significant impact on the technical and subjective quality of the media that the end user eventually sees. But media processing is also CPU (or GPU) intensive, so the brute-force alternative of simply processing everything at the highest possible quality is expensive in terms of infrastructure.
To improve quality while reducing cost, we need to consider how to minimize the number of times media is processed and ensure that the media processing that has to be done is of the highest quality.
For example, creating packages and versions is far more efficient when you have a clear, standardized view of where all the “raw” components of the packages are and can “virtually” assemble and store the versions and packages as metadata, leaving the source media in its original state. In this case, we only re-encode the file at the point of delivery – employing lean, or “just-in-time”, methodology in media workflows.
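To make the idea concrete, here is a minimal sketch of what a “virtual” version might look like as metadata: a handful of references into untouched source files, with the only real encode assembled as a single just-in-time command at the point of delivery. The class names, file paths and the ffmpeg-style command are illustrative assumptions for this example, not a description of any particular MAM product.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SourceRef:
    """A pointer into an untouched source file: no media is copied or re-encoded."""
    path: str       # location of the original source asset
    start: str      # in-point, HH:MM:SS.mmm
    duration: str   # clip length in seconds

@dataclass
class VirtualVersion:
    """A delivery version stored purely as metadata until it is actually needed."""
    name: str
    segments: List[SourceRef] = field(default_factory=list)
    video_codec: str = "libx264"
    raster: str = "1920x1080"

    def render_command(self) -> List[str]:
        """Build the single just-in-time encode, run only at the point of delivery."""
        cmd = ["ffmpeg"]
        for seg in self.segments:
            cmd += ["-ss", seg.start, "-t", seg.duration, "-i", seg.path]
        # Join the trimmed inputs and encode once, straight to the delivery spec.
        n = len(self.segments)
        graph = "".join(f"[{i}:v][{i}:a]" for i in range(n)) + f"concat=n={n}:v=1:a=1[v][a]"
        cmd += ["-filter_complex", graph, "-map", "[v]", "-map", "[a]",
                "-c:v", self.video_codec, "-s", self.raster, f"{self.name}.mp4"]
        return cmd

# A 30-second promo assembled from two sections of the same untouched master.
promo = VirtualVersion(
    name="promo_30s",
    segments=[
        SourceRef("/mam/masters/ep101.mxf", "00:01:10.000", "15"),
        SourceRef("/mam/masters/ep101.mxf", "00:12:00.000", "15"),
    ],
)
print(" ".join(promo.render_command()))
```

Until render_command() is called, the version exists only as a few records in the asset database; the master is never touched, and a change of delivery spec simply means building a different command rather than re-encoding yet another intermediate file.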
This also serves to insulate operators from the complexities of media manipulation and processing, leaving them confident that those automated actions “just happen” and ensuring that all their interactions with media are about making creative choices and applying human judgment to business processes.
Knowing where media came from – tracking the structural and genealogical media metadata – is also critical in automating media processing (speaking of which, attend our next webinar on BPM!) and is a key part of a MAM-driven workflow. With new resolutions, frame rates and codecs constantly emerging – and an increase in crowd-sourced content driving the number and variety of devices used for acquisition – strong media awareness and understanding ensures that the “right” or, more honestly (since any processing will degrade content), “least-worst” media-processing path can be chosen.
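As a rough illustration of what that choice might look like in practice, the sketch below picks a processing path from the structural metadata a MAM would already hold for an asset. The field names and the rules themselves are assumptions made for the example; a real system would weigh many more factors (wrappers, colour space, audio layout, rights).

```python
from dataclasses import dataclass

@dataclass
class MediaInfo:
    """Structural metadata the MAM already tracks for each asset."""
    codec: str            # e.g. "xdcam_hd422", "prores", "h264"
    width: int
    height: int
    frame_rate: float     # frames per second
    generations: int      # encode/decode cycles the file has already been through

def choose_processing_path(src: MediaInfo, target: MediaInfo) -> str:
    """Pick the least-destructive path from source to delivery spec.

    The ordering reflects a simple rule of thumb: avoid touching the
    essence at all where possible, and only fall back to a full
    decode/re-encode when the metadata says there is no choice.
    """
    if (src.codec == target.codec
            and (src.width, src.height) == (target.width, target.height)
            and src.frame_rate == target.frame_rate):
        # Same essence parameters: rewrap the container, no generation lost.
        return "rewrap"
    if src.frame_rate != target.frame_rate:
        # Frame-rate conversion is the most damaging step; flag it explicitly
        # so it is only ever done once, as close to delivery as possible.
        return "transcode_with_standards_conversion"
    # Codec or raster change only: a single, high-quality transcode.
    return "transcode"

# Example: a 25fps XDCAM HD master delivered as 25fps H.264.
master = MediaInfo("xdcam_hd422", 1920, 1080, 25.0, generations=2)
delivery = MediaInfo("h264", 1920, 1080, 25.0, generations=0)
print(choose_processing_path(master, delivery))  # -> "transcode"
```

The point is less the specific rules than the fact that the decision is made automatically, from recorded metadata, rather than by an operator guessing at what a file has already been through.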
Overall, when it comes to delivering the highest image quality, the explosion in acquisition formats makes good asset management more important than ever, as it allows content owners to manage that additional complexity transparently.