Faroudja’s Bit Rate Reducer Gambit

There's no end in sight for the consumer's insatiable appetite for bandwidth. That demand continues to stress transmission networks and storage systems throughout the world.

But what if someone said there's a way to significantly reduce the bit rate requirements of video content -- beyond what current compression schemes achieve -- without damaging the image quality?

That would make people sit up and listen -- especially when that someone is Yves Faroudja, a legend in the video industry. Faroudja has been behind major video processing and scaling technology developments for decades and is a recipient of three technology Emmy awards.

So, of course, everyone in Las Vegas at this year's NAB (National Association of Broadcasters) Show listened when Faroudja Enterprises, a privately funded startup, trotted out its new technology, a video bit rate reducer designed to provide up to 50 percent reduction in video bit rates without reduction in image quality.



In a hotel suite at the NAB show in Las Vegas, Faroudja Enterprises showed off its new technology, a video bit rate reducer.


Faroudja's scheme doesn't alter current compression standards (MPEG-2, MPEG-4, HEVC). It's rooted in Faroudja's belief that such compression systems aren't using all the available redundancies to improve compression efficiency.

Under the new scheme, Faroudja introduces a new pre-processor (prior to compression) and post-processor (after compression decoding). "We take an image and simplify it; and that simplified image goes through the regular [standards-based] compression process," he explained. "But we never throw away information."

Instead, in parallel with the conventional compression path, Faroudja inserts what he calls a "support layer." This compresses signals not used in Faroudja's so-called simplified image. Together with the decompressed simplified image, the support layer helps reconstruct the original image in full resolution -- at a reduced bit rate.
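The algorithms themselves remain proprietary, but the data flow Faroudja describes can be sketched. Below is a minimal numpy illustration of the two-path idea: a stand-in pre-processor splits each frame into a simplified image (which would go to a standard codec) and a residual support layer, and the post-processor recombines them. The box-blur simplify() is purely an assumption for illustration, not Faroudja's undisclosed method.

```python
# Conceptual sketch only: Faroudja's actual pre/post-processors are
# undisclosed; the box blur below is a stand-in "simplify" step.
import numpy as np

def simplify(frame: np.ndarray) -> np.ndarray:
    """Stand-in pre-processor: a 2x2 box blur that strips fine detail."""
    f = frame.astype(np.float64)
    return (f + np.roll(f, 1, axis=0) + np.roll(f, 1, axis=1)
            + np.roll(np.roll(f, 1, axis=0), 1, axis=1)) / 4.0

def pre_process(frame: np.ndarray):
    """Split a frame into a simplified image plus a residual support layer."""
    base = simplify(frame)
    residual = frame.astype(np.float64) - base   # "we never throw away information"
    return base, residual   # base -> standard codec; residual -> support layer

def post_process(base: np.ndarray, residual: np.ndarray) -> np.ndarray:
    """Recombine the decoded simplified image with the support layer."""
    return base + residual

frame = np.random.randint(0, 256, (1080, 1920)).astype(np.float64)
base, residual = pre_process(frame)
restored = post_process(base, residual)
assert np.allclose(restored, frame)   # full-resolution image reconstructed
```

The bet, in effect, is that a simplified image plus an easily compressed residual costs fewer total bits than coding the original image directly.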



System overview shows an example of how Faroudja Enterprises' process works when applied to 4K video.


Faroudja claims "a bit rate reduction of 35% to 50% for an equivalent image quality [and] a significant improvement of the image quality on low bit rate content."

Details, of course, remain tightly wrapped in the black box. Faroudja hasn't decided on the initial market focus for his new technology. He said he's keeping his options open for a business model -- for now. Faroudja's team, however, has completed proof-of-concept and done a public demonstration.

Potential markets for the new technology include broadcast, cable and satellite, streaming media over the Internet, or Skype-like video conferencing.

Faroudja Enterprises is about to embark upon "market implementation" of the technology "over the next couple of months" by tailoring it to market demands, Faroudja told EE Times in an interview this week.

Jon Peddie, president of Jon Peddie Research, said no one else has rolled out anything similar to what Faroudja has done. "Faroudja is not displacing any company or technology. They are augmenting the existing codecs, including H.265," said Peddie.

Potential competitors, though, are many. Richard Doherty, research director at Envisioneering Group, said future competitors are "universities with strong analytical math departments," including Cambridge, Oxford, Harvard, MIT, Columbia, Stanford, Princeton, Georgia Tech, U.C. Berkeley, Purdue, and the Fraunhofer Institute.

Doherty predicted, "Once this is known to work, other low-key efforts might get ramped up." They'd all be racing to "receive funding to uncover other methods which do not infringe on Yves's."


Market Acceptance?
Faroudja's video bit rate reducer has many advantages. It's compatible with existing standards: the process is compression-standard agnostic, applying to everything from MPEG-2 to HEVC, and it reduces bit rate in all cases, the company claims.

Faroudja explained that a final image is preserved even if just the pre-processor is used, without the post-processor; the image survives even if the support layer is interrupted. Images at two levels of resolution can also be made available at the same time -- for example, simultaneous availability of a program in both 1080-line and 4K resolutions, according to the firm.

Moreover, Faroudja's support layer "can further be configured as a transcoder" to help convert video among existing compression formats, such as MPEG-2, H.264, etc., to and from Faroudja's support layer format "to save bandwidth, bit rate, or file size in the cloud without sacrificing image quality."


The Industry Reaction
After the NAB show, Faroudja reported that his company's technology demonstration in a Las Vegas hotel suite produced "a lot of very positive feedback" from audience members in the high-end video business, including broadcasters and cable and satellite operators.

However, still missing are "responses from streaming Internet companies and those with Skype-like teleconferencing technologies. We will know more [about the target market] after we attend the Streaming Media East conference in New York in May."

Asked about potential business and technology hurdles, Envisioneering's Doherty said, "If [the technology is] as simple as Yves suggests, not many. It would be a slam dunk for satellite, wireless, fiber, cable, anything." Asked who would embrace this first, Doherty predicted that bandwidth-constrained backhauls -- broadcast, cable, satellite, and cellular -- will benefit most.


Highly Dependent on Content
Tom McMahon, a video expert at Del Rey Consultancy, pointed out that the industry still needs to see how the technology works in a variety of scenarios -- fast-motion video, noisy images, lower-resolution video, video streamed at a lower bit rate -- and how much gain it can achieve.

He noted that "a bunch of experts" discussed similar layered approaches a decade ago when the industry was still developing the H.264 standard. Dolby Laboratories worked on a similar scheme, he said, but the technology was never implemented. "The issue [of such an approach] is that it is highly dependent on content." However, he quickly added that service providers intent on bandwidth conservation and delivery of better quality video at lower bit rates are hungry for a technology such as Faroudja's. It's also applicable in smart encoder designs.

Satellite operators such as DirecTV, operating in a controlled environment, might be interested, said McMahon. Even next-generation set-tops or TVs/media boxes from Google or Apple could take advantage of meta-data offered in a support layer such as Faroudja's, he added.

In the end, market acceptance will depend on how much investment is needed for Faroudja's system (pre- and post-processing, and the inclusion of a support layer).

Jon Peddie pointed out that one of the initial hurdles in garnering market acceptance is "getting people to 'grok' it." He called it a "double-edged sword," because it is "black magic until Yves goes into detail." That's something Faroudja "doesn't want to do with anyone who isn't willing to commit to it. Anyone selling IP has to walk that balancing act: disclosure vs. trust/risk."

Peddie predicted that telecoms wanting to compete with cable and satellite will be among the first to embrace Faroudja's video bit rate reducer.


Complexity
Faroudja explained that the processing power necessary for Faroudja Enterprises' process is roughly a "doubling of H.264." Most of the complexity, he noted, is in the pre-processor side.

In the technology demonstration, Faroudja used a multiple-unit rack system including off-the-shelf GPU boards. "We probably used 10 to 15 percent of the hardware [processing power] we had."

Asked if the technology is applicable to the consumer market -- including TVs, mobile phones, tablets -- he said it will be in the future, if it's licensed to chip vendors. However, he added that the process's present form isn't yet consumer-ready.

Faroudja hasn't exactly decided on the company's business model. Options he's considering include making the technology available in hardware, in software, or through licensing. He said his company can sign licensing agreements with system vendors, license IP to silicon vendors, or make its software downloadable onto customers' hardware.

Faroudja Enterprises, which has developed its latest technology in complete secrecy over the last year, already has two patents granted and six more pending.

Faroudja's status as an inventor over age 65 helped grease the skids for the patents: applicants of that age are entitled to accelerated examination, a fact that surprised Faroudja himself. Briefly assuming the aspect of an absent-minded professor, he said, "I didn't know that."

By Junko Yoshida, EE Times

HEVC Demonstrates its High Efficiency

HEVC Version 1 has demonstrated more than 50% bitrate savings compared to MPEG-4 AVC/H.264, the MPEG standards group announced following its 108th meeting, held at the beginning of April in Valencia, Spain.

In a verification test campaign for the HEVC (ITU-T Rec. H.265 | ISO/IEC 23008-2) compression standard, following the finalisation of the standard last year, “a formal subjective quality assessment has been executed using a large variety of video material, ranging from wide-screen VGA resolution up to 4K. The material had not previously been used in optimizing HEVC's compression technology”, MPEG announced in a prepared statement, adding: “Clear evidence was found that HEVC is able to achieve 50% bitrate savings and more, compared to the AVC High Profile”. The results will be made publicly available in report N14420, to be published on the MPEG website.
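As a back-of-the-envelope illustration of what a 50% bitrate saving means in practice (the input bitrate below is a hypothetical figure, not one from MPEG's tests):

```python
# Hypothetical example: a 1080p stream encoded with AVC High Profile at
# 8 Mbps needs roughly half that bitrate in HEVC for equal quality.
avc_mbps = 8.0
hevc_mbps = avc_mbps * (1 - 0.50)            # "50% bitrate savings and more"
movie_gb = hevc_mbps * 2 * 3600 / 8 / 1000   # two-hour movie, in gigabytes
print(f"HEVC: {hevc_mbps:.1f} Mbps, ~{movie_gb:.1f} GB per two-hour movie")
```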

HEVC's scope continues to be extended with a call for proposals on Screen Content Coding, the compression of video containing rendered graphics and animation. This extension is scheduled for completion by the end of 2015.

HEVC's 2nd edition includes support for additional colour formats and higher precision. The ‘Range Extensions amendment’, with technology allowing efficient compression of video content with colour sampling formats beyond 4:2:0 and up to 16 bits of processing precision, has been finalised. In particular, the lossless and near-lossless range of visual quality is compressed more efficiently than is possible with the current version 1 technology.

Web Video Coding, the standard for a worldwide, reasonable and non-discriminatory, free-of-charge licensable compression scheme for use in browsers, has reached the final stage before approval. MPEG expects to complete the Final Draft International Standard in February 2015.

MPEG is also working on standardising what it refers to as ‘free-viewpoint television’. A public seminar on FTV (Free-viewpoint Television) will be held on July 8th to align MPEG's future standardization of FTV technologies with user and industry needs. Targeted application scenarios are Super Multiview Displays, “where hundreds of very densely rendered views provide horizontal motion parallax for realistic 3D visualization, extracted from a dense or sparse set of input views/cameras in a circular or linear arrangement”; Integral Photography, “where 3D video with both horizontal and vertical motion parallax are captured for realistic display”; and Free Navigation, which “allows the user to freely navigate or fly through the scene, not just along predefined pathways”.

MPEG expects that future FTV systems will require new functionalities such as a substantial increase in coding efficiency and rendering capability compared to technology currently available. The FTV initiative will also consider novel means for acquiring 3D content that have recently emerged, e.g. plenoptic and light field cameras. You are invited to join the FTV seminar to learn more about MPEG activities in this area and to help revolutionise the viewing experience.

The increases in spatial and colour resolution, scalable coding, and autostereoscopic 3D (in MPEG-speak, Multi-view) lead to amendments in the trusty old MPEG-TS transport stream layer. The amendment specifies transport of layered coding extensions for the scalable and multiview enhancements of HEVC, and the signaling of associated descriptors so that different layers can be encapsulated and transported individually.

To support new 4K/8K UHDTV services built on the newly developed MPEG Media Transport (MMT) standard, an effort to promote the new transport layer standard has begun in Japan, with a growing number of companies implementing MMT for various applications. MPEG is organising an MMT Developers' Day in conjunction with its 109th meeting this July in Sapporo, Japan.

MPEG is also working on 3D audio and dynamic range control. The DRC system provides comprehensive control to adapt the audio as appropriate for the particular content, the listening device, environment, and user preferences. The loudness control can be applied to meet regulatory requirements and to improve the user experience, especially for content with large loudness variations.
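As a simple illustration of the loudness-control use case (this is not MPEG's DRC algorithm, the measured value below is invented, and real loudness measurement follows the ITU-R BS.1770 gated algorithm):

```python
# Toy loudness normalisation: compute the static gain that moves a
# programme's measured loudness to the EBU R 128 target of -23 LUFS.
measured_lufs = -18.4          # hypothetical programme loudness
target_lufs = -23.0            # EBU R 128 broadcast target
gain_db = target_lufs - measured_lufs
print(f"apply {gain_db:+.1f} dB gain")   # -4.6 dB for this example
```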

By Donald Koeleman, Broadband TV News

LG Electronics Expands 'Second Screen' TV Ecosystem with Open-Source, Multi-Platform 'Connect SDK'

LG Electronics is making Connect SDK, an open source software development kit, available to Google Android and iOS developers to extend their mobile experience across tens of millions of big TV screens around the world.

By unifying device discovery and connectivity across multiple television platforms, Connect SDK is the first to truly address the complexity associated with implementing second screen capabilities while reaching the largest installed base of smart TVs and connected devices.

For consumers, this means that LG's new Smart TVs, powered by the webOS platform, as well as other popular TV devices, will be able to connect and interact with more mobile apps – further enhancing their second screen television experience.



Using Connect SDK, application developers can connect their mobile applications with 2014 LG webOS Smart TVs, LG Smart TVs from 2012 and 2013, Roku, Google Chromecast, and Amazon Fire TV devices (see the sketch after this list):

  • Mobile applications with photos, videos, and audio can beam media to four TV platforms. Applications with YouTube videos can beam them to all LG Smart TVs dating back to 2012, Roku 3, Google Chromecast, Amazon Fire TV, and most DIAL-enabled devices.
  • Developers can build TV-optimized web applications and media viewers and use them across LG webOS Smart TVs and Google Chromecast.
  • TV application developers can use their mobile apps to promote the existence of their TV app on 2014 LG webOS Smart TVs and 2013 LG Smart TVs, as well as Roku devices.
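The second-screen flow these capabilities enable is essentially discover, connect, then beam. Connect SDK itself ships as Android (Java) and iOS (Objective-C) libraries; the Python below is only a conceptual sketch of that flow with invented names, not the SDK's real API.

```python
# Conceptual sketch of unified discovery-and-beam; all names here are
# illustrative fakes, not Connect SDK's actual (Java/Obj-C) API.
class FakeDevice:
    def __init__(self, name, capabilities):
        self.name = name
        self.capabilities = set(capabilities)

    def connect(self):
        return self

    def supports(self, capability):
        return capability in self.capabilities

    def play_media(self, url, mime_type):
        print(f"{self.name}: playing {mime_type} from {url}")

def scan():
    """Stand-in for unified discovery across DIAL, Cast, webOS, Roku, etc."""
    return [FakeDevice("Living-room TV", {"media.play", "web_app.launch"}),
            FakeDevice("Bedroom stick", {"media.play"})]

# One code path serves every platform: filter by capability, then beam.
for device in scan():
    session = device.connect()
    if session.supports("media.play"):
        session.play_media("http://example.com/video.mp4", "video/mp4")
```

The point of the abstraction is that the app asks for a capability ("can this device play media?") rather than coding separately against each TV platform's protocol.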
 Source: LG

Mezzanine Film Format Plugfest

The adoption of Digital Cinema projection has opened new opportunities for the distribution of films, and the digitization of commercial film collections and film heritage is developing rapidly. Up to now, however, there has been no specific format for the interoperable exchange and conservation of cinematographic works at the highest required quality.

In February 2011, at the request of the French Centre National du Cinema, the Commission Supérieure Technique started to work on the technical recommendation CST-RT021. The goal of this recommendation is to ensure that the commercial exploitation of digitized cinematographic works on modern digital distribution channels is made possible with the required quality.

The first version of the CST-RT021-MFFS specification has been published. It has been designed to be consistent with standards currently in development, specifically the SMPTE Interoperable Master Format (Technical Committee 35 PM). The expert group led by the CST is inviting film laboratories, audio-visual equipment manufacturers and all interested parties to participate in a Plugfest to test the specification.

The event will allow testing of the wrapping, image and sound encoding, and colour coding detailed in CST-RT021-MFFS, in order to ensure the interoperability of commercial solutions.

Source: ETSI

DASH Talks NAB 14

During the 2014 NAB show in Las Vegas, the DASH Industry Forum organized a two-hour session with nine presentations giving the latest technical, business and deployment updates on MPEG-DASH. Speakers represented companies from across the DASH ecosystem, and a video replay of the event is available.

DPP Launches a Producer’s Guide to File Delivery

As the UK broadcast industry moves towards full digital delivery, the DPP has launched A Producer’s Guide to File Delivery, a complete handbook that explains – from a production point of view – all that is involved and required in the new process.

The guide is published six months out from the 1st October, the date UK broadcasters will move to full digital delivery, and is a step-by-step guide to the process. The handbook includes guidance on the key stages of file delivery:
  • Completing the Programme – final Video and Audio
  • Creating MXF Files
  • QC Checks – Eyeball tests and Automated QC
  • PSE Checks
  • Adding DPP Metadata
  • Delivering DPP Standard Programme File
  • Late Changes before TX
Source: DPP

Content Versioning is Out of Control

At the Technology Summit on Cinema at NAB, Walt Disney's Howard Lukk said there can now be a total of 35,000 possible versions of a movie that have to be generated to serve all the possible ways the movie can be seen. This is apparently based on a permutation (or multiplication) of all of the variables involved in creating a particular version.

This presumably means cinematic versions, packaged media versions and versions for cable, satellite, broadcast and internet distribution. While a number this high may be possible, it is also unlikely; even so, the count runs into the thousands and represents a huge challenge for the industry.
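A quick multiplication shows how fast such counts grow. The per-category numbers below are invented for illustration; Disney did not publish a breakdown of the 35,000 figure.

```python
# Hypothetical versioning axes; every count here is illustrative only.
axes = {
    "languages/dubs": 40,
    "aspect ratios": 3,
    "picture formats (theatrical, UHD, HD, SD)": 4,
    "audio mixes": 5,
    "distribution channels": 5,   # cinema, packaged, cable/sat, broadcast, internet
}
total = 1
for count in axes.values():
    total *= count
print(total)   # 12000 -- a handful of plausible axes already reaches the thousands
```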

For theatrical distribution, for example, Walt Disney's Leon Silverman said the studio typically needs to create over 100 masters of each film, including versions with different audio mixes, different languages and different platform-specific needs.

To illustrate, he described two movies in his talk. The recent movie "Planes", for example, had 126 different masters, with one of the plane characters having a different name and artwork depending on the country where it was screened.

He also showed a clip from the movie "Frozen" playing the song "Let It Go" in which every verse was sung in a different language. There must have been 40 different languages, each performance by a different singer, yet they were blended perfectly to sound seamless. Incredible! He also lamented the versioning required to market a film, with dozens of thumbnail images that must be used on various web sites.

Overall, he described what he called the “new post post world order,” which he said has changed the landscape for just about every aspect of the way movies are made today. He started his talk by noting that the workflow of cinema is increasingly being merged with the TV production workflow and that it may be hard to tell the difference in the future. He then gave details on ways the industry is complex (versioning being one aspect), connected, global and secure. While the title of the session was “From Camera to Consumer”, he renamed it “From Camera to Netflix”.

Filmmakers must work in a connected, networked global environment, but he did not seem particularly concerned about technology being able to handle the needs going forward. Security is more of a concern for Disney, with isolated networks, multiple security protocols and audits done to help protect its IP. Success or failure here can have huge impacts on the studio and on careers as well.

His description of what is needed was so incredible it led others on the panel to hope they never had his job.

How one gets to 35,000 versions is still a little unclear, but the count presumably includes all the different aspect ratios, video formats, audio formats, broadcast formats and Internet formats, and may include encoding formats and all of those variables as well. While a studio would not necessarily have to produce all those versions, someone somewhere would, adding enormous overhead to the process.

One solution to the format issue is a project also described at the event: IMF (Interoperable Master Format). This industry-wide effort, started by the major studios, began as a business-to-consumer counterpart to the business-to-business cinema formatting standards effort now called DCI (Digital Cinema Initiatives). Speakers from Disney, Sony and Fox described their efforts to create a SMPTE standard (now issued) and to implement the format at their studios.

The basic idea is to have a "core framework" that consists of the main visual aspects, plus a series of "modular applications" that plug in to the format and add specific functionality, such as codecs, specific resolutions and frame rates, and other aspects. This is all managed by a Composition Play List (CPL), which allows the generation of localized versions from a single file format.
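A toy data model makes the mechanism concrete: essence (the track files) is stored once, and each version is just a small play list of references. The structure below is a simplification with invented field names; the real CPL is a SMPTE-defined XML document.

```python
# Simplified model of IMF's referencing scheme; the real CPL is XML.
from dataclasses import dataclass

@dataclass
class Segment:
    track_file: str     # shared essence file, stored once
    start_s: float      # offset into the track file, in seconds
    duration_s: float

@dataclass
class CompositionPlayList:
    title: str
    video: list
    audio: list

# A localized version swaps in only the dubbed audio and one reworked
# shot; it never duplicates the full video essence. Timings are invented.
fr_version = CompositionPlayList(
    title="Example Feature (FR)",
    video=[Segment("video_main.mxf", 0.0, 5400.0),
           Segment("insert_fr_artwork.mxf", 0.0, 4.0)],
    audio=[Segment("audio_fr_51.mxf", 0.0, 5404.0)],
)
print(f"{fr_version.title}: {len(fr_version.video)} video segments referenced")
```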


Basic structure of IMF package


While IMF doesn’t reduce or eliminate versioning needs, it does create a file structure that is much more efficient in the way versions are created, and it has a huge impact on the storage space needed for all the versions. Both Sony and Fox cited incredible storage savings (on the order of 25X) for projects they have initiated using IMF.

Sony’s project, for example, was to create 100 UHD-resolution titles for use in the rollout of its UHD/4K TVs, which it has now done. For Sony, this meant going back to the original masters of each film and remastering the finished film in UHD resolution in the xvYCC/Rec. 709 color space, encoded in YCbCr using IMF App 2 (broadcast profile level 5) at a 400 Mbps average and 800 Mbps maximum rate.

As shown in the graphic, Sony has now created IMF versions of 104 feature films and 140 TV episodes, and the space savings are incredible: the uncompressed versions of these films total a whopping 1,001 TB, while the IMF versions occupy only 43 TB.
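Those figures are broadly consistent with the quoted encoding rates. A quick sanity check, assuming typical runtimes (two hours per feature and 45 minutes per episode, both my assumptions):

```python
# Back-of-the-envelope check of the 43 TB figure at a 400 Mbps average.
avg_mbps = 400
feature_tb = avg_mbps * 1e6 * 2 * 3600 / 8 / 1e12   # ~0.36 TB per 2 h feature
episode_tb = avg_mbps * 1e6 * 45 * 60 / 8 / 1e12    # ~0.14 TB per 45 min episode
total_tb = 104 * feature_tb + 140 * episode_tb
print(f"~{total_tb:.0f} TB")   # ~56 TB, the same order as the 43 TB reported
```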

By Chris Chinnock, Display Central

Ultra HD in 2014: A Real World Analysis

An interesting white paper about UHD by Harmonic.

Vantrix Open Sources Free HEVC Encoder

Vantrix announced the creation of an open source H.265 encoder, calling it the F265 project. The project aims to accelerate the industry-wide development and adoption of H.265, also known as High Efficiency Video Coding (HEVC).

H.265 is the successor to the industry-standard H.264 codec used for video compression. The new specification, ratified in 2013, provides double the data compression ratio of H.264 while ensuring the same level of visual quality. It is expected to be a major driver in the adoption of 4K Ultra High Definition and beyond by reducing the transmission bandwidth required versus current standards.

Vantrix's F265 encoder will be licensed under the OSI BSD terms, enabling access to source code, free redistribution, and derived works. The intent is to encourage both researchers and commercial entities to contribute to the refinement and evolution of the code to accelerate the implementation of both software and hardware systems.

The project will initially target high quality offline encoding, but will not be limited to this scope. It is designed from the ground up to maximize quality and performance in offline and real time scenarios using recent hardware technology and interfaces such as the Intel AVX2 instruction set.

The F265 project site is in the process of being finalized and those interested in being notified of its availability can sign up at www.vantrix.com/f265.

Source: Vantrix

EBUCore v1.5 Includes the New EBU Audio Definition Model (ADM)

The new version of EBUCore can be downloaded as EBU Tech 3293 v1.5. Thanks to the efforts of the metadata experts in the EBUCore developer and user community, the new version of the EBU's metadata flagship includes several enrichments.

The most prominent update probably is the integration of the recently published EBU Audio Definition Model (ADM) (EBU Tech 3364). The ADM provides a complete set of technical and informative metadata to describe a file's audio content.


Graphical representation of the EBU ADM


It is designed not only to support current channel-based audio configurations such as 5.1 and 15.1, but also to be ready for future formats, by using ADM extensions. The ADM is shared with other standards organisations, such as the AES, AMWA/EBU FIMS, ISO/IEC MPEG, ITU, SMPTE and W3C.

All EBUCore additions are clearly documented in the new version. Special attention has been paid to simplifying the specification, by focussing it on examples of implementations. A 'Download Zone' chapter provides links to the related Schema, including semantic technology in the form of the updated EBUCore RDF/OWL ontology.

Source: EBU

Microsoft Smooth Streaming Client 2.5 with MPEG DASH Support

The PlayReady team, working in conjunction with the Windows Azure Media Services team, is pleased to announce the availability of the Microsoft Smooth Streaming Client 2.5 with MPEG DASH support.

This release adds the ability to parse and play MPEG DASH manifests in the Smooth Streaming Media Engine (SSME), providing a Windows 7/Windows 8 and Mac OS solution using MPEG DASH for on-demand scenarios.

Developers who wish to move content libraries to DASH have the option of using DASH in places where Silverlight is supported. The existing SSME object model forms the basis of DASH support in the SSME. For example, DASH concepts like Adaptation Sets and Representations have been mapped to their logical counterparts in the SSME.

Also, Adaptation Sets are exposed as Smooth Streams and Representations are exposed as Smooth Tracks. Existing Track selection and restriction APIs can be expected to function identically for Smooth and DASH content.

In most other respects, DASH support is transparent to the user and a programmer who has worked with the SSME APIs can expect the same developer experience when working with DASH content.

Some details on the DASH support compared to Client 2.0:

  • A new value of ‘DASH’ has been added to the ManifestType enum. DASH content that has been mapped into Smooth can be identified by checking this property on the ManifestInfo object. Additionally the ManifestInfo object’s version is set to 4.0 for DASH content.
  • Support has been added for the four DASH Live Profile Addressing modes: Segment List, Segment Timeline, Segment Number, and Byte Range.
  • For byte range addressable content, segments defined in the SIDX box map 1:1 with Chunks for the track.
  • A new property, MinByteRangeChunkLengthSeconds, has been added to Playback Settings to provide SSME with a hint at the desired chunk duration.
  • Multiple movie fragments will be addressed in a single chunk such that all but the last chunk will have a duration greater than or equal to this property. For examples of how to set Playback Settings see the Smooth documentation.

There are some limitations in this DASH release, including:
  • Dynamic MPD types are not supported.
  • Multiple Periods are not supported in an MPD.
  • The EMSG box is not supported.
  • The codec and content limitations that apply to Smooth similarly apply to DASH.
  • Seeking is supported, but not trick play. Playback rate must be 1.
  • Multiplexed streams are not supported.
If you have feature requests, or want to provide general feedback, please use the Smooth Streaming Client 2.5 forum.

Source: Microsoft

DPP Launches Quality Control Guidelines

The DPP has released a set of standardised UK Quality Control Requirements to help producers carry out the QC checks necessary to ensure they deliver broadcast-quality files that meet the required standards.

The guidelines, which are published six months out from the 1st October, when UK broadcasters will make the move to full digital delivery, have been produced in collaboration with the EBU.

The EBU’s Strategic Programme for Quality Control is currently defining all possible QC criteria as well as guidance on their implementation.  The DPP has taken the EBU definitions and created a minimum set of tests and tolerance levels required for UK broadcasters.

Included in the new guidelines are AS-11 file compliance checks and Automated Quality Control tests for Video and Audio; examples are loudness levels and freeze frames.
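To make the automated tests concrete, here is a minimal sketch of one of them, freeze-frame detection, which flags runs of near-identical frames. The thresholds are arbitrary choices of mine; production QC tools use far more robust detectors.

```python
# Naive freeze-frame detector: report runs of near-identical frames.
import numpy as np

def find_freezes(frames, max_mean_diff=0.5, min_run=12):
    """Return (start_index, run_length) for each detected freeze."""
    runs, start = [], None
    for i in range(1, len(frames)):
        diff = np.abs(frames[i].astype(float) - frames[i - 1].astype(float)).mean()
        if diff < max_mean_diff:
            start = i - 1 if start is None else start
        else:
            if start is not None and i - start >= min_run:
                runs.append((start, i - start))
            start = None
    if start is not None and len(frames) - start >= min_run:
        runs.append((start, len(frames) - start))
    return runs

# 25 identical frames starting at index 10 -> one freeze reported there.
clip = [np.full((72, 128), min(i, 10), dtype=float) for i in range(35)]
print(find_freezes(clip))   # [(10, 25)]
```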

The guidelines also include a list of ‘Eyeball’ tests that a producer needs to undertake before delivering the programme. Included on the checklist are such things as Audio Sync, Buzzing, Unclear Sound, Clock, and Visual Focus.

Commenting on the new work, Kevin Burrows, DPP Technical Standards Lead and CTO Broadcast & Distribution, Channel 4, said, “The DPP’s QC guidelines offer a standardised set of checks expected prior to the digital delivery of broadcast programmes to the UK Broadcasters. They are designed to streamline the QC process and help minimise the issues that can arise in programme delivery.”

Andy Quested, Principal Technologist BBC, who has been instrumental in devising the guidelines as well as leading the EBU’s work, added, “Post houses and broadcasters are seeing a significant increase in the volume of file-based programmes they need to handle. It is really important that the QC tests give accurate and consistent results. The new guidelines don’t just explain the process and the test to be carried out, they also make it clear who is responsible for signing off the QC process.”

The guidelines will be implemented by broadcasters as they move towards digital file delivery. Production companies will be required to deliver their compliant files along with a valid QC report, as has previously been the case with the PSE report.

The QC report can be delivered using a separate PDF or XML file output from the QC tool. DPP broadcasters will accept PDF QC reports initially, but will encourage XML reports over time once they are standardised by the EBU group and the DPP.

Source: Digital Production Partnership