The Current State of Android and Video

In the OS landscape, Android is by far the most widely used platform for tablets and mobile devices. With over 900 million activations to date and approximately 70% market share in the space, the platform is not only the most popular, it is also the most fragmented in terms of OEMs and OS versions.

Android History and Origins
Android is a Java-based operating system used for mobile phones and tablet computers. It was initially developed by Android, Inc., with financial backing from Google, which acquired the company in 2005. Google announced its mobile plans for Android in 2007, and the first Android handset hit the shelves in 2008. Android is the world’s most widely distributed and installed mobile OS. However, in terms of application usage and video consumption, iOS devices lead the way; a consistent user experience and standardized video playback are two of the reasons for this.

Performance of Video on Android
When running video on Android devices, the experience varies by OEM, OS version, and media source. Because of this lack of standardization, we wanted to take an overall look at the top mobile devices running Android to determine how video is delivered and how it performs across a few key sites and platforms.

We tested the following top devices running the most used versions of Android (2.3, 4.0, 4.1 or 4.2):

  • Google Nexus 7
  • Google Nexus 4
  • Samsung Galaxy S4
  • HTC One
  • Samsung Galaxy S II
  • HTC EVO 4G

In summary, the newer Android versions offered better overall video capabilities and quality. On devices running 4.1 or higher, the video players were generally built in; most videos displayed in roughly one-third of the screen and played with little interruption.

On devices running older versions of Android, the experience was inconsistent across sites and video performance was weaker. Some of the top video sites offered the choice of playing video either in a dedicated player or in the web browser, and some had very poor video viewing capabilities.

A sample look at the different variations of video transfer and display on the Android devices is below:





Open Source on Android
Android is open source: Google releases the code under the Apache License. Because of this, every OEM is free to modify the source code for its own devices.

OEMs create their own builds and specifications of Android for each device, which makes any standardization very difficult. Testing different versions of Android on different target devices reveals a lot of inconsistencies.

Google regularly releases updates for Android, which further complicates matters. End users often do not upgrade, either because they don’t know how or because their device does not support the new release. This inconsistent uptake of updates further undermines any effort at standardization.

Two of the largest and most widely used Android OEMs, Samsung (for the Galaxy line) and HTC (for the HTC One), both released their latest open source code earlier this year.


Versions of Android
In 2008, Android v1.0 was released to consumers. Starting in 2009, Android versions were given dessert and confection code names, released in alphabetical order: Cupcake, Donut, Eclair, Froyo, Gingerbread, Honeycomb, Ice Cream Sandwich, and the latest, Jelly Bean.

A historical look at the Android devices is below:




In terms of market share, Gingerbread remains the most popular version.





Top Android Devices per OS version
The top devices running Android in terms of both sales and popularity come from various OEMs, with the majority from Samsung, HTC, LG, and Asus. A few of the top devices from the most widely used Android OS versions are as follows:





DRM Content on Android
Android offers a DRM framework on all devices running Android 3.0 and higher. On top of that framework, Google provides consistent DRM across devices through its Widevine DRM, which is free on all compatible Android devices. On devices running 3.0 and higher, the Widevine plugin is integrated with the Android DRM framework to protect content and credentials, although the level of content protection depends on the capabilities of the OEM device. The plugin provides licensing, secure distribution, and protected playback of media content.
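
As a rough illustration of how an app can see what the device-level framework exposes, the sketch below (a hypothetical helper, not from the original text) asks the Android DRM framework which DRM engines are registered, which is one way to confirm that the Widevine plugin is present before attempting protected playback.

```java
import android.content.Context;
import android.drm.DrmManagerClient;
import android.util.Log;

public class DrmCapabilityCheck {
    private static final String TAG = "DrmCapabilityCheck";

    // Returns true if any DRM engine registered with the framework identifies
    // itself as Widevine (present on Android 3.0+ devices whose OEM ships the plugin).
    public static boolean isWidevineAvailable(Context context) {
        DrmManagerClient drmClient = new DrmManagerClient(context);
        boolean found = false;
        for (String engine : drmClient.getAvailableDrmEngines()) {
            Log.i(TAG, "DRM engine available: " + engine);
            if (engine.toLowerCase().contains("widevine")) {
                found = true;
            }
        }
        return found;
    }
}
```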

The image below shows how the framework and Widevine work together.





Closed Captions on Android
As developers know, closed captioning is not a simple video “feature” that can just be switched on. There are a number of formats, standards, and approaches, and it is especially challenging for multiscreen publishers. On Android devices, closed captioning varies from app to app. However, any device running Jelly Bean (4.1) or higher can use the built-in media player, which supports both internal and external subtitles.
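
As a minimal sketch of that built-in support (the class, method names, and srtPath parameter are illustrative, not from the original text), the snippet below attaches an external SubRip file to an already-prepared MediaPlayer on API 16 (Jelly Bean 4.1) or higher and routes the cues to a TextView.

```java
import java.io.IOException;

import android.media.MediaPlayer;
import android.media.MediaPlayer.TrackInfo;
import android.media.TimedText;
import android.widget.TextView;

public class SubtitleSetup {

    // Attaches an external .srt file to a prepared MediaPlayer and displays
    // each cue in a TextView. Requires API 16 (Jelly Bean 4.1) or higher.
    public static void attachSrt(MediaPlayer player, String srtPath,
                                 final TextView subtitleView) throws IOException {
        player.addTimedTextSource(srtPath, MediaPlayer.MEDIA_MIMETYPE_TEXT_SUBRIP);

        // Select the timed-text track that was just added.
        TrackInfo[] tracks = player.getTrackInfo();
        for (int i = 0; i < tracks.length; i++) {
            if (tracks[i].getTrackType() == TrackInfo.MEDIA_TRACK_TYPE_TIMEDTEXT) {
                player.selectTrack(i);
                break;
            }
        }

        // Called as playback reaches each cue; in production code, post to the
        // UI thread before updating the view.
        player.setOnTimedTextListener(new MediaPlayer.OnTimedTextListener() {
            @Override
            public void onTimedText(MediaPlayer mp, TimedText text) {
                subtitleView.setText(text != null ? text.getText() : "");
            }
        });
    }
}
```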

Devices running Gingerbread or lower have no built-in support for rendering subtitles, so you either have to add subtitle support yourself or integrate a third-party solution.

Most larger broadcasters pushing content to OTT devices now serve closed captioning on Android (Hulu Plus, HBO GO, and Max Go to name a few).


Does Android support HLS?
Android has limited support for HLS (Apple’s HTTP Live Streaming protocol), and support is not the same from one version or one device to the next. Android devices before 4.x (Gingerbread and Honeycomb) do not reliably support HLS; Android 3.0 attempted HLS support, but excessive buffering often caused streams to fail. Devices running Android 4.x and above support HLS, but there are still inconsistencies and problems.
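
A minimal sketch of HLS playback on a 4.x device is shown below; the activity and the stream URL are placeholders, and older Gingerbread/Honeycomb devices would typically need a third-party player instead.

```java
import android.app.Activity;
import android.net.Uri;
import android.os.Bundle;
import android.widget.MediaController;
import android.widget.VideoView;

public class HlsPlaybackActivity extends Activity {

    // Placeholder playlist URL; substitute a real HLS master playlist.
    private static final String HLS_URL = "https://example.com/stream/master.m3u8";

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        VideoView videoView = new VideoView(this);
        setContentView(videoView);

        // On Android 4.x and above the platform media stack can play the
        // .m3u8 playlist directly; behavior still varies by device.
        videoView.setMediaController(new MediaController(this));
        videoView.setVideoURI(Uri.parse(HLS_URL));
        videoView.start();
    }
}
```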





Best Practices for Video on Android
For deploying video on Android, there are several suggested specifications to follow. Below is a list of file formats supported by Android devices. Developers can also use the media codecs provided by any Android-powered device, or additional media codecs developed by third-party companies. If you want to play videos on Android, find a multi-format video player or convert videos to Android-compatible formats using an encoding service.
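
To see which of those codecs a particular device actually provides, a short helper like the one below (hypothetical, API 16+) can enumerate the device’s video decoders via MediaCodecList.

```java
import android.media.MediaCodecInfo;
import android.media.MediaCodecList;
import android.util.Log;

public class CodecInventory {
    private static final String TAG = "CodecInventory";

    // Logs every video decoder available on this device (API 16+).
    public static void logVideoDecoders() {
        int count = MediaCodecList.getCodecCount();
        for (int i = 0; i < count; i++) {
            MediaCodecInfo info = MediaCodecList.getCodecInfoAt(i);
            if (info.isEncoder()) {
                continue; // only playback (decoding) matters here
            }
            for (String type : info.getSupportedTypes()) {
                if (type.startsWith("video/")) {
                    Log.i(TAG, info.getName() + " decodes " + type);
                }
            }
        }
    }
}
```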





Video Specifications for Android
Below are the recommended encoding parameters for Android video from the Android developer homepage. Any video encoded with these parameters should be playable on Android phones.





Video Encoding Recommendations
The table below lists examples of video encoding profiles and parameters that the Android media framework supports for playback. In addition to these encoding parameter recommendations, a device’s available video recording profiles can be used as a proxy for its media playback capabilities. These profiles can be inspected using the CamcorderProfile class, which has been available since API level 8.
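
A minimal sketch of that proxy check is shown below; the class name and log tag are illustrative, and it simply reads the device’s highest camcorder profile as a rough indicator of comfortable playback parameters.

```java
import android.media.CamcorderProfile;
import android.util.Log;

public class PlaybackCapabilityProbe {
    private static final String TAG = "CapabilityProbe";

    // QUALITY_HIGH is the highest-quality camcorder profile the device
    // supports and has been available since API level 8.
    public static void logHighQualityProfile() {
        CamcorderProfile profile = CamcorderProfile.get(CamcorderProfile.QUALITY_HIGH);
        Log.i(TAG, "Video: " + profile.videoFrameWidth + "x" + profile.videoFrameHeight
                + " @ " + profile.videoFrameRate + " fps, "
                + profile.videoBitRate + " bps");
        Log.i(TAG, "Audio: " + profile.audioSampleRate + " Hz, "
                + profile.audioBitRate + " bps, "
                + profile.audioChannels + " channel(s)");
    }
}
```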




For video content that is streamed over HTTP or RTSP, there are additional requirements:
  • For 3GPP and MPEG-4 containers, the moov atom must precede any mdat atoms, but must succeed the ftyp atom.
  • For 3GPP, MPEG-4, and WebM containers, audio and video samples corresponding to the same time offset may be no more than 500 KB apart. To minimize this audio/video drift, consider interleaving audio and video in smaller chunk sizes.
For information about how to target your application to devices based on platform version, read Supporting Different Platform Versions.

Source: Encoding.com

NTU Invention Allows Clear Photos in Dim Light

Cameras will soon be able to take clear and sharp photos in dim conditions, thanks to a revolutionary new image sensor invented at Nanyang Technological University (NTU).

The new sensor, made from graphene, is believed to be the first able to detect broad-spectrum light, from the visible to the mid-infrared, with high photoresponse, or sensitivity. This means it is suitable for use in all types of cameras, including infrared cameras, traffic speed cameras, satellite imaging, and more.

Not only is the graphene sensor 1,000 times more sensitive to light than the low-cost imaging sensors found in today’s compact cameras, it also uses 10 times less energy because it operates at lower voltages. When mass-produced, graphene sensors are expected to cost at least five times less.

Graphene is a million times thinner than the thickest human hair (it is only one atom thick) and is made of pure carbon atoms arranged in a honeycomb structure. It is known for its high electrical conductivity, among other properties such as durability and flexibility.

The inventor of the graphene sensor, Assistant Professor Wang Qijie, from NTU’s School of Electrical & Electronic Engineering, said it is believed to be the first time that a broad-spectrum, high photosensitive sensor has been developed using pure graphene.

His breakthrough, made by fabricating a graphene sheet into novel nano structures, was published in Nature Communications, a highly-rated research journal.

“We have shown that it is now possible to create cheap, sensitive and flexible photo sensors from graphene alone. We expect our innovation will have great impact not only on the consumer imaging industry, but also in satellite imaging and communication industries, as well as the mid-infrared applications,” said Asst Prof Wang, who also holds a joint appointment in NTU’s School of Physical and Mathematical Sciences.

“While designing this sensor, we have kept current manufacturing practices in mind. This means the industry can in principle continue producing camera sensors using the CMOS (complementary metal-oxide-semiconductor) process, which is the prevailing technology used by the majority of factories in the electronics industry. Therefore manufacturers can easily replace the current base material of photo sensors with our new nano-structured graphene material.”

If adopted by industry, Asst Prof Wang expects the cost of manufacturing imaging sensors to fall, eventually leading to cheaper cameras with longer battery life.

How the Graphene Nanostructure Works
Asst Prof Wang came up with an innovative idea to create nanostructures on graphene which will “trap” light-generated electron particles for a much longer time, resulting in a much stronger electric signal. Such electric signals can then be processed into an image, such as a photograph captured by a digital camera.

These “trapped electrons” are the key to achieving high photoresponse in graphene, which makes it far more effective than normal CMOS or CCD (charge-coupled device) image sensors, said Asst Prof Wang. Essentially, the stronger the electric signals generated, the clearer and sharper the photos.

“The performance of our graphene sensor can be further improved, such as the response speed, through nanostructure engineering of graphene, and preliminary results already verified the feasibility of our concept,” Asst Prof Wang added.

This research, costing about $200,000, is funded by the Nanyang Assistant Professorship start-up grant and supported partially by the Ministry of Education Tier 2 and 3 research grants.

Development of this sensor took Asst Prof Wang a total of 2 years to complete. His team consisted of two research fellows, Dr Zhang Yongzhe and Dr Li Xiaohui, and four doctoral students Liu Tao, Meng Bo, Liang Guozhen and Hu Xiaonan, from EEE, NTU. Two undergraduate students were also involved in this ground-breaking work.

Asst Prof Wang has filed a patent through NTU’s Nanyang Innovation and Enterprise Office for his invention. The next step is to work with industry collaborators to develop the graphene sensor into a commercial product.

Source: Nanyang Technological University

Digital Camera Add-On Means the Light's Fantastic

KaleidoCamera is developed by Alkhazur Manakov of Saarland University in Saarbrücken, Germany, and his colleagues. It attaches directly to the front of a normal digital SLR camera, and the camera's detachable lens is then fixed to the front of the KaleidoCamera.

After light passes through the lens, it enters the KaleidoCamera, which splits it into nine image beams according to the angle at which the light arrives. Each beam is filtered before mirrors direct it onto the camera's sensor in a grid of separate images, which can be recombined however the photographer wishes.

This set-up gives users far more control over what type of light reaches the camera's sensor. Each filter could allow a single colour through, for example; colours can then be selected and recombined at will after the shot is taken, using software. Similarly, swapping in filters that mimic different aperture settings allows users to compose other-worldly images with high dynamic range in a single shot.

And because light beams are split up by the angle at which they arrive, each one contains information about how far objects in a scene are from the camera. With a slight tweak to its set-up, the prototype KaleidoCamera can capture this information, allowing photographers to refocus images after the photo has been taken.



Roarke Horstmeyer at the California Institute of Technology in Pasadena says the device could make digital SLR photos useful for a range of visual tasks that are normally difficult for computers, like distinguishing fresh fruit from rotten, or picking out objects from a similarly coloured background. "These sorts of tasks are essentially impossible when applying computer vision to conventional photos," says Horstmeyer.

The ability to focus images after taking them is already commercially available in the Lytro – a camera designed solely for that purpose. But while Lytro is a stand-alone device which costs roughly the same as an entry-level digital SLR, KaleidoCamera's inventors plan to turn their prototype into an add-on for any SLR camera.

Manakov will present the paper at the SIGGRAPH conference in Anaheim, California, this month. He says the team is working on miniaturising it, and that much of the prototype's current bulk simply makes it easier for the researchers to tweak it for new experiments.

"A considerable engineering effort will be required to downsize the add-on and increase image quality and effective resolution," says Yosuke Bando, a visiting scientist at the MIT Media Lab. "But it has potential to lead to exchangeable SLR lenses and cellphone add-ons."

In fact, there are already developments to bring post-snap refocusing to smartphone cameras, with California-based start-up Pelican aiming to release something next year.

"Being able to convert a standard digital SLR into a camera that captures multiple optical modes – and back again – could be a real game-changer," says Andrew Lumsdaine of Indiana University in Bloomington.

By Hal Hodson, New Scientist

MPEG-DASH: Making Tracks Toward Widespread Adoption

The need to reach multiple platforms and consumer electronics devices has long presented a technical and business headache, not to mention a cost, for service providers looking to deliver online video. The holy grail of a common file format to rule them all has always seemed a quest too far.

Enter MPEG-DASH, a technology with the scope to significantly improve the way content is delivered to any device by cutting complexity and providing a common ecosystem of content and services.

The MPEG-DASH standard was ratified in December 2011 and tested in 2012, with deployments across the world now underway. Yet just as MPEG-DASH is poised to become a universal point for interoperable OTT delivery comes concern that slower-than-expected initial uptake will dampen wider adoption.

A Brief History of DASH
The early days of video streaming, reaching back to the mid-1990s, were characterized by battles between the different technologies of RealNetworks, Microsoft, and then Adobe. By the mid-2000s, the vast majority of internet traffic was HTTP-based, and Content Delivery Networks (CDNs) were increasingly being used to ensure delivery of popular content to large audiences.

“[The] hodgepodge of proprietary protocols -- all mostly based on the far less popular UDP -- suddenly found itself struggling to keep up with demand,” explains Alex Zambelli, formerly of Microsoft and now principal video specialist for iStreamPlanet, in his succinct review of the streaming media timeline for The Guardian.

That changed in 2007 when Move Networks introduced HTTP-based adaptive streaming, adjusting the quality of a video stream according to the user’s bandwidth and CPU capacity.

“Instead of relying on proprietary streaming protocols and leaving users at the mercy of the internet bandwidth gods, Move Networks used the dominant HTTP protocol to deliver media in small file chunks while using the player application to monitor download speeds and request chunks of varying quality (size) in response to changing network conditions,” explains Zambelli in the article. “The technology had a huge impact because it allowed streaming media to be distributed ... using CDNs (over standard HTTP) and cached for efficiency, while at the same time eliminating annoying buffering and connectivity issues for customers.”
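
As a rough illustration of the mechanism Zambelli describes (this is not code from the article; the bitrates, names, and safety margin are hypothetical), the sketch below chooses the quality of the next chunk from the throughput measured while downloading the previous one.

```java
import java.util.Arrays;
import java.util.List;

public class AdaptiveChunkSelector {

    // Available renditions in bits per second, lowest to highest (hypothetical values).
    private static final List<Integer> BITRATES =
            Arrays.asList(400_000, 800_000, 1_500_000, 3_000_000);

    // Only step up to a rendition if measured throughput exceeds it by this margin.
    private static final double SAFETY_FACTOR = 0.8;

    // Picks the bitrate for the next chunk from the last chunk's download statistics.
    public static int nextBitrate(long bytesDownloaded, long downloadMillis) {
        double throughputBps = (bytesDownloaded * 8.0) / (downloadMillis / 1000.0);
        int chosen = BITRATES.get(0); // always fall back to the lowest rendition
        for (int bitrate : BITRATES) {
            if (throughputBps * SAFETY_FACTOR >= bitrate) {
                chosen = bitrate;
            }
        }
        return chosen;
    }
}
```

The safety factor simply keeps the player from oscillating between renditions when throughput hovers near a bitrate boundary.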

Other HTTP-based adaptive streaming solutions followed: Microsoft launched Smooth Streaming in 2008, Apple debuted HTTP Live Streaming (HLS) for delivery to iOS devices a year later, and Adobe joined the party in 2010 with HTTP Dynamic Streaming (HDS).

HTTP-based adaptive streaming quickly became the weapon of choice for high-profile live streaming events from the Vancouver Winter Olympics 2010 to Felix Baumgartner’s record breaking 2012 Red Bull Stratos jump (watched live online by 8 million people).

These and other competing protocols created fresh market fragmentation in tandem with multiple DRM providers and encryption systems, all of which contributed to a barrier to further growth of the online video ecosystem.

In 2009, efforts began among telecommunications group 3rd Generation Partnership Project (3GPP) to establish an industry standard for adaptive streaming. More than 50 companies were involved -- Microsoft, Netflix, and Adobe included -- and the effort was coordinated at ISO level with other industry organizations such as studio-backed digital locker initiator Digital Entertainment Content Ecosystem (DECE), OIPF, and World Wide Web Consortium (W3C).

MPEG Dynamic Adaptive Streaming over HTTP (MPEG-DASH, or DASH for short) was ratified as an international standard late in 2011. It was published as ISO/IEC 23009-1 the following April and was immediately heralded as a breakthrough because of its potential to embrace and replace existing proprietary ABR technologies and its ability to run on any device.

At the time, Thierry Fautier, senior director of convergence solutions at Harmonic, said the agreement on a single protocol would decrease the cost of production, encoding, storage, and transport: “This is why everyone is craving to have DASH. It will enable content providers, operator and vendors to scale their OTT business,” he told CSI magazine in February 2012.

In the same article, Jean-Marc Racine, managing partner at Farncombe, said, “By enabling operators to encode and store content only once, [DASH] will reduce the cost of offering content on multiple devices. Combined with Common Encryption (CENC), DASH opens the door for use with multiple DRMs, further optimising the cost of operating an OTT platform.”

The Benefits of DASH
The technical and commercial benefits outlined for MPEG-DASH on launch included the following:

  • It decouples the technical issues of delivery formats and video compression from the more typically proprietary issues of a protection regime. No longer does the technology of delivery have to develop in lockstep with the release cycle of a presentation engine or security vendor.

  • It is not blue sky technology -- the standard acknowledged adoption of existing commercial offerings in its profiles and was designed to represent a superset of all existing solutions.

  • It represented a drive for a vendor-neutral, single-delivery protocol to reduce balkanization of streaming support in CE devices. This would reduce technical headaches and transcoding costs. It meant content publishers could generate a single set of files for encoding and streaming that should be compatible with as many devices as possible from mobile to OTT, and also to the desktop via plug-ins or HTML5; in addition, it meant consumers would not have to worry about whether their devices would be able to play the content they want to watch.

“DASH offers the potential to open up the universe of multi-network, multi-screen and multi-operator delivery, beyond proprietary content silos,” forecast Steve Christian, VP of marketing at Verimatrix. “In combination with a robust protection mechanism, a whole new generation of premium services are likely to become available in the market.”

Perhaps the biggest plus was that, unlike previous attempts to create a truly interoperable file format, all the major players without exception participated in its development. Microsoft, Adobe, and Apple -- as well as Netflix, Qualcomm, and Cisco -- were integral to the DASH working group.

These companies, minus Apple, formed a DASH Promoters Group (DASH-PG), which eventually boasted nearly 60 members and would be formalized as the DASH Industry Forum (DASH-IF), to develop DASH across mobile, broadcast, and internet and to enable interoperability between DASH profiles and connected devices -- exactly what was missing in the legacy adaptive streaming protocols.

The European Broadcasting Union (EBU) was the first broadcast organization to join DASH-IF, helping to recommend and adopt DASH in version 1.5 of the European hybrid internet-TV platform HbbTV. Other members have since come on board, and broadcasters in France and Spain have already begun deploying DASH for connected TVs, with Germany and Italy expected to follow. In the U.S., DASH is attracting mobile operators, such as Verizon, that want to deploy eMBMS for mobile TV broadcast over LTE.

What about HLS?
However, there remain some flies in the ointment. The format for DASH is similar to Apple’s HLS, using index files and segmented content to stream to a device where the index file indicates the order in which segments are played. But even though representatives from Apple participated in drawing up DASH, Apple is holding fast to HLS and hasn’t yet publicly expressed its support for DASH.

Neither has Google, though it has confirmed that the standard is being tested in Google Chrome. Some believe that until DASH is explicitly backed by these major players, it will struggle to gain traction in the market.

“Right now there are multiple streaming options and until Apple and Google agree on DASH, it will be a while before there is widespread adoption,” says Hiren Hindocha, president and CEO of Digital Nirvana.

Adobe has encouragingly adopted the emerging video standard across its entire range of video streaming, playback, protection, and monetization technologies. Its backing will greatly reduce fragmentation and costs caused by having to support multiple video formats.

“We believe that if we have Microsoft, Adobe, and to some extent Google implementing MPEG-DASH, this will create a critical mass that will open the way to Apple,” says Fautier. “Timing for each of those companies is difficult to predict though.”

While Apple HLS has considerable momentum, other adaptive streaming protocols are being dropped in favor of DASH, which observers such as David Price, head of TV business development for Ericsson, and Brian Kahn, director of product management for Brightcove, reckon will mean that there will only be two mainstream protocols in use for the vast majority of streaming services.

“Since both Adobe and Microsoft have been pushing DASH as a standard, we can assume that HDS and Smooth Streaming will be replaced by DASH helping to reduce the number of formats,” wrote Kahn in a Brightcove blog post. In an email to me, Kahn wrote, “Additionally, Google Canary has a plugin for MPEG-DASH and it is rumoured that Google created the plug-in internally. In the end, we will probably end up with two main streaming formats: HLS and DASH.”

So why doesn’t the industry just adopt HLS instead of adding another streaming protocol? Kahn’s email points to two reasons. “First, it’s not officially a standard -- it’s a specification dictated by Apple, subject to change every six months. It also doesn’t have support for multi-DRM solutions -- DASH does, which is why most major studio houses have given it their endorsement.”

Other Roadblocks to Adoption
But the road to DASH adoption won’t be a straight one. Kahn highlights in particular the challenge of intellectual property and royalties. “This is undoubtedly an issue which will need to be addressed before DASH can achieve widespread adoption,” he told Streaming Media. “DASH participants such as Microsoft and Qualcomm have consented to collate patents for a royalty free solution, but the likes of Adobe have not agreed.

“Mozilla does not include royalty standards in its products, but without the inclusion of its browser, the likelihood of DASH reaching its goal of universal adoption for OTT streaming looks difficult,” Kahn adds. “Another potential obstacle to standardisation is video codecs -- namely, the need for a standard codec for HTML5 video. Even with universal adoption of DASH by HTML5 browsers, content would still need to be encoded in multiple codecs.”

Ericsson’s Price also notes some concern about the way in which DASH is being implemented: “In regards to the elements that are discretionary, particularly in the area of time synchronization, it is hoped that as adoption becomes wider, there will be industry consensus on the implementation details; the best practice guidelines being created by DASH-IF will further accelerate adoption.”

There are further warnings that delays in implementing DASH could harm its success as a unifying format. A standards effort necessarily involves compromises, and probably the biggest compromises get hidden in the profile support. MPEG-DASH in its original specification arguably tried to be everything to everyone and perhaps suffered from excessive ambiguity (a story familiar to anyone acquainted with HTML5 video, notes Zambelli wryly).

“There are several trials and lots of noise about MPEG-DASH, but we’ve yet to see concrete demand that points to DASH being the great unifier,” warns AmberFin CTO Bruce Devlin. “In fact, unless there is some operational agreement on how to use the standard between different platform operators, then it might become yet another format to support.”

“DASH has taken quite a while to gather a following among consumer electronics and software technology vendors, delaying its adoption,” reports RGB Networks’ senior director of product marketing Nabil Kanaan. “The various profiles defined by DASH have added too much flexibility in the ecosystem, at the cost of quick standardisation. We still believe it’s a viable industry initiative and are supporting it from a network standpoint and working with ecosystem partners to make it a deployable technology.”

Elemental Technologies’ VP of marketing, Keith Wymbs, adds, “To date the impact of MPEG-DASH has been to spur the discussion about the proliferation of streaming technologies.”

“MPEG-DASH isn’t in a position where people are thinking that it will be the only spec they’ll need to support in the near to mid-term,” says Digital Rapids marketing director Mike Nann, “but most believe that it will reduce the number of adaptive streaming specifications that they’ll need to prepare their content for.”

Jamie Sherry, senior product manager at Wowza Media Systems, also thinks DASH has had very little impact to date other than to re-emphasise that for high-quality online video to really become profitable and widespread: “Issues like streaming media format fragmentation must be addressed.

“If the ideals of MPEG-DASH become a reality and traction occurs in terms of deployments, the impact to the market will be positive as operators and content publishers in general will have a real opportunity to grow their audiences while keeping costs in line,” he says.

DASH-AVC/264
To address this, the DASH-IF has been hard at work defining a subset of the standard to serve as a base profile that all implementations have to include. This is driven by the decision to focus on H.264/MPEG-4 encoding rather than MPEG-2 (initially both were supported). The result, DASH-AVC/264, was announced in May and is widely tipped to achieve broad adoption by speeding up the development of common profiles that can be used as the basis for interoperability testing.

“As an analogy, look back at the evolution of MPEG-2 and Transport Streams,” says Nann. “If every cable operator, encoder, middleware vendor, and set-top box vendor supported a different subset of parameters, profiles, levels, and features, they might all be within the MPEG-2 and TS specs, but we probably wouldn’t have the widespread interoperability (and thus adoption) we have today. DASH-AVC/264 is a means of doing the same for MPEG-DASH, providing a constrained set of requirements for supporting DASH across the devices that support it, and giving vendors interoperability targets.”

Aside from requiring support for H.264, the DASH-AVC/264 guidelines define other essential interoperability requirements such as support for the HE-AAC v2 audio codec, ISO base media file format, SMPTE-TT subtitle format, and MPEG Common Encryption for content protection (DRM).

“The Common Encryption element is particularly interesting because it enables competing DRM technologies such as Microsoft PlayReady, Adobe Access, and Widevine to be used inclusively without locking customers into a particular digital store,” writes Zambelli. “DASH-AVC/264 provides the details desperately needed by the industry to adopt MPEG-DASH and is expected to gain significant traction over the next one to two years.”

Digital Rapids’ Nann says he expects to see increased adoption in 2013 with “considerably more pilot projects as well as commercial deployments,” with growing device support (particularly for consumer viewing devices). “The client device support is going to be one of the biggest factors in how quickly MPEG-DASH rolls out,” says Nann.

Telestream product marketing director John Pallett concurs: “The primary driver for adoption will be the player technology to support it. The companies that develop players are generally working to support MPEG-DASH alongside their legacy formats. Most of the major player companies want to migrate to DASH, but real adoption will come when a major consumer product supports DASH natively. This has not yet happened, but we anticipate that it will change over the next year.”

For Peter Maag, CMO of Haivision Network Video, the value proposition is simple: “MPEG-DASH will simplify the delivery challenge if it is ubiquitously embraced. Realistically, there will always be a number of encapsulations and compression technologies required to address every device.”

The number of trials is growing and already includes the world’s first large-scale test of MPEG-DASH OTT multiscreen at the 2012 London Olympics with Belgian broadcaster VRT, and the first commercial MPEG-DASH OTT multiscreen service with NAGRA and Abertis Telecom in 2012 -- both powered by Harmonic.

“Over the next years, we believe a significant amount of operators will deploy OTT and multiscreen services based on DASH,” suggests Fautier.

In an interview with Streaming Media, Kevin Towes, senior product manager at Adobe, declared 2012 as the year of DASH awareness and 2013 as the year of discovery.

“How can you attach some of these encoders and CDNs and players and devices together to really demonstrate the resiliency and the vision of what DASH is trying to present?” he said. “And then as we go through that it’s about then operationalizing it, getting DASH into the hands of the consumers from a more viewable point of view.”

Elemental Technologies’ Wymbs believes the discussion will evolve in the next 12 months “to one centering on the use of MPEG-DASH as a primary distribution format from centralized locations to the edge of the network where it will then be repackaged to the destination format as required.”

Given the number of elements of the value chain that need to line up for full commercialization -- encoders, servers, CDNs, security systems, and clients as a minimum -- significant commercial rollouts were always likely to take time.

In conclusion, while there are still hurdles to clear, DASH is clearly on the path toward widespread adoption, especially now that DASH-AVC/264 has been approved. According to contributing editor Tim Siglin: “If there is some form of rationalization between HLS and DASH, including the ability to include Apple’s DRM scheme in the Common Encryption Scheme, we might just note 2013 not only as the beginning of true online video delivery growth but also as the point at which cable and satellite providers begin to pay attention to delivery to all devices -- including set-top boxes -- for a true TV Everywhere experience.”

By Adrian Pennington, StreamingMedia

Introduction to Video Coding




By Iain Richardson, Vcodex

Google Adds its Free and Open-Source VP9 Video Codec to Latest Chrome Build

Google announced it has enabled its VP9 video codec by default on the Chrome dev channel. The addition means users of the company’s browser can expect to see the next-generation compression technology available out-of-the-box before the end of the year.

In May, Google revealed it was planning to finish defining VP9 on June 17, after which it would start using the technology in Chrome and on YouTube. On that day, the company enabled the free video compression standard by default in the latest Chromium build, and now it has arrived in the latest Chrome build.

VP9 is the successor to VP8, both of which fall under Google’s WebM project of freeing Web codecs from royalty constraints. Despite the fact that Google unveiled WebM three years ago at its I/O conference, VP8 is still rarely used when compared to H.264, today’s most popular video codec.

“A key goal of the WebM Project is to speed up the pace of video-compression innovation (i.e., to get better, faster), and the WebM team continues to work hard to achieve that goal,” Google says. “As always, WebM technology is 100% free, and open-sourced under a BSD-style license.”

For users, the main advantage of VP9 is that it’s 50 percent more efficient than H.264, meaning that you’ll use half the bandwidth on average when watching a video on the Internet. Yet that doesn’t take H.265 into account, the successor to H.264 that offers comparable video quality at half the number of bits per second and also requires its implementers to pay patent royalties.

Google today claimed VP9 “shows video quality that is slightly better than HEVC (H.265).” The company is of course biased, but we’re sure that comparisons by third parties will start to surface soon.

In the meantime, Google says it is working on refining the VP9 toolset for developers and content creators as well as integrating it with the major encoding tools and consumer platforms. VP9 is already available in the open-source libvpx reference encoder and decoder, but Google still plans to optimize it for speed and performance, as well as roll out improved tools and documentation “over the coming months.”

VP9 is also meant to become part of WebRTC, an open project that lets users communicate in real-time via voice and video sans plugins, later this year. Google has previously said it wants to build VP9 into Chrome, and YouTube has also declared it would add support once the video codec lands in the browser.

The dev channel for Chrome is updated once or twice weekly. Since the feature has made it in there, it won’t be long before it shows up in the beta channel, and then eventually the stable channel.

By Emil Protalinski, The Next Web