Microsoft Releases Kinect SDK 1.7 Enabling 3D Scanning Capability

On March 18, 2013, Microsoft launched the Kinect for Windows SDK v1.7. The new SDK includes the Kinect Fusion tool, which enables the Kinect for Windows sensor to scan objects and create accurate 3D models. A 3D model produced by a Kinect sensor and the new software is illustrated below.


  Kinect Fusion enables developers to create accurate 3-D renderings in real time.

In announcing the new Kinect SDK, Bob Heddle, Microsoft's Director of Kinect for Windows, described Kinect Fusion as one of the most affordable 3D scanning tools available today for creating 3D renderings of people and objects. Heddle goes on to say, “Kinect Fusion fuses together multiple snapshots from the Kinect for Windows sensor to create accurate, full, 3D models.

Developers can move a Kinect for Windows sensor around a person, object, or environment and “paint” a 3D image of the person or thing in real time. These 3D images can then be used to enhance countless real-world scenarios, including augmented reality, 3D printing, interior and industrial design, and body scanning for things such as improved clothes shopping experiences and better-fitting orthotics.”

Reporting from Microsoft Research’s annual TechFest event (Mar. 5-7, 2013, Microsoft Conference Center, Redmond, Washington, USA), IEEE Spectrum has posted a video in which Microsoft researchers describe and demonstrate 3D scanning using Kinect Fusion.





In the video, researcher Toby Sharp of Microsoft’s Cambridge, UK, group describes how the commercial Kinect sensor, running the new Kinect Fusion software, scans and is able to “reconstruct the world in 3D.” The video also describes the challenge of processing the large amount of data required for this task at 30 frames per second, in near real time, while the Kinect camera moves around the object being scanned.

The video illustrates that relatively detailed 3D scans and solid models of objects and people can be captured with the inexpensive Kinect sensor when it is combined with a good deal of processing power, presumably provided by a high-performance graphics processing unit (GPU).

In a prior entry on Microsoft’s Kinect for Windows blog, the operation of Kinect Fusion is further described as “…taking the incoming depth data from the Kinect for Windows sensor and using the sequence of frames to build a highly detailed 3D map of objects or environments. The tool then averages the readings over hundreds or thousands of frames to achieve more detail than would be possible from just one reading. This allows Kinect Fusion to gather and incorporate data not viewable from any single view point. Among other things, it enables 3D object model reconstruction, 3D augmented reality, and 3D measurements. You can imagine the multitude of business scenarios where these would be useful, including 3D printing, industrial design, body scanning, augmented reality, and gaming.”
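To make the averaging idea concrete, here is a minimal sketch of per-cell depth averaging in TypeScript. It is an illustration only, with hypothetical names; the real Kinect Fusion pipeline integrates a truncated signed distance field on the GPU to keep up with the 30 frames per second mentioned above.

```typescript
// Minimal sketch of depth-frame fusion by running averages, in the spirit of
// Kinect Fusion. Real implementations integrate a truncated signed distance
// field (TSDF) on the GPU; this CPU toy only illustrates the averaging step.
// All names here are hypothetical, not the SDK's API.

type Voxel = { mean: number; weight: number };

class FusionVolume {
  private voxels = new Map<string, Voxel>();

  // Fold one depth observation (in meters) for a world-space cell into the
  // volume. Averaging hundreds of frames suppresses per-frame sensor noise.
  integrate(x: number, y: number, z: number, depth: number): void {
    const key = `${x},${y},${z}`;
    const v = this.voxels.get(key) ?? { mean: 0, weight: 0 };
    v.mean = (v.mean * v.weight + depth) / (v.weight + 1);
    v.weight += 1;
    this.voxels.set(key, v);
  }

  valueAt(x: number, y: number, z: number): number | undefined {
    return this.voxels.get(`${x},${y},${z}`)?.mean;
  }
}

// Usage: feed each incoming depth frame into the volume once the camera
// pose for that frame has been estimated.
const volume = new FusionVolume();
volume.integrate(10, 4, 7, 1.52); // frame 1 sees this cell at 1.52 m
volume.integrate(10, 4, 7, 1.48); // frame 2, slightly noisy reading
console.log(volume.valueAt(10, 4, 7)); // 1.50 - noise averaged out
```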

The business scenarios envisioned by Microsoft fit in well with the current interest of businesses and consumers in 3D printing. Moreover, the ability to quickly create accurate 3D solid models of objects, people and environments using relatively inexpensive equipment should open up a wide range of market applications. In releasing the new SDK, Microsoft has taken a large step toward enabling new opportunities for developers and end users.

By Phil Wright, Display Central

BPM: Building on Broadcast Workflows

Over the last year or so, more broadcasters have seen how properly managed, IT-based workflow systems can radically improve both performance and accuracy in their operations. Although many have yet to take advantage, it is time for all broadcasters to focus on the next objective: the implementation of Business Process Management (BPM).

BPM is a technology that has been used in the world of business, and in particular manufacturing, for many years. It can be described as an over-arching layer that systematizes and automates processes and optimizes the use of available resources. A good example is the automobile industry, where cars are assembled in a production line based on orders from the sales department for variations of model, color and accessories.

BPM can be defined as a high-performance system for the control of operational processes, using well-managed operational workflows. Moreover, BPM implies that tasks performed in the broadcast environment need to be associated with a business deliverable. In practice, an efficient BPM implementation adapts to the workflows of each workgroup, allowing broadcasters to optimize daily tasks and identify opportunities to improve them.

In the context of media processing, you could define BPM as the layer above workflow that triggers actions and receives information about workflows and their component tasks. This is a valuable function, not least because it creates a link between the operational requirements of a facility (acquisition, archive and versioning, etc.) and the tasks that will deliver them according to business objectives. It should also provide metrics about human and technical resources, enabling systems to be fine-tuned in order to deliver better performance.


How Does This Work Today?
Arguably, all businesses, including media enterprises, need and have BPM, or they would never deliver anything. Historically, craft industries’ workflows were based on operator-performed tasks following requests from managers — in other words, workflows based on human interaction. This type of working is still used in the majority of broadcast facilities, albeit supported by spreadsheets and other digital aids. However, these practices, which may have worked in the past, are proving increasingly impractical as facilities are required to deliver more complex programming — not only within the facility, but often over communication links to other facilities within the enterprise or beyond to third-party service providers and clients.

In high-level media management organizations, BPM oversees the production workflows of the company wherever it is practicable, enabling the traceability of media and documentation and organizing tasks into work orders so that each process complies with the organization's quality standards. Crucially, BPM implies effective reporting, and it is this that allows managers to monitor production as a whole, as well as fine-tune workflows for the future.

BPM may also be seen as bridging between the commercial requirements and expectations of, in this case, a media enterprise and the detailed operations and deliverables required on a day-to-day basis. It should also be capable of receiving tasks from higher levels in the organization. Typically, this means systems such as planning, marketing or enterprise resource management generally, none of which is able to process or deliver complex media. In this way, BPM can assist in unifying process management between the enterprise as a whole and the broadcast facility.


BPM vs. Workflow
In order to clarify what we mean, we need some definitions. When we talk about business processes, these include: program planning, program production, program acquisition, versioning, content research, content sales and distribution. On the other hand, the following are workflows: ingest, import, quality control, post production, subtitling/closed captions, archiving and export.

Figure 1 shows how a BPM/workflow implementation could look. In this example, the “Business Process” of content acquisition is interpreted by the BPM into a series of workflow tasks that are then executed under the control of a workflow manager, which reports to the BPM as each stage is completed. This is the first major benefit of this kind of system: at all times, the precise status of media items is available in the production facility or, if required, at the higher level to the business as a whole. Of course, this could be achieved with manual workflows, with operators gathering data and sending it to the BPM, but such systems are prone to delivering variable results.


Figure 1 - A typical workflow for a BPM system.
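A compact sketch of the pattern Figure 1 describes, with hypothetical names rather than any vendor's API: the BPM expands a business process into workflow tasks, and the workflow manager reports back as each one completes.

```typescript
// Sketch of the BPM/workflow split in Figure 1 (hypothetical names).
// The BPM expands a business process into ordered workflow tasks; the
// workflow manager executes them and reports each completion upward,
// so media status is visible to the wider enterprise at all times.

type TaskStatus = "pending" | "done";
interface WorkflowTask { name: string; status: TaskStatus }

class BPM {
  // "Content acquisition" becomes a concrete task list.
  expand(process: string): WorkflowTask[] {
    const plans: Record<string, string[]> = {
      "content acquisition": ["ingest", "quality control", "archiving"],
    };
    return (plans[process] ?? []).map(name => ({ name, status: "pending" }));
  }
  report(task: WorkflowTask): void {
    console.log(`BPM status update: ${task.name} is ${task.status}`);
  }
}

class WorkflowManager {
  constructor(private bpm: BPM) {}
  run(tasks: WorkflowTask[]): void {
    for (const task of tasks) {
      task.status = "done";   // stand-in for the real work
      this.bpm.report(task);  // the automated reporting loop
    }
  }
}

const bpm = new BPM();
new WorkflowManager(bpm).run(bpm.expand("content acquisition"));
```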


The element missing from the previous diagram is the comprehensive reporting that is enabled, both back to the workflow manager and to the BPM system. This feature is critical, as it allows concerned parties, both inside the media production area and in the wider enterprise, to follow the real-time status of content. Not only that, but this automated gathering of valuable information is virtually free. We will see later that it can be put to good use.


BPM Implementation
So, how should broadcasters go about implementing new technologies such as BPM to ensure delivery of the claimed benefits? Initially, this will be an exercise in analyzing existing systems to establish how much the technology platforms need to be changed or evolved. Moving to file-based working is essential, and most broadcasters are already modifying their systems. Broadcast facilities not building from scratch will inevitably need to make decisions about staging the BPM implementation. Some departments, having already embraced file-based working, may be better placed, and these can be used as pilots to verify the benefits for other workgroups.

Having established the scope of the proposed new system, broadcasters can use the requirements captured in the first phase to select the most suitable partners to supply a system.


Reaping the Rewards
While managing media operations, the BPM gathers a lot of data about current and previous workflows. This data is invaluable in measuring the performance of the media facility, identifying, for instance, where there are bottlenecks and where more human or IT resources need to be deployed. It can also be used to deploy staff in the most effective ways, taking into account where more experienced operators are beneficially deployed and where less experienced ones can safely be used.

System managers who are planning to extend or enhance their capabilities can reliably predict the results of investing in new systems or hiring personnel — not based on opinion or hearsay but on solid data. Such requests are, of course, more likely to persuade the company executives that further investments will deliver the claimed performance.

In summary, there are many potential benefits in using a BPM strategy. These include: repeatable results, higher throughput, better use of human resources and better decision making on all levels.


BPM and Multi-Site Working
So far, we have looked at BPM in the context of a system within a single location. However, increasingly, media is moving from place to place. What happens if ingest is managed in one place but, due to local expertise or other factors, operations like subtitling or versioning must be handled from a second or even a third location? In this case, how is the BPM affected, and how can workflows extend over multiple sites? Take the example shown in Figure 2.



Figure 2 - Where multiple sites are used, content is ingested or imported at the central site,
with proxy versions and metadata made available via a private cloud.


In this system, content is ingested or imported at the central (HQ) site. Proxy versions and metadata are made available to remote sites via a private cloud. In order to take part in workflows that are instigated at HQ, each remote site, although it has its own MAM system, is granted a floating license to access the central MAM in order to access the workflow schedule. Following this, operators at the remote sites can process tasks in the master BPM/workflow, perhaps, in this case, creating Spanish language tracks and subtitles. Other similar tasks can also be performed without having a local copy of the production resolution content. In other cases, such as post production, the master workflow should automatically initiate a file transfer to the appropriate site and then, following editing, recover the new version to the HQ system.
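The routing logic this implies can be sketched as follows (hypothetical names and task types; actual MAM/BPM products expose this very differently):

```typescript
// Sketch of the multi-site routing in Figure 2 (hypothetical names).
// Proxy-capable tasks (subtitling, language tracks) run against the
// cloud-hosted proxy; full-resolution tasks (post production) trigger
// a file transfer out and a recovery of the new version back to HQ.

interface SiteTask {
  kind: "subtitling" | "language-track" | "post-production";
  site: string;
}

function dispatch(task: SiteTask): string[] {
  const proxyOnly = task.kind !== "post-production";
  if (proxyOnly) {
    return [
      `grant ${task.site} a floating license to the central MAM`,
      `run ${task.kind} at ${task.site} against the proxy`,
    ];
  }
  return [
    `transfer full-resolution media to ${task.site}`,
    `run ${task.kind} at ${task.site}`,
    `recover the new version to HQ`,
  ];
}

console.log(dispatch({ kind: "subtitling", site: "Madrid" }));
console.log(dispatch({ kind: "post-production", site: "Berlin" }));
```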

Using this methodology enables broadcasters to reallocate tasks based on local technical or human resources or to take advantage of available expertise across the enterprise as a whole.


Why Not Before?
If this is so easy and straightforward, why aren’t most broadcasters doing it already? Well, for a start, it’s only relatively recently that suitable products have been available in the market. In this context, when I use the word “products,” I mean solutions that can be deployed without extensive customization.

In recent years, the emphasis has been on integration, with the goal of file-based operations at the top of most agendas. In truth, the BPM paradigm won’t work well unless the majority of operational “islands” can be connected and can therefore exchange the data that is vital to success.


Conclusion
The video content industry is moving rapidly away from non-standard production and storage technologies, which typically have been difficult to integrate, toward a file-based environment. This opens up the prospect of bringing all document formats — from text to rich media — into a single workflow. Achieving this goal requires new software products that bring all processes together under common control.

Suitable workflow systems have been available on the market for several years. However, for practical reasons, broadcasters have tended to focus on achieving file-based workflows when considering upgrading their facilities. This is completely understandable, as moving program material between systems that cannot communicate with each other causes real problems for operators as they wrestle with manual workflows. In many cases, the increased throughput that file-based systems enable has the effect of highlighting the need for robust workflow engines and BPM.

The good news is that workflow systems available to broadcasters have evolved greatly, with some including BPM capability. Those broadcasters who have taken the plunge invariably report they are increasing their throughput over and above improvements delivered by file-based working.

In summary, broadcast technology is evolving faster than ever. The demands for more channels and innovative services are accelerating. Many production processes are no longer manageable with “human workflows.” Prescriptive and automated workflows together with BPM can make the difference.

Finally, if all of this isn’t enough to validate the BPM/workflow case, such systems gather the results and performance data that will prove the efficiencies gained, not to mention making a strong case for future expansion of such systems to satisfy growing needs.

By Peter Gallen, Broadcast Engineering

Panasonic's New Color Filtering Technology



Source: Diginfo TV

Professional Transcoding with Consistent Color

5DtoRGB is an awesome tool that extracts every last drop of video quality from cameras that record to the AVC/H.264 video format. Cameras like the Canon EOS series of HDSLRs record video in this format with subsampled color.

Because of this compression, the picture is at risk of massive quality loss during the post production pipeline. By using a very high quality conversion process, 5DtoRGB gets you as close as possible to the original data off the camera's sensor while putting the brakes on any additional quality loss.

5DtoRGB is designed to transcode your footage to a format suitable for editing or visual effects purposes. Transcoding to formats like Apple ProRes or Avid DNxHD offers performance improvements during editing and keeps compatibility with other editing systems in a collaborative environment.

Uncompressed formats like DPX are useful for visual effects creation (like pulling mattes from green screen footage), as uncompressed files retain the most image quality. Furthermore, visual effects compositing programs like After Effects or Nuke work with RGB color (not YCbCr, which is common in HDSLRs), and so a YCbCr to RGB conversion must be performed by either QuickTime or your compositing program before anything useful can be done.

The big problem is that you have to trust your NLE or compositing app to do a good job of performing this YCbCr to RGB conversion. Many programs use QuickTime internally to decode H.264 and perform the necessary YCbCr to RGB conversion, but its decoder is intended for general purpose use and not critical post-production use.

While this may be just fine for general activities like watching videos, it is unsuitable for professional post-production tasks. To add insult to injury, QuickTime adds noise to its H.264 output (as does any program that uses QuickTime to decompress H.264) in what appears to be an attempt to cover up H.264 compression artifacts. And guess what? There's no way to disable this. You're stuck with it if you've converted your footage with Final Cut Pro, Compressor, MPEG Streamclip or Canon's E1 "Log and Transfer" plugin for Final Cut Pro. Each one of them uses QuickTime to decompress H.264.

5DtoRGB takes a no-compromise approach to quality. It bypasses QuickTime decoding altogether, works internally at 10 bits and uses your video card's GPU for its YCbCr to RGB conversion. It also recognizes Canon's full-range 8-bit YCbCr values (0-255), avoiding clipping and the resulting loss of picture information. The resulting files are the absolute highest quality you'll ever get out of the camera. In fact, you could argue that they're even better than the camera originals, since they've undergone high-quality chroma smoothing.
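For readers curious what such a conversion involves, here is a minimal full-range YCbCr-to-RGB sketch using BT.601 coefficients. The coefficients are an assumption for illustration; 5DtoRGB's exact matrices, 10-bit pipeline and GPU implementation are not reproduced here.

```typescript
// Sketch of a full-range (0-255) YCbCr -> RGB conversion using BT.601
// coefficients (an assumption; BT.709 coefficients would differ).
// Full-range handling matters: clamping Y to 16-235 first would clip
// picture information, which is exactly what 5DtoRGB avoids.

function ycbcrToRgb(y: number, cb: number, cr: number): [number, number, number] {
  const clamp = (v: number) => Math.min(255, Math.max(0, Math.round(v)));
  const r = y + 1.402 * (cr - 128);
  const g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128);
  const b = y + 1.772 * (cb - 128);
  return [clamp(r), clamp(g), clamp(b)];
}

console.log(ycbcrToRgb(255, 128, 128)); // [255, 255, 255] - full-range white survives
console.log(ycbcrToRgb(81, 90, 240));   // roughly pure red
```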

5DtoRGB supports both embedded timecode (used by the Canon 60D) and timecode stored in THM files. Start timecode values are derived from these sources, just as with Canon's official E1 plugin for Final Cut Pro, and inserted into the DPX files or ProRes QuickTime files. You can also specify your own timecode value if you want.

5DtoRGB runs right now on Mac OS X and Windows. Linux users can run the Windows version using Wine.

Bridgetech Introduces Digital Media Monitoring on the iPhone

Bridge Technologies has launched PocketProbe, an iPhone app that enables objective analysis of real network performance of streaming media in a simple-to-use, easy-to-understand tool that technical staff can carry anywhere.

Available now from Apple’s App Store, PocketProbe extends both the existing capabilities of digital media monitoring systems built from Bridge Technologies hardware probes and the monitoring software environment.

Already providing the most comprehensive end-to-end monitoring and analysis capability, with a range of fixed and portable probes, the system now extends right into the engineer’s pocket.

PocketProbe contains the same OTT Engine found in the company’s VB1, VB2 and 10G VB3 series digital media monitoring probes, enabling confidence validation and analysis of HTTP variable bit-rate streams from any location.



PocketProbe is available in two versions: the free application can validate five HLS streams in round-robin mode, provide analysis and manifest consistency alarms, play back media in the various profile bit-rates, and graphically display the actual chunk download patterns and bit-rates.
The full version also offers the ability to validate HDS and SmoothStream manifest files and store twenty-five streams with all profiles.

PocketProbe is easy to use, with fully automatic setup: once the stream URL is input, the app finds all related profiles and validates their consistency. Since PocketProbe uses exactly the same metrics as the hardware probes, it can be used by service engineers and operational staff to test real-world behaviors post-cloud with various operators.
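The "finds all related profiles" step corresponds to reading the variant entries out of an HLS master playlist. A minimal sketch of that discovery step (illustrative parsing only, not Bridge Technologies' implementation):

```typescript
// Sketch of HLS master-playlist profile discovery (not Bridge Technologies'
// code). Given a master playlist, collect each variant's bandwidth and URI;
// a validator would then fetch every variant and check manifest consistency.

interface HlsVariant { bandwidth: number; uri: string }

function parseMasterPlaylist(m3u8: string): HlsVariant[] {
  const lines = m3u8.split("\n").map(l => l.trim());
  const variants: HlsVariant[] = [];
  for (let i = 0; i < lines.length; i++) {
    const match = lines[i].match(/^#EXT-X-STREAM-INF:.*BANDWIDTH=(\d+)/);
    if (match && lines[i + 1] && !lines[i + 1].startsWith("#")) {
      variants.push({ bandwidth: Number(match[1]), uri: lines[i + 1] });
    }
  }
  return variants;
}

const master = `#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=1280000,RESOLUTION=640x360
low/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2560000,RESOLUTION=1280x720
hi/index.m3u8`;

console.log(parseMasterPlaylist(master));
// [{ bandwidth: 1280000, uri: "low/index.m3u8" },
//  { bandwidth: 2560000, uri: "hi/index.m3u8" }]
```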

Accurate status of bit-rates used and profile changes is displayed in real time, giving instant understanding of provider delivery capability. Together with hardware probes used pre-cloud, the post-cloud location of the PocketProbe enables excellent correlative understanding of CDN and provider abilities.

Source: TVB Europe

Hamburg Pro Media - MXF4mac Player

Open, or just drag, nearly any flavour of MXF file into the free MXF4mac Player to watch it, even in full screen. Control 8 tracks of audio, the frame rate, Movie Time, Source Package Timecode, Frame Number and Data Rate.

MXF4mac Player supports XDCAM HD, AVC-Intra, JPEG2000 (optional with J2K Codec), DNxHD, HDV, Uncompressed SD/HD, Uncompressed Avid 10-bit, DVCPRO HD, IMX-D10, DV, Meridien, Sony Proxy and more.

The MXF4mac Player is also able to play Panasonic P2 XML documents like movie files, with video and audio linked together. Of course, it is compatible with QuickTime movies and more (.mp4, .avi, .m4v, .wav, .aif, .aiff, etc.).

BitTorrent Live Hopes to Capture Digital Broadcast Market

BitTorrent has taken the wraps off of BitTorrent Live, a live streaming service in beta designed to establish a direct connection between amateur broadcasters and their viewers. It said that the technology transforms each person tuning in into "a miniature broadcaster."

The platform relies on the same peer-to-peer technology BitTorrent is known for in file sharing. For the past three years, however, company founder Bram Cohen has been working on a way to serve video in an IP broadcast environment more efficiently, according to TechCrunch.

Of course, Ustream, YouTube and others are already working on cracking the digital online broadcast nut at scale, but Cohen is banking on the peer-to-peer architecture to provide a differentiator. Once viewers install a browser plug-in, video delay averages less than five seconds, the company said.

BitTorrent is also planning to use the technology to swim upstream, targeting TV stations and other large corporations looking for more cost-efficient ways to deliver high-scale digital video.

Cohen demonstrated the technology at the SF MusicTech Summit. "My goal is to kill off television," Cohen said, telling TechCrunch that "Television's physical infrastructure is inevitably going to go away, but TV as a mode of content consumption is here to stay."

Source: RapidTV News

Steadicam Progress – The Career of Paul Thomas Anderson in Five Shots




By Kevin B. Lee, Sight&Sound

How YouTube is Bringing Adaptive Streaming to Mobile, TVs

Have you ever played with the settings of a YouTube video to make it look better? YouTube Mobile and TV engineering head Andy Berkheimer would like you to stop doing that.

Berkheimer headed a project last year that brought adaptive bitrate streaming to the YouTube desktop player, enabling the player to automatically switch between different video quality settings based on your internet connection speed, among other factors.

Now he is bringing the same technology to mobile devices and TVs. “We are making it work just as it should,” Berkheimer told me during an interview this week.

From 240p to 4K
That may sound simple, but optimizing video playback has been a long journey for the Google-owned video site. Berkheimer joined YouTube six years ago, when there was just one default video quality — 320×240, also known as 240p. “That was really, really grainy video,” recalled Berkheimer.

His team used Google’s cloud infrastructure to allow for additional codecs, bringing HD and eventually even 4K to the site. But with higher bitrates, buffering also became more of a problem.

The solution? Adaptive bitrate streaming, which is industry-speak for switching the quality of a video in midstream, without the need to re-buffer and start over. YouTube started switching from progressive downloads to adaptive bitrate streaming in its desktop player a year ago, and completed the process late last year.

The new player keeps a close eye on the speed and health of your internet connection, explained Berkheimer: “It’s continuously monitoring the bandwidth and the throughput it is seeing,” he said, adding that it also keeps tabs on the size of your player.

Are you watching a video in full screen? Then you can expect YouTube to send you more bits, as long as your connection is fast enough.
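A toy version of that selection logic might look like the following. The renditions, thresholds and headroom factor are invented for illustration; YouTube's actual player logic is not public.

```typescript
// Toy adaptive-bitrate selection in the spirit described above (hypothetical
// ladder and thresholds). Pick the highest rendition whose bitrate fits the
// measured throughput, capped by the size of the player.

interface Rendition { label: string; bitrateKbps: number; height: number }

const LADDER: Rendition[] = [
  { label: "240p", bitrateKbps: 300, height: 240 },
  { label: "480p", bitrateKbps: 1000, height: 480 },
  { label: "720p", bitrateKbps: 2500, height: 720 },
  { label: "1080p", bitrateKbps: 4500, height: 1080 },
];

function pickRendition(throughputKbps: number, playerHeight: number): Rendition {
  // Leave headroom so throughput dips don't immediately cause a rebuffer.
  const budget = throughputKbps * 0.8;
  const candidates = LADDER.filter(
    r => r.bitrateKbps <= budget && r.height <= playerHeight
  );
  return candidates.pop() ?? LADDER[0];
}

console.log(pickRendition(6000, 1080)); // full screen on a fast link -> 1080p
console.log(pickRendition(6000, 480));  // small player -> no point sending 1080p
console.log(pickRendition(800, 1080));  // slow link -> 240p despite full screen
```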

YouTube’s Take on Adaptive Streaming
Adaptive streaming isn’t new: Companies like Netflix and Hulu have used the technology for some time to optimize their streaming experience. But YouTube had some unique challenges to solve when it rolled out its own implementation.

For example, Netflix often starts with a lower-bitrate stream and then slowly scales up, which is why it can take a minute or so before full HD quality sets in.

That approach doesn’t really work for YouTube videos that only last a minute or two. YouTube tends to be more aggressive in sending out higher-quality video, and then scales down the video if necessary, Berkheimer explained. The site also makes use of the fact that you often watch more than one YouTube video in a row, and optimizes your bit rate across an entire session.

The results of these efforts have been encouraging. YouTube has seen buffering reduced by 20 percent since it launched adaptive streaming for its desktop player. That’s why the company is now taking the technology to TVs and mobile devices.

Next Up: Mobile and TVs
Of course, TVs require a lot more HD video, and buffering becomes even more obvious when you compare it to the nonstop experience of a traditional broadcast. Berkheimer told me that YouTube is working with the majority of the TV industry to bring adaptive streaming to TV sets, and that virtually all models introduced at CES this year already support the technology. The company is also working to bring adaptive streaming of YouTube videos to game consoles.

Mobile, on the other hand, comes with different challenges, as people move in and out of the reach of cell towers while they get their video fix on public transport.

And then there is this: “One of the biggest challenges we have is the global nature of YouTube,” said Berkheimer. Average mobile internet speeds are much slower in India and Brazil than in the U.S. and Europe, but videos still have to play without long and tiresome buffering. Broadband in Canada on the other hand is fast, but tightly rationed, with major ISPs charging their customers extra if they go over their caps.

That’s also one reason that those settings that allow you to manually change the bitrate of a YouTube video haven’t disappeared from the player yet — even though Berkheimer would very much like them gone. He told me that there have been some passionate discussions within the company about these manual settings. The result? For now, they’re staying.

But Berkheimer and his team are still working hard so that you can completely ignore them. “The most rewarding thing is that users don’t have to think about it,” he said.

By Janko Roettgers, GigaOM

Google, Microsoft, Netflix Want Encrypted HTML5 Media Spec

Google, Microsoft, and Netflix have put forward a proposal that could add a level of copy protection to HTML5 audio and video. Encrypted Media Extensions would let apps on the web and elsewhere use keys to control who has access to a given media stream. It would accommodate any format that works in HTML5, as long as the format itself can carry some kind of key or protection bit.
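As a sketch of how such extensions look from a web app's point of view, here is the flow using the API shape the W3C eventually standardized; the draft discussed in this article differed in details, and "com.example.drm" is a placeholder key system, not a real one.

```typescript
// Sketch of Encrypted Media Extensions usage as later standardized by the
// W3C. The page negotiates a key system, then answers the browser's
// "encrypted" event by opening a session and relaying license messages.

async function setupEncryptedPlayback(video: HTMLVideoElement): Promise<void> {
  const access = await navigator.requestMediaKeySystemAccess("com.example.drm", [{
    initDataTypes: ["cenc"],
    videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.42E01E"' }],
  }]);
  const mediaKeys = await access.createMediaKeys();
  await video.setMediaKeys(mediaKeys);

  // The browser fires "encrypted" when it meets protected media; the app
  // then relays a license request to its own license server.
  video.addEventListener("encrypted", async (event) => {
    const session = mediaKeys.createSession();
    session.addEventListener("message", (msg) => {
      // POST msg.message to the license server, then pass the response
      // to session.update(...) to unlock playback.
    });
    await session.generateRequest(event.initDataType, event.initData!);
  });
}
```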

If implemented, it could eliminate objections raised by Adobe as well as content providers regarding HTML5's limits. Paid movie services and others that want a secure stream have either limited HTML5 to native mobile apps, where users can't easily rip the feed, or avoided it altogether.

Netflix would have the option of dropping Microsoft's Silverlight plugin on the web, while Google could skip Flash for its YouTube-based movie service in Android Market. Microsoft has been deprecating Silverlight and could use HTML5 even on the Xbox 360.

Concerns exist on both technical and philosophical levels. The initial spec may not deliver the promised experience and could require revision. And despite promising to keep digital rights management itself out of HTML5, the trio has raised alarms from Mozilla and others concerned the proposal would break standards and force a less-than-ideal compromise that might hurt some HTML5 supporters.

The World Wide Web Consortium, which has to consider the proposal, hasn't said when it might decide. As an unofficial draft, it could spend some time being completed before the question of approval and possible implementation comes up.

Source: Electronista

MPEG LA and Google Settle: What Does it Mean?

Last week, MPEG LA and Google announced that they entered into agreements granting Google a license to technologies that “may” be essential to VP8, the video codec fueling WebM.

While this agreement may have little impact in the traditional streaming market, it could be very significant in other markets, particularly WebRTC. No financial terms of the agreement were announced, and MPEG LA declined to answer our questions.

History
You know the players and the history, but let’s quickly recount. Google bought On2 in 2010 for $124 million, acquiring the VP8 codec that was the successor to On2’s wildly popular but never financially successful VP6 codec.

Google later released VP8 as the video component of the open source format, WebM. WebM was quickly implemented by Firefox, Opera and within Google’s own Chrome browser, but Apple and Microsoft declined to add the technology to their HTML5-compatible browsers, citing potential patent-infringement concerns.

Google later announced that it would remove H.264 playback from Chrome, a promise it has neither kept nor retracted. In the meantime, Mozilla, which never licensed H.264, saw its market share drop while it waited for Google to follow through.

Recently, Mozilla changed its long-standing policy of not supporting patent-encumbered technologies and decided to support H.264 playback via the HTML5 video tag whenever an H.264 decoder is present in the system, which includes most mobile devices and Windows versions starting with Windows 7 (but not XP).

WebM never gained significant traction in the marketplace, with the most recent MeFeedia blog post, HTML5 Based Video Availability, from December 2011, reporting less than 2% share. An April 2012 report from Sorenson Media showed WebM’s market adoption at 5%. Neither source identified how many of these streams originated from Google subsidiary YouTube.

According to tests published in Streaming Media, WebM’s adoption wasn’t slowed by quality concerns, as the differences between WebM and H.264 were very minor. Rather, WebM seemed a solution without a problem: H.264 usage, and web video in general, certainly hasn’t been hindered by the fact that H.264 is a patent-encumbered technology.

In February 2011, MPEG LA announced a call for patents essential to the VP8 video codec, and twelve parties stepped forward, but no patent pool was ever formed and MPEG LA never identified the patent holders.

Although WebM was largely seen as dead in the water, as interest in the H.265/HEVC codec grew, the same patent-related concerns started to plague VP9, the next-generation codec designed to compete against H.265/HEVC.

The Fine Print
Which brings us to the agreement. Let’s study the multiple relevant bits in the press release in turn.

“Granting Google a license to techniques that may be essential to VP8 and earlier-generation VPx video compression technologies under patents owned by 11 patent holders.”

There certainly is no public admission that VP8 infringed on any patents. For an amusing counter view, see Apple Insider’s “Google admits its VP8/WebM codec infringes MPEG H.264 patents.” While it is probably meaningless, it’s interesting that though 12 parties stepped forward in MPEG LA’s call for patents, only 11 patent holders were party to the agreement.

“The agreements also grant Google the right to sublicense those techniques to any user of VP8, whether the VP8 implementation is by Google or another entity.”

This makes it clear that all licensees of VP8/WebM are also clear of any claim of patent infringement from the MPEG LA H.264 patent group. Since this was a major reason why Microsoft and Apple refused to license WebM, they’ll now either license the technology or find another major reason not to.

“It further provides for sublicensing those VP8 techniques in one next-generation VPx video codec.”

This makes it clear that VP9 is also covered in the agreement, but not VP10 or any successor.

“As a result of the agreements, MPEG LA will discontinue its effort to form a VP8 patent pool.”

MPEG LA’s quid pro quo: abandoning its effort to form a VP8 patent pool.

Again, MPEG LA refused to answer any questions, so this is all that we, or presumably any other journalists, actually know. The rest we can only surmise.

Impact on Existing Streaming Markets
At this point, it’s hard to see the agreement having any short-term impact on the streaming media marketplace. Whether Apple or Microsoft will integrate WebM into their browsers is anyone’s guess, but even if they do, there’s little incentive for most producers of free internet video to adopt WebM, since there is no H.264 royalty on free internet video, and H.264 plays everywhere, from the lowest-power mobile device to all OTT platforms.

If WebM browser support becomes ubiquitous, or if Adobe adds WebM “in an upcoming release” of the Flash Player, those selling subscription-based or pay-per-view content might want to encode desktop-bound streams in WebM, though mobile support, particularly iOS support, could take longer and conceivably may never happen.

Looking forward, this announcement does clear the air for VP9’s charge against H.265/HEVC, but Google faces an uphill battle against a standards-based codec whose ultimate success will depend largely upon hardware support for encode and decode.

According to a Google presentation made at the Internet Engineering Task Force meeting in Atlanta in November 2012, VP9 is still about 7% behind H.265 in quality, so there are other hurdles to VP9’s success.

It’s WebRTC, Stupid
Perhaps the real focus of Google’s VP8-related efforts, going back to the original On2 acquisition, relates to WebRTC, an HTML5-compatible, open framework enabling real-time communications in the browser.

Essentially, WebRTC is Skype without the app, enabling web-based communications between all supporting browsers, and was a component of the technologies discussed by Google’s Matt Frost in his keynote speech at Streaming Media East in 2012.

In February 2013, Google and Mozilla demonstrated WebRTC interoperability between Firefox and Chrome, a significant milestone for WebRTC.

Interestingly, Google acquired the WebRTC infrastructure when it bought Global IP Solutions Holding for $68.2 million, a deal announced in May 2010, around the same time that the On2 acquisition closed.

Several codecs, including H.264, have been proposed as the Mandatory to Implement (MTI) codec for WebRTC, but the potential for patent infringement claims have hindered VP8’s adoption.

Beyond the MPEG LA agreement, Google “submitted VP8 as a candidate for standardization in ISO/IEC JTC1 SC29 WG11 (better known as MPEG)” and is now aggressively pushing VP8 as the “most suitable codec for MTI.”

As always, Google’s profit motive for WebRTC is opaque. The best contender for an actual motive was proposed by Tsahi Levent-Levi on the VisionMobile blog, who opined, “The real value for Google lies in allowing them to serve more ads and mine more insights out of people’s browser behavior – these are things that Google treasures.”

Whatever the motive, WebM certainly has a greater chance of succeeding in a nascent market like WebRTC than in existing markets that have already embraced other technologies.

By Jan Ozer, StreamingMedia

Netflix Launches Global ISP Speed Index Website

Netflix unveiled its global ISP Speed Index website Monday, aggregating performance results from its 33 million worldwide subscribers in one place and allowing users to see which ISP offers the best Netflix performance in their country.

And guess which country is leading the charge, offering its citizens some of the fastest Netflix speeds? That’s right, the United States. However, U.S. broadband only came in first because of Google Fiber, whose very few actual customers saw an average Netflix speed of 3.35 Mbps in February.

 
How U.S. ISPs are performing for Netflix viewing


Second is the U.K., where Virgin customers averaged 2.37 Mbps during the same month. At the bottom of the list is Mexico, where the fastest ISP averaged 2.10 Mbps.

Of course, these speeds are far below what most ISPs advertise for their services, but the averages include lower-bitrate SD fare, network slowdowns due to poor Wi-Fi performance and all kinds of other factors. Or, as Netflix puts it:

“The average is well below the peak performance due to many factors including home Wi-Fi, the variety of devices our members use, and the variety of encodes we use to deliver the TV shows and movies we carry. Those factors cancel out when comparing across ISPs, so these relative rankings are a good indicator of the consistent performance typically experienced across all users on an ISP network.”

Still, the site is an interesting tool to compare broadband speeds both within the countries in which Netflix is active as well as between those markets — and for the company, it’s another way to nudge ISPs toward signing up for Netflix’s own CDN.

By Janko Roettgers, GigaOM