The Economic Impact of Lord of the Rings
Source: OnlineMBA
A curation about new media technologies
Source: ImpulseKitty Productions
VP9, the successor to Google's VP8 video compression technology at the center of a techno-political controversy, has made its first appearance outside Google's walls.
Google has built VP9 support into Chrome, though only in an early-stage version of the browser for developers. In another change, it also added support for the new Opus audio compression technology that's got the potential to improve voice communications and music streaming on the Internet.
VP9 and Opus are codecs, technology used to encode streams of data into compressed form then decode them later, enabling efficient use of limited network or storage capacity. Peter Beverloo, a developer on Google's Chrome team, pointed out the new codec support in a blog post earlier this month.
Releasing VP9 gives Google a chance to improve video-streaming performance and other aspects of VP8. That's important in competing with today's prevailing video compression technology, H.264, and with a successor called H.265, or HEVC, that also has the potential to attract broad support across the electronics and computing industry with better compression performance.
Codecs might seem an uninteresting nuts-and-bolts aspect of computing, but they actually ignite fierce debates that pit those who like H.264's convenience and quality against those who like that Google offers VP8 for free use.
H.264 is used in video cameras, Blu-ray discs, YouTube, and more. But most organizations using it must pay patent royalties to a group called MPEG LA, which licenses H.264-related patents on behalf of their many owners.
Google has tried to spur adoption of VP8 instead, which it's released for royalty-free use. One major area: online video built into Web pages through the HTML5 standard.
However, VP8 hasn't dented H.264's dominance, and VP8 allies failed in an attempt to specify VP8 as the way to handle online video. As a result, HTML5 video can be invoked in a standard way, but Web developers can't easily be assured that a browser can properly decode the video in question. Internet Explorer and Safari support H.264 video, Firefox and Opera support VP8 video, and Chrome supports both codecs.
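As a rough illustration of that fragmentation, a page can probe codec support at runtime before choosing a source. The sketch below uses the standard HTML5 canPlayType API; the file names are placeholders.

```typescript
// Minimal sketch: choose a video source based on the codecs this browser reports.
// canPlayType() returns "probably", "maybe" or "" per the HTML5 spec.
function pickSource(video: HTMLVideoElement): string | null {
  const h264 = video.canPlayType('video/mp4; codecs="avc1.42E01E, mp4a.40.2"');
  const webm = video.canPlayType('video/webm; codecs="vp8, vorbis"');

  if (h264 !== "") {
    return "movie.mp4";  // placeholder H.264/AAC file
  }
  if (webm !== "") {
    return "movie.webm"; // placeholder VP8/Vorbis (WebM) file
  }
  return null;           // neither codec supported; fall back to a plugin or a message
}

const video = document.createElement("video");
const src = pickSource(video);
if (src !== null) {
  video.src = src;
  document.body.appendChild(video);
  void video.play();
}
```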
Google had tried to encourage VP8 adoption by pledging in 2011 to remove H.264 support from Chrome, but it reversed course and left the support in. Mozilla, several of whose members were bitter about Google's reversal, has since begun adapting Firefox so it can use H.264 when the operating system supports it. Windows 7 and 8, Apple's OS X and iOS, and Google's Android all have H.264 support built in.
One cloud that's hung over VP8 is the possibility that others besides Google would demand royalty payments for patented technology it uses. Indeed, MPEG LA asked such organizations to come forth as it considered adding a VP8 licensing program, and it said last year that 12 organizations claimed to hold patents essential to VP8 use.
But it's been nearly two years since MPEG LA started seeking VP8-related patents, and the organization still hasn't offered a license.
The VP8 and VP9 codecs have their origins at On2 Technologies, a company Google acquired for $123 million. Google and assorted allies combined VP8 with the freely usable Vorbis audio codec to form a streaming-video technology called WebM.
By Stephen Shankland, CNET
An interesting free guide.
By Sareesh Sudhakaran, Wolfcrow
Microsoft today announced that it is launching a preview version of a Smooth Streaming plugin for the Open Source Media Framework (OSMF) player. Developers can use Smooth Streaming capabilities in any OSMF-compliant player, as well as Adobe's own Strobe player.
"We are pleased to announce that Windows Azure Media Services team released a preview of Microsoft Smooth Streaming plugin for OSMF," wrote Cenk Dingiloglu, a program manager on the Windows Azure Media Services team, in a Microsoft IIS blog posting. He also provided a link, for developers who want to integrate the plugin, to a set of documents and licensing requirements.
In a series of meetings last Thursday on the Microsoft campus in Redmond, Washington, the Windows Azure Media Services team laid out their strategy on a number of fronts, including the extension of Smooth Streaming client software development kits (SDKs) to embedded devices, iOS devices, and player frameworks.
During one of those Microsoft-sponsored meetings, hosted by Microsoft senior technical evangelist Alex Zambelli, Dingiloglu and Mike Downey discussed the most recent addition of OSMF support, noting that Smooth Streaming and OSMF share similarities when it comes to codecs and the use of the fragmented MP4 (fMP4) file format.
"Support for the same audio and video codecs, H.264 and AAC, respectively," said Dingiloglu, "provides the opportunity to use fMP4, leveraging the best of both the OSMF framework and the Smooth Streaming Client SDK."
The Smooth Streaming plugin will provide some key features of Smooth Streaming, such as on-demand functionality (play, pause, seek, stop), but will also use OSMF built-in API hooks to support two key features: multiple audio language switching and maximum playback quality selection.
OSMF supports late binding, based on its use of fMP4, allowing multiple languages to be accessible to the end user without requiring all possible languages' audio tracks to be multiplexed together into a single Transport Stream, the way that iOS devices require.
OSMF and Strobe player support also provides Microsoft a way onto the Android OS platform, making it possible for Smooth Streaming content to reach Android-powered smartphones and tablets.
"You can build rich media experiences for Adobe Flash Player endpoints using the same back-end infrastructure you use today to target Smooth Streaming playback to other devices like Win8 store apps, browser and so on," Dingiloglu wrote in the IIS blog post.
Microsoft isn't claiming the new OSMF plugin is ready for prime time quite yet, but I was able to see a working version of Smooth Streaming within an OSMF player during last week's visit.
In fact, one of the more impressive demonstrations was that of a playlist/manifest file that contained both Adobe .f4v files and Microsoft .ism files. The OSMF player seamlessly switched between the two fMP4 file formats, allowing content owners to intermix content from either format for playback.
"As this is a preview release, you're likely to hit issues, have feature requests, or want to provide general feedback," wrote Dingiloglu. "We want to hear it all! Please use the Smooth Streaming plugin for OSMF forum thread to let us know what's working, what isn't, and how we can improve your Smooth Streaming development experience for OSMF applications."
All of this raises the question of how Smooth Streaming relates to MPEG DASH, the ratified dynamic adaptive streaming standard. Like Adobe, which noted it will continue to develop its own HTTP Dynamic Streaming (HDS) flavor of HTTP-delivered adaptive bitrate streaming, Microsoft sees a benefit in continuing to push the envelope with Smooth Streaming.
The company made it clear that it fully supports DASH, and yet it sees Smooth Streaming as a test bed in which it can continue to innovate for major events like the Olympic Games, which served as a catalyst - over the past three Games - for a number of innovations that now find their way into both Windows Azure Media Services and DASH.
The Smooth Streaming plugin requires browsers supporting Flash Player 10.2 or higher and also requires OSMF 2.0. Microsoft provides licensing details for the Smooth Streaming plugin for interested developers.
By Tim Siglin, StreamingMedia
As professionals in the video industry know, building the best video processing systems takes top-notch engineering and countless hours of testing a wide range of content. Ultra-high resolution 4K video, generally 3840 x 2160, is on the immediate horizon and poised to enter the mainstream. However, bringing 4K to the masses faces an obstacle: a dearth of quality test content.
Elemental decided to remedy this problem, and just in time for the holidays. Remember those classic test sequences from a couple decades ago? We picked the best of the best clips and recreated them using a RED Epic 4K camera. These clips are now available for download, in compressed and uncompressed formats.
Users of the Firefox web browser on Windows can now dump Adobe Flash and still watch H.264-encoded videos online. Fresh overnight builds of Firefox 20 will now play footage found on HTML5 websites, such as YouTube and Vimeo, that use the patent-encumbered video codec - without the need for Adobe's oft-criticised plugin, which also handles H.264.
The Mozilla Foundation, which makes Firefox, slipped support for the popular video compression standard into beta-test versions of its browser by drilling into Microsoft's Media Foundation, which does the actual H.264 video decoding.
Mozilla is averse to proprietary codecs because they're typically buried under patents and require a licensing fee. By using the video support built into the operating system, the open-source browser maker can sidestep these constraints.
The codec support is not enabled by default and requires at least Windows Vista, although support isn't there for Windows 8 yet. Official Firefox 20 builds are due to be released in April 2013.
Firefox for Android 4.x already supports H.264, again using the operating system and underlying hardware to decode the video for playback. Mozilla reluctantly added the ability to play the high-definition format on Google's platform in March to compete in the mobile arena.
The organisation had hoped patent-free codecs, such as Google’s VP8, would succeed at the expense of H.264 on the web, but that hasn’t happened. Google bought VP8’s developer On2 Technologies in 2009 for $124.6m and released the codec, as part of WebM, under a royalty-free licence in May 2010.
However, H.264, which is licensed from the MPEG-LA patent pool, remains the standard for video playback for desktop web browsing and handheld devices.
As Firefox on Android gained support for the codec, Mozilla chief technology officer Brendan Eich wrote at the time: “H.264 is absolutely required right now to compete on mobile. I do not believe that we can reject H.264 content in Firefox on Android or in B2G and survive the shift to mobile.”
By Gavin Clarke, The Register
Verizon has filed a patent application for targeting ads to viewers based on information collected from infrared cameras and microphones that would be able to detect conversations, people, objects and even animals that are near a TV.
If the detection system determines that a couple is arguing, a service provider would be able to send an ad for marriage counseling to a TV or mobile device in the room. If the couple utters words that indicate they are cuddling, they would receive ads for "a romantic getaway vacation, a commercial for a contraceptive, a commercial for flowers," or commercials for romantic movies, Verizon states in the patent application.
For years, technology executives have discussed the possibility of using devices such as Microsoft's Xbox 360 Kinect cameras to target advertising and programming to viewers, taking advantage of the ability to determine whether an adult or child is viewing a program. But Verizon is looking at taking targeted advertising to a new level with its patent application, which is titled "Methods and Systems for Presenting an Advertisement Associated with an Ambient Action of a User."
Similar to the way Google targets ads to Gmail users based on the content of their emails, Verizon proposes scanning conversations of viewers that are within a "detection zone" near their TV, including telephone conversations.
"If detection facility detects one or more words spoken by a user (e.g., while talking to another user within the same room or on the telephone), advertising facility may utilize the one or more words spoken by the user to search for and/or select an advertisement associated with the one or more words," Verizon states in the patent application.
Verizon says the sensors would also be able to determine if a viewer is exercising, eating, laughing, singing, or playing a musical instrument, and target ads to viewers based on their mood. It also could use sensors to determine what type of pets or inanimate objects are in the room.
"If detection facility detects that a user is playing with a dog, advertising facility may select an advertisement associated with dogs (e.g., a dog food commercial, a flea treatment commercial, etc.)," Verizon writes in the patent application.
Several types of sensors could be linked to the targeted advertising system, including 3D imaging devices, thermographic cameras and microphones, according to the patent application.
Verizon also details how it may be able to link smartphones and tablet computers that viewers are using to the detection system.
"If detection facility detects that the user is holding a mobile device, advertising facility may be configured to communicate with the mobile device to direct the mobile device to present the selected advertisement. Accordingly, not only may the selected advertisement be specifically targeted to the user, but it may also be delivered right to the user's hands," Verizon writes in the patent application.
The targeted advertising system is one of the innovations that Verizon could potentially develop through a joint innovation lab it has created with Comcast, Time Warner Cable and Bright House Networks. Earlier this month, Comcast CFO Michael Angelakis said that engineers from Comcast and Verizon have been meeting on the West Coast to work on developing products and services. The innovation lab, which is focused on developing advanced products that take advantage of cable programming and the Verizon Wireless platform and devices, was formed after Comcast and other major MSOs agreed to sell Advanced Wireless Services (AWS) spectrum to Verizon last year.
Officials at Verizon declined to comment about the patent application.
The inventors named on the patent are Verizon Solutions Engineer Brian Roberts, Manager of Convergence Platforms Anthony Lemus, Verizon Wireless Director of Product Design Michael D'Argenio and Verizon Technical Manager Don Relyea. Verizon filed the patent application in May 2011. It was published by the U.S. Patent & Trademark Office on Thursday.
By Steve Donohue, FierceCable
Friday, December 21, 2012
Many content owners who want to get their live event-based streaming content on mobile phones and tablets quickly find out that getting it to Android devices is extremely challenging. Unlike Apple’s iOS platform, Google has yet to provide an easy way to get live video to Android devices, and to date, it hasn’t detailed any strategy for fixing the situation. Many content owners I have spoken with, as well as those who help these content owners encode and distribute their video, are now questioning why they should even continue to go through all the trouble of trying to support Android-based devices at all.
While media companies can always build an app for their event series, most do one-off events and are faced with streaming to the mobile web and reaching their audience using browsers on Android and iOS devices. When Android phones became popular, live video was supported in the mobile browser via Adobe Flash, so digital video professionals with live content to distribute were able to keep doing what they were doing on the desktop. That’s not to say that Flash was perfect; in many cases these desktop players were heavy, containing ad overlays and metadata interaction that had a major impact on the playback quality. To get better quality video playback, some people turned to RTSP delivery. Android touted RTSP as its native live video format until Android 2.3.4 came out, after which that feature no longer worked.
The most effective way to get live video to Android browsers was to make a stripped-down Flash player that didn’t demand much from the phones. Video was decoded in software, which drained batteries quickly. It was imperfect, but it functioned well enough. With the introduction of Android 3.0, it looked like HLS support was going to be built in for all future devices, and that has held true — sort of. HLS support doesn’t match the specification, and buffering is common. Industry-leading HLS implementations such as those from Cisco and Akamai Technologies will not load on Android devices, so for the most part, content owners went back to Flash. But now Flash isn’t available for new Android phones.
Right now, content owners are left in an awkward state if they want to deliver live video to Android browsers. If Flash is present, you can deliver a basic Flash video player. If it is not, you can try to deliver HLS, but the HLS manifests must either be hand-coded or created using Android-specific tools. If the HLS video can play without buffering, you’ll find that there is no way to specify the aspect ratio, so in portrait mode it looks broken. The aspect ratio problem seems to have been fixed in Android 4.1, but it will often crash if you enter video playback in landscape mode and leave in portrait. You can allow the HLS video to open and play in a separate application, but you lose the ability to communicate with the page, and exiting the video dumps users back on their home screens.
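To make that decision tree concrete, here is a hedged sketch of how a page might pick a delivery path in an Android browser; the return values and the Flash-plugin check are illustrative assumptions rather than any vendor's recommended approach.

```typescript
// Illustrative sketch only: decide how to deliver a live stream in a mobile browser.
function chooseLivePlayback(): "hls" | "flash" | "none" {
  const video = document.createElement("video");

  // Some Android builds advertise HLS support via this MIME type.
  if (video.canPlayType("application/vnd.apple.mpegurl") !== "") {
    return "hls"; // point the <video> element at a hand-built .m3u8 manifest
  }

  // Fall back to a stripped-down Flash player if the plugin is present.
  const hasFlash = Array.from(navigator.plugins).some(
    (p) => p.name.indexOf("Shockwave Flash") !== -1
  );
  if (hasFlash) {
    return "flash";
  }

  return "none"; // no workable in-browser option; hand off to a native app instead
}
```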
Content owners can still send the same live video to iOS devices that they could in 2009, and it will play smoothly with little buffering. Live video support for browser-based streaming within Android tablets and phones is a significant challenge with little help available from Google. And with Google still talking about removing H.264 video support in Android, many content owners are wondering why they should even try to support Android any longer.
What’s clear is that Google doesn’t have a strategy to fix the problem, and many content owners and video ecosystem vendors are frustrated. Content owners want to get their live video on as many devices and platforms as possible, and right now, getting it to Android devices is very difficult and costly. Unless Google steps in to solve the problem, don’t expect content owners to continue to try to support Android devices for live video streaming.
By Dan Rayburn, Streaming Media
Netflix launched in Denmark, Norway, Sweden, and Finland on Oct. 15th. I just returned from a trip to Europe to review the content deliveries with European studios that prepared content for this launch.
This trip reinforced for me that today’s Digital Supply Chain for the streaming video industry is awash in accidental complexity. Fortunately the incentives to fix the supply chain are beginning to emerge. Netflix needs to innovate on the supply chain so that we can effectively increase licensing spending to create an outstanding member experience. The content owning studios need to innovate on the supply chain so that they can develop an effective, permanent, and growing sales channel for digital distribution customers like Netflix. Finally, post production houses have a fantastic opportunity to pivot their businesses to eliminate this complexity for their content owning customers.
Everyone loves Star Trek because it paints a picture of a future that many of us see as fantastic and hopefully inevitable. Warp factor 5 space travel, beamed transport over global distances, and automated food replicators all bring simplicity to the mundane aspects of living and free up the characters to pursue existence on a higher plane of intellectual pursuits and exploration.
The equivalent of Star Trek for the Digital Supply Chain is an online experience for content buyers where they browse available studio content catalogs and make selections for content to license on behalf of their consumers. Once an ‘order’ is completed on this system, the materials (video, audio, timed text, artwork, meta-data) flow into retailers’ systems automatically and out to customers in a short and predictable amount of time, 99% of the time. Eliminating today’s supply chain complexity will allow all of us to focus on continuing to innovate with production teams to bring amazing new experiences like 3D, 4K video, and many innovations not yet invented to our customers’ homes.
We are nowhere close to this supply chain today but there are no fundamental technology barriers to building it. What I am describing is largely what www.netflix.com has been for consumers since 2007, when Netflix began streaming. If Netflix can build this experience for our customers, then conceivably the industry can collaborate to build the same thing for the supply chain. Given the level of cooperation needed, I predict it will take five to ten years to gain a shared set of motivations, standards, and engineering work to make this happen. Netflix, especially our Digital Supply Chain team, will be heavily involved due to our early scale in digital distribution.
To realize the construction of the Starship Enterprise, we need to innovate on two distinct but complementary tracks. They are:
Wednesday, December 19, 2012
This presentation was made at the MPEG meeting in Shanghai, China, in October 2012, related to the input contribution M26906. It gives the details about the demonstration made during the meeting.
This demonstration showed the use of the Google Chrome browser to display synchronized video and subtitles, using the Media Source Extension draft specification and the WebVTT subtitle format. The video and DASH content were prepared using the GPAC MP4Box tool.
At the request of the Federal Agencies/Library of Congress, the AMWA is launching a new application specification, AS-07: MXF Archiving & Preservation. This new AS is a vendor neutral sub-set of MXF for long-term archiving and preservation of moving image essence and associated materials including audio, still images, captions and metadata.
For Netflix vice-president of digital supply chain Kevin McEntee, the US-based video streaming company's shift to using the cloud for transcoding its massive content library comes down to a modern take on the fable of the tortoise and the hare. Or, as he told the audience at the AWS re:Invent conference in Las Vegas last week, it's like a choice between moving a room full of people to another city by using expensive high-performance Ferraris or a fleet of somewhat more humble Toyota Priuses.
When it began its shift away from DVD rentals to Internet-based video streaming, Netflix initially employed a 'Ferrari' approach to dealing with the computationally intensive task of encoding movies and TV shows in a format that could be streamed to client devices.
In 2006/2007 when Netflix began the move to streaming, it found that the video processing technology typically employed in Hollywood centred on ensuring minimal latency: It was optimised for scenarios such as a single video editor mastering a Blu-ray image of a movie: "[It was] optimised for the expensive time of that one operator; essentially the artist sitting there doing that mastering."
"Back in 2006/2007 we hired people out of this industry and we ended up building out a data centre that [was] very Ferrari-like," McEntee said, with custom, GPU-based encoding hardware; "boxes that had custom GPUs that were built specifically to dump video very fast." It was expensive and constrained by the fixed footprint of Netflix's data centre.
However the limitations of this approach became evident in late 2008, when Netflix set out to launch new video players for PCs and Macs, and jumped onto TVs by launching a player on the Xbox in November of that year.
"There was such an amazing lack of standardisation around video streaming those days, and there still is today, that we had to create new formats for those players," McEntee said.
"And at the same time [as] we were innovating in the player space and therefore causing the need for new formats, our content team in LA was licensing more and more content, so our content library during the course of that project also doubled in size. And so we set out re-encode using the hardware farm that we had built."
Unfortunately for Netflix, the hardware didn't deal well with the load, and the company encountered frequent hardware failures. Fans on the custom GPUs being used were too small and "boxes were melting", McEntee said.
"It was really a very frustrating experience and in fact that catalogue re-encode was late and we failed. Basically we launched these players and the catalogue was not complete."
It was reflecting on this experience that caused Netflix to make the move to the Amazon Web Services cloud for transcoding. "If you jump forward a year, we had made the jump to move our transcoding farm into AWS. And we had seen the opportunity in fall 2009 for launching a video player on the [Sony PlayStation 3] so this was our first 100 per cent AWS transcode. "
"The player developers again realised they had to rely on a new format; they had to transcode the entire library," McEntee said. The new format was not finalised until three or four weeks before the launch of the new player, but Netflix was "able to spin up enough instances in EC2 to transcode the entire library in about three weeks" and managed to meet the deadline.
This is where the Ferrari versus Prius metaphor comes in. In McEntee's (somewhat elaborate) analogy, individual Ferraris offer great performance, but are expensive to buy, expensive to repair and available in limited numbers; whereas the hypothetical Prius fleet won't be quite so swift, but can be rented for a lot less than it would cost to buy Ferraris, repairs are someone else's problem and they're available in large numbers.
As McEntee explained, "By moving to the cloud while ... one encode was slower the overall throughput of the whole system was much, much faster." It's a question of "thinking horizontal, not vertical," he said, with an architecture that isn't optimised for latency but for overall throughput.
Netflix "haven't really missed deadlines" since the shift away from relying on in-house hardware for transcoding, he said. But, "even more than not missing deadlines, this change has actually created opportunities for the business."
His favourite example is from February 2010, when Apple approached Netflix about the impending iPad launch. Cupertino told Netflix it wanted the company to be part of the launch - which meant yet another video format had to be supported. Using its cloud-based approach to transcoding meant that Netflix was able to have its entire content library available for the April iPad launch.
"This is an opportunity we didn't anticipate when we set out to do the AWS project, but what we found is that having this ability to scale the whole system quickly without doing any purchasing or building out a data centre ourselves really just made the business very nimble and you really can't put a price on nimble, especially in a business that's moving as fast as Netflix," McEntee said.
Netflix's expansion into non-US territories - Canada in 2010, Latin American countries in 2011, and a number of European countries earlier this year - involved building up new content catalogues specific to each licensing territory, meaning a lot more transcoding using the cloud in order to meet fixed launch deadlines.
Netflix currently uses a media processing pipeline dubbed Matrix. Content partners such as movie studios deliver content to Netflix, with the video streaming company employing Aspera's "Direct-to-S3" service to house it in Amazon's Simple Storage Service (S3).
Netflix then uses technology from start-up eyeIO and Amazon's EC2 service to transcode the source material received from the studios into multiple formats that can be streamed to the range of devices supported by the company. The results are stored in S3, before being sent to Netflix's CDN for streaming; Netflix creates multiple versions of each movie or TV show episode to stream to devices ranging from TVs to tablets to gaming consoles. The transcoding farm uses 6000-6500 EC2 instances.
The company is currently working on a successor for Matrix, dubbed Maple. Instead of using Matrix's approach of processing an entire piece of content at once, Maple will break videos up into five-minute chunks, each of which will be processed by a separate EC2 instance. McEntee said that the advantages include being more fault-tolerant - currently a job may fail mid-way through transcoding and have to be restarted from scratch - and the ability to deliver content faster in those cases where Netflix has an agreement to begin screening content the day after it first aired.
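A rough sketch of the chunking idea is shown below; the type names and the job-dispatch callback are hypothetical, since the internals of Maple and its AWS integration are not public.

```typescript
// Hypothetical illustration of chunked transcoding: split a title into
// five-minute segments so each can be processed, and retried, independently.
interface Chunk {
  titleId: string;
  startSeconds: number;
  durationSeconds: number;
}

const CHUNK_SECONDS = 5 * 60;

function splitIntoChunks(titleId: string, totalSeconds: number): Chunk[] {
  const chunks: Chunk[] = [];
  for (let start = 0; start < totalSeconds; start += CHUNK_SECONDS) {
    chunks.push({
      titleId,
      startSeconds: start,
      durationSeconds: Math.min(CHUNK_SECONDS, totalSeconds - start),
    });
  }
  return chunks;
}

// transcodeChunk stands in for dispatching one chunk to one worker instance.
async function transcodeTitle(
  titleId: string,
  totalSeconds: number,
  transcodeChunk: (c: Chunk) => Promise<void>
): Promise<void> {
  const chunks = splitIntoChunks(titleId, totalSeconds);
  // Chunks are independent, so a failure only requires retrying that one chunk.
  await Promise.all(chunks.map((c) => transcodeChunk(c)));
}
```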
Netflix is also working on a 'digital vault' that can house video masters and secondary assets, such as audio in different languages, that could be delivered to both its systems and those of its competitors in the video streaming space.
The Visual Effects Society Technology Committee announced the release of its white paper, “Cinematic Color: From Your Monitor to the Big Screen.” The white paper, intended for computer graphic artists and software developers interested in color management, introduces techniques currently in use at major production facilities.
The document draws attention to challenges that are not covered in traditional color-management textbooks or online resources, and are often passed along only by word of mouth, user forums or scripts copied between facilities.
The 54-page white paper contains text, diagrams, tables and images that address:
Canon launched its Cinema EOS product line last year with the C300; since then, it has expanded to include the EOS-1DC, C100, and C500. Each camera fills a different need and production environment, from B-cameras to documentaries to feature films.
To help you get a better idea of each camera’s features and see how they compare to each other, we’ve put together this Cinema EOS Camera Lineup chart. We’ve included information on sensor size, internal codecs, recording capabilities and more. You can click on the image below to see a larger version, or you can download a pdf version.
As bitrates increase and equipment prices drop, IP-based communication technologies – both fixed and wireless alike – are pushing more and more dedicated communication systems into retirement. The sheer number of connectors currently found on professional video cameras makes obvious some of the advantages that an IP/Ethernet-based solution brings.
Also, the number of HD SDI signals that you can squeeze into a 10GigE or even 100GigE line is a convincing argument for medium-term migration to IP. With SMPTE 2022-6, the wrapping of all SDI formats within IP will be defined, but previous wrappings were never used to actually do anything in the IP layer – they were just used as a transparent channel.
This article outlines the steps and required mechanisms to go “all-IP” and leverage the possibilities that come with it. The first step to take is to achieve seamless switching between signals in the IP layer, which was implemented at the IRT as a software-based Proof of Concept, followed by a novel approach regarding multicast signal distribution within a network. The applications made possible by these features are only limited by your imagination.
YouTube Space LA is a brand new, state of the art production facility in Los Angeles, CA, designed specifically for YouTube creators to produce original digital video content, from an idea through editing and uploading to YouTube.
The YouTube Space LA is a creative production facility for both established and emerging YouTube content creators who are part of the YouTube Partner Program. At the YouTube Space LA, YouTube creators can learn from industry experts, collaborate with other creators and have access to the latest production and post-production digital video equipment.
Monday, December 03, 2012
Firefox for Android has expanded its HTML5 video capabilities to include H.264 video playback. Web developers have been using Adobe Flash to play H.264 video on Firefox for Android, but Adobe no longer supports Flash for Android. Mozilla needed a new solution, so Firefox now uses Android’s “Stagefright” library to access hardware video decoders.
Supported Devices
Firefox currently supports H.264 playback on any device running Android 4.1 (Jelly Bean) and any Samsung device running Android 4.0 (Ice Cream Sandwich). We have temporarily blocked non-Samsung devices running Ice Cream Sandwich until we can fix or work around some bugs. Support for Gingerbread and Honeycomb devices is planned for a later release.
To test whether Firefox supports H.264 on your device, try playing this Big Buck Bunny video.
Testing H.264
If your device is not supported yet, you can manually enable H.264 for testing. Enter about:config in Firefox for Android’s address bar, then search for “stagefright”. Toggle the “stagefright.force-enabled” preference to true. H.264 should work on most Ice Cream Sandwich devices, but Gingerbread and Honeycomb devices will probably crash.
You can also control which decoder is used: enter about:config as described above and search for “stagefright”. To force hardware video decoding, change the “media.stagefright.omxcodec.flags” preference to 16. The default value is 0, which will try the hardware decoder and fall back to the software decoder if there are problems. The most likely problems you will encounter are videos with green lines or crashes. When reporting problems, include any about:config preferences you have changed. Log files collected from aLogcat or adb logcat are also very helpful.
Between 2007 and 2009, NRK carried out a project called the “Programme Bank”, the main goal being to transform its TV production infrastructure to a fully file-based platform with an incorporated MAM system.
Today, the Norwegian broadcaster is running an Interra Systems Baton Enterprise Edition (Windows) system with 28 core licences for its automated file-based QC system.
This article describes the background to the project, the technical details of the Baton system that has been installed, along with NRK’s experiences with the system during the setup and initial phases.
BitTorrent has just posted a call for broadcast engineers to help with the building of BitTorrent Live. BitTorrent Live has been in testing for some time, and the company is now calling for broadcasters to join it in these tests.
BitTorrent Live is a new peer-to-peer live streaming protocol. It allows content creators to scale their reach to audiences of millions with near-zero latencies and minimal infrastructure investment.
“Built with users, from scratch, it’s designed to take the principles of the BitTorrent protocol, and apply them to streaming,” according to BitTorrent’s call to broadcasters, “That means: no barriers to broadcast. That means: the more people who tune in, the more resilient your stream. That means: you can share video with a massive audience, in realtime – without bandwidth costs or infrastructure requirements.
“We’ve been conducting regular tests with users (props to our intrepid volunteers), and have achieved results at swarm sizes of a few thousand. Now, we’re inviting qualified broadcasters like you to help us build something amazing.”
Leveraging the lessons of the original BitTorrent protocol, BitTorrent Live has been designed from scratch as the perfect means of sharing events to the masses in real-time, but without the astronomical bandwidth requirements that traditionally constrain content creators.
Every viewer that joins a swarm extends its reach by sharing pieces of the video to other viewers. Media is delivered with stunningly low delay by utilizing a UDP Screamer protocol. Video and audio are transmitted using the industry standard H.264 and AAC codecs, providing the highest quality.
On the company’s website it says: “BitTorrent Live is still under heavy development, and as such, is available as a technology demonstration only. The ability to run a video source is not currently available to end-users. Once the protocol has been finalised we expect to allow user generation of content. The experience may not be perfect yet, but we strive daily to improve it, and welcome your input and experiences. “
By Robert Briel, Broadband TV News
It seems that the EBU (European Broadcasting Union) has decided it is time to take the bull by the horns over ultra HD.
After all, it is now six years since the first production and transmission of ultra HD was demonstrated by Japanese broadcaster NHK in Tokyo, and still we seem no closer to consensus over standardization of the critical parameters such as image format, frame rate and codec type. There are many options on the table, and convergence has been hampered by confusion over what specifications are required or desirable to deliver the ultimate quality of experience for varying screen sizes.
There are also the constraints of cost and available bandwidth, with future evolution of HD dependent not just on increased network capacity, but also improved compression ratios, which is why the emerging HEVC (High Efficiency Video Coding) is important.
Pressure on bandwidth will come not just over distribution, but also contribution, given that pictures captured for ultra HD at 3840 x 2160, at 300 fps as has been proposed as a unifying figure, would generate streams at 52Gb/s. This will certainly give cause to think again about sending uncompressed raw video as some broadcasters have been doing, and there will be renewed demand for improvements in compression at the contribution stage.
The EBU has attempted to bring order to the mounting chaos of future HD standards, clouded further by the 3-D issue, by setting up its Beyond HD group. But, realizing that it was not much use just debating the future of HD behind closed doors among its European members, the EBU has reached out to manufacturers including Sony and Panasonic, as well as non-members, notably NHK itself.
It met with these three in Geneva recently, along with BSkyB, to discuss issues of harmonization, which has been made more urgent by a clear move among manufacturers to push ahead and market new TVs next year. They are all desperate for products that will raise margins after the relative failure of 3-D so far, while there is a limit to the premium they can charge for smart TVs now that Internet connectivity is almost taken as a given and is not that much of a selling point.
The EBU did well by doing its homework first and thrashing out key issues while identifying what sort of roadmap made sense for HD given the display technologies, compression algorithms and bandwidth that were likely to become available over the next decade. The first task was to define what “Beyond HD” was. For some, it begins with 1080p, since current HD services are normally either 720p or 1080i, which both represent compromises. 720p with progressive scan is optimal for sport and fast-moving action, but sacrifices resolution, which can result in sub-optimal quality for content with a lot of detail but not necessarily fast action, such as art documentaries and some nature programs.
1080i can look juddery for fast action because, with interlacing, every alternate line only changes with every second frame, but it gives higher picture resolution than 720p. So 1080p, with progressive scan, combines the best of both and for most people would be regarded as the pinnacle of quality at present, but it is only starting to be deployed.
However, the EBU decided that the industry was already on the way to 1080p, and so defined “beyond HD” as the future beyond that, likely to emerge over a four- to 10-year time span, starting with some variant of ultra HD at 3840 x 2160 resolution. That is double in each direction, or four times the screen area of 1080p's 1920 x 1080.
But, this then begged the next big question, which was how much resolution is desirable, under what circumstances and how much is affordable in terms of bandwidth or investment.
The first point to note, as the EBU did, is that frame rate has to increase almost in proportion with the resolution, so the toll on bandwidth is even greater than some broadcasters will have originally anticipated.
Frame rate has to increase because as the resolution gets higher, so the jump of picture elements between successive frames becomes more perceptible. Yet, proposed deployments of 1080p are set actually to reduce the frame rate in order to avoid increasing bandwidth too much, which would make the whole exercise pointless. In fact, 1080p needs a frame rate of at least 50 fps, and ultra HD, or 4K as it is often called, will require 100 fps. Some trials have been looking at a higher frame rate of 120 fps for 4K, which immediately introduces a conversion problem if content shot at one rate then has to be displayed on TVs that support another. For this reason, there are proposals to capture video at the high rate of 300 fps, partly because this can be cut readily down to both 120 fps, by dividing by five and then multiplying by two, and also to 100 fps, dividing by 3. It would surely make more sense to standardize on 100 fps, but that remains to be seen.
The next question is over resolution itself, with the starting point being a law called Rayleigh’s criterion, which defines the smallest distance that can be resolved by an imaging system, determined by the wavelength of the light being received and the diameter of the object lens — in this case, the pupil of the human eye. While this varies slightly according to the color content of the image and the individual concerned, it is around 1/60th of a degree. Under normal vision, the viewing angle is within 30° horizontally and rather less vertically, so doing the math, that comes to a maximum of 1800 picture elements across the width of the screen. That is just covered by 1080p with its 1920 pixels across the horizontal.
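The arithmetic behind that figure, using the approximate values quoted above, is simply:

```latex
% Resolvable picture elements across a 30-degree horizontal viewing angle,
% assuming the eye resolves roughly 1/60 of a degree:
\[
N = \frac{\theta_{\text{view}}}{\theta_{\text{min}}}
  = \frac{30^{\circ}}{(1/60)^{\circ}}
  = 1800 \ \text{picture elements}
\]
```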
At first sight, then, there seems to be no call for anything beyond 1080p at all. But, this reckons without the impending revolution in display types, with huge wall-size screens now coming over the horizon. It is true that the smallest angle that can be resolved depends not on the screen size but on the angle of viewing. On that basis, a giant screen viewed from 100ft does not need any more picture elements than, say, a tablet close up, although each pixel would have to be proportionately larger.
In practice, though, large screens do require more picture elements because there are situations where they may be viewed from closer up than the normal optimum distance. For example, wall-sized displays will comprise multiple smaller panels, each of which can function as independent TVs, in which case they will sometimes be viewed from closer range and require smaller picture elements than they otherwise would. Further to that, these large screens will enable immersive viewing where the horizontal viewing angle will be much greater than the current 30° limit. That is why we will need ultra HD.
It may well be, though, that we will not need to go much further, and there may never be a call for the next level up, which is 8K at 7680 x 4320. Or, certainly never beyond that for viewing on two-dimensional screens. But, even then, the bandwidth implications are considerable, and, as the EBU has pointed out, often misunderstood. Even without upping the frame rate, ultra HD at 4K generates 4X as much data as 1080p, which, in turn, is double 720p or 1080i. 8K brings another fourfold increase again, and, if combined with a frame rate of 300 fps as may come to pass in a decade or more, the bandwidth consumed would be 192X greater than current HD services. That is why the EBU talks of the dramatic financial impact of "Beyond HD," which, therefore, will have to be plotted carefully. It is true that for distribution, we have the emerging HEVC, but that will only bring an immediate 50 percent or so improvement in encoding efficiency over H.264. So, while welcome, this will merely provide mild pain relief for congested networks.
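The 192X figure follows directly from multiplying the article's own ratios, taking current HD at roughly 50 fps as the baseline:

```latex
% Bandwidth multiplier relative to current HD (720p/1080i at ~50 fps):
\[
\underbrace{2}_{\text{1080p vs. 720p/1080i}} \times
\underbrace{4}_{\text{4K vs. 1080p}} \times
\underbrace{4}_{\text{8K vs. 4K}} \times
\underbrace{\tfrac{300}{50}}_{\text{frame rate}}
= 32 \times 6 = 192
\]
```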
On top of that, there is scope for increasing the range of colors in line with the higher resolution, and generally the bit depth for each pixel, which would rack up bandwidth further. There is also 3-D (another subject altogether for a further blog perhaps), and then, finally, the EBU talks about “beyond stereo."
There the mind really boggles.
By Philip Hunter, Broadcast Engineering
Many broadcasters know about MXF, and they have heard of things such as MXF for Finished Programs (AS-03) and MXF for Commercials (AS-12). But, this month, I want to focus on MXF’s bigger brother, AAF.
In the mid-1990s, a joint task force was created by the SMPTE and EBU. The purpose was to address the impending flood of digital video proposals. There were a number of different, competing proposals on the table regarding compression, handling of metadata and the exchange of program material as files. Many in the industry were concerned that without a concerted effort, the market would fracture, leaving end users to sort it out. Fortunately, the task force produced a number of recommendations that later led to standards that have helped drive industry consensus about what constitutes interoperable digital video.
The remit of the task force went beyond coming up with recommendations for interoperable digital video formats, however. The final report included in its name “Harmonized Standards for the Exchange of Program Material as Bitstreams.” The group spent a significant amount of time working on something called wrappers. After spirited debate, it was decided that two classes of wrappers should be developed — one for broadcast and one for editing. The group felt that a single wrapper could not accommodate the differing needs of these two application areas. Ultimately, the wrapper for broadcast became MXF, and the wrapper for editing became AAF.
Before we delve fully into AAF, let’s talk about wrappers, what they do and why they are important.
Inside the Wrapper
When using professional digital video, specifically SDI video, the relationship between video and audio is set in the standard. SMPTE 259M, and later SMPTE 292M, for HD, not only specified how video and audio should be streamed, but they also were specific about where additional information such as subtitles and timecode should be carried. Other standards specify exactly how this additional data should be formatted. For manufacturers and users, the world was relatively simple; a stream arrived and was comprised of interleaved pieces of video and audio, along with some “essence data” such as subtitles.
But, in a file-based world, there were many possible ways to exchange the same program material contained in the SDI stream. Should you send a video file, followed by a separate audio file, followed by a data file that told how to play back the two files in sync? Should you send a single file with everything in the same file? Should the video be kept separate from the audio, or should it be mixed together, as in an SDI stream? Where do you put the all-important timecode? And, how do you relate the timecode to the video and audio to which it refers?
Those were just some of the questions that surfaced when we looked at transporting programs as bit streams. The wrapper gave us a way to describe how the video, audio, subtitles, timecode and other “essence” should be packaged together in order to be sent from one place to another. A wrapper can contain video, audio and data essence (subtitles). The concept of a wrapper as a way to organize essence is common in both MXF and AAF. In simple terms, the wrapper contains video, audio and data, along with identifiers that are used to keep track of each essence component.
Today’s media landscape is radically more diverse than just a few years ago. The delivery of consistently acceptable image and sound quality is taken for granted by viewers, despite uncertain or fluctuating bandwidth. Adaptive-Bit-Rate (ABR) streaming technology makes this possible.
What is ABR Streaming?
ABR streaming is a delivery technology designed to provide consistent, high-quality viewing in situations where bandwidth may fluctuate, and where viewers may be on a wide range of devices.
Prior to ABR streaming, Web or mobile video delivery was typically done by encoding a single downloadable file or stream at a fixed bit rate and frame size. Viewers could buffer some of the video, and then simultaneously download and play it back. This delivery model was similar to cable transmission, where a single bit rate is transmitted over a reliable medium.
Unfortunately, transmission mediums for Web and mobile devices are unreliable, and bandwidths vary. During fixed-rate video playback, viewers with low bandwidth suffer from excessive buffering (delaying playback). To compensate, providers have tended to encode at lower bit rates, punishing viewers with high bandwidth. Even then, any fluctuations in bandwidth can cause buffering delays.
To solve this problem, ABR streaming content is encoded into multiple layers, each potentially a different bit rate, frame size and/or frame rate. These layers are combined into a single package that represents the original content. ABR players switch between layers depending upon the device and available bandwidth, to ensure consistent high-quality playback.
For example, a single ABR package might include six layers, each encoded at progressively higher bit rates. As a viewer watches content on his/her mobile phone during a train ride, the player will adaptively switch between low bit rates and high bit rates, depending upon the connectivity of the device.
How Does it Work?
Most ABR streaming technologies use standard Web protocol (HTTP delivery) to send video. This offers advantages over specialized streaming protocols such as RTSP or RTP, as HTTP-based delivery works immediately on Internet networks and can take advantage of edge technologies designed to cache HTTP requests.
During playback, video and audio are delivered via HTTP in small fragments, each representing some small amount of video, typically between 2 and 10 seconds in length. Each content package includes multiple layers, and each layer may include many fragments. For example, an hour-long movie may have 12 layers, each with a thousand fragments. The player is provided with a package manifest file outlining which layers are available and the location of the fragments for each layer.
During playback, the player requests and downloads a fragment from a layer. While the fragment is played, the connection speed is monitored, and the player may opt to switch layers, either increasing or decreasing the video bit rate based upon the connection speed. Players may also choose layers with different frame sizes or frame rates to optimize the visual experience for the device. This adaptive behavior is what ensures consistent playback regardless of connection speed or device.
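As a rough sketch of that behaviour (the manifest shape and the 20% safety margin are illustrative assumptions, not any particular vendor's format or policy), the core logic amounts to measuring throughput while fragments download and then picking the highest layer the connection can sustain:

```typescript
// Illustrative ABR switching logic: pick the highest layer whose bit rate
// fits within the measured connection speed, with some headroom.
interface Layer {
  bitrateBps: number;     // encoded bit rate of this layer
  fragmentUrls: string[]; // one URL per 2-10 second fragment
}

interface PackageManifest {
  layers: Layer[];        // assumed sorted from lowest to highest bit rate
}

function chooseLayer(manifest: PackageManifest, measuredBps: number): Layer {
  const budget = measuredBps * 0.8;  // leave 20% headroom for fluctuations
  let chosen = manifest.layers[0];   // always fall back to the lowest layer
  for (const layer of manifest.layers) {
    if (layer.bitrateBps <= budget) {
      chosen = layer;
    }
  }
  return chosen;
}

// Download one fragment from the chosen layer and measure the connection speed
// while doing so; the measurement feeds the layer choice for the next fragment.
async function playFragment(
  manifest: PackageManifest,
  index: number,
  lastMeasuredBps: number
): Promise<number> {
  const layer = chooseLayer(manifest, lastMeasuredBps);
  const started = performance.now();
  const response = await fetch(layer.fragmentUrls[index]);
  const data = await response.arrayBuffer();
  const seconds = (performance.now() - started) / 1000;
  // Hand `data` to the decoder here; return the new bandwidth estimate in bits per second.
  return (data.byteLength * 8) / seconds;
}
```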
There are several different ABR streaming technologies available: Apple HTTP Live Streaming (HLS), Adobe HTTP Dynamic Streaming (HDS), Microsoft Smooth Streaming (MSS), and more recently MPEG Dynamic Adaptive Streaming over HTTP (MPEG-DASH). Each technology requires a complete ecosystem. The content must be prepared correctly, and the correct player must be used. All of the technologies work fundamentally in the same manner, using HTTP for content delivery in fragments.
Where these technologies differ is largely related to the structure of the underlying packages. For example, HLS for older versions of iOS requires a separate file for each video fragment. In contrast, most other packages store fragments for a layer in a single file, allowing the player to download fragments using HTTP byte range requests, which download a small part of a larger file.
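A byte-range request for one fragment of a single-file layer might look like the following sketch; the URL and offsets are placeholders that would normally come from the package manifest.

```typescript
// Fetch one fragment of a layer stored as a single file, using an HTTP Range request.
async function fetchFragment(
  url: string,
  firstByte: number,
  lastByte: number
): Promise<ArrayBuffer> {
  const response = await fetch(url, {
    headers: { Range: `bytes=${firstByte}-${lastByte}` },
  });
  if (response.status !== 206) { // 206 Partial Content is the expected reply
    throw new Error(`Range request not honoured: ${response.status}`);
  }
  return response.arrayBuffer();
}
```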
Other differences in ABR technology relate to the viewer experience. Apple HLS, for example, provides for a dedicated key frame layer, allowing users to scrub through the video quickly. Other packages allow an audio-only stream with a poster image for extreme low-bit-rate situations.
Preparing Content
Preparing ABR content takes several steps. First, the desired packaging and layer structures need to be identified. Next, content must be encoded, checked for quality, packaged, encrypted and delivered.
Lytro, creator of the world’s first consumer light field camera, announced it will unveil a new light field technology capability for the Lytro camera with Perspective Shift as well as new creative tools with Living Filters. These features will be available to customers starting December 4th via a free Lytro Desktop software update.
Perspective Shift lets Lytro photographers interactively change the point of view in a picture after it has been taken. On a computer or mobile device, viewers can move the living picture in any direction – left, right, up, down and all around.
When pictures are shared to the web, Facebook and Twitter, friends can experience Perspective Shift without needing any special software. Perspective Shift also works retroactively on any light field pictures previously taken with a Lytro camera.
In addition to Perspective Shift, Lytro announced a new way to enhance light field pictures with Living Filters. With a single click, Lytro photographers will be able to apply one of nine interactive filters to their pictures and change the look of the picture based on light field depth. Unlike traditional digital photo filters, Living Filters create additional effects as viewers interact with a picture. Living Filters is a free update to Lytro Desktop and works on all Lytro light field pictures, including retroactively.
New Living Filters:
More information
By Don Kennedy and Ryo Osuga, DigInfo
DASH-JS has been updated to the latest version of the Media Source API and now supports ISO Base Media File Format (IBMFF)-based media segments. Now the latest version (v.23) of Google Chrome supports the Media Source API by default, enabling the playback of WebM and IBMFF media segments.
DASH-JS is a seamless integration of Dynamic Adaptive Streaming over HTTP (DASH) into the Web using the HTML5 video element. Moreover, it is based on JavaScript and uses the Media Source API of Google’s Chrome browser to present a flexible and potentially browser-independent DASH player.
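For readers unfamiliar with the Media Source API that DASH-JS builds on, the basic pattern looks roughly like the sketch below; the codec string and segment URL are placeholders, and a real DASH player would append many segments chosen from a manifest.

```typescript
// Minimal Media Source API pattern: attach a MediaSource to a <video> element
// and append a fetched media segment to a SourceBuffer.
const video = document.querySelector("video") as HTMLVideoElement;
const mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener("sourceopen", async () => {
  // Placeholder codec string for an ISO Base Media File Format (fMP4) segment.
  const sourceBuffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.42E01E"');
  const response = await fetch("segment-1.m4s"); // placeholder segment URL
  const segment = await response.arrayBuffer();
  sourceBuffer.appendBuffer(segment);            // a DASH player repeats this per segment
});
```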
By Stefan Lederer, ITEC
The Blackmagic Cinema Camera is the most Apple-like camera I have ever used; in fact, it is the only camera I would describe that way. Like the iPhone, it stands out on the market as better than the mass-market clones, at least on paper. Is it worth the wait?
When the Blackmagic Cinema Camera appeared out of nowhere back in April, the first thing I thought was “finally a camera designed from scratch for filmmakers that does not cost $15,000” and my second thought was “this is the future, better pre-order it quick”.
The short wait to a July release date now seems like a different era altogether but if ever there was a camera worth being patient over, this is it.
Despite being a niche product, at $3000 the Blackmagic Cinema Camera has Apple-like mass market potential. Apple themselves have seemingly embraced it as a flagship product for Thunderbolt, and earlier this year, during a major product launch, the Blackmagic Cinema Camera was pictured on a keynote slide hooked up to a new MacBook Pro via Thunderbolt. In the same way musicians now use affordable software to mix and record, cinematographers have the Blackmagic Cinema Camera.
DSLR Killer
They also have DSLRs of course, and these are readily available at a huge range of specs and price ranges. Does the Blackmagic Cinema Camera beat them on the image quality front? The short answer is yes.
Even in low light it is a performer, which surprised me. This is genuinely a mini Alexa, and going back to compressed 8-bit 1080p afterwards, with less detail, a less organic look, blotchy noise and all those burnt highlights and crushed blacks, really is shocking the first time you compare a DSLR to the Blackmagic Cinema Camera. This is a big subject, and I am doing a very broad shootout in Berlin with Slashcam at the moment which will be online in a few days. I can’t cover everything at once in one review, so I am spreading stuff out – low light, dynamic range, resolution – it will all become clear. I’ll round up my thoughts about image quality in the image quality section later in this post.