The Economic Impact of Lord of the Rings

Source: OnlineMBA

Digital Film Tour: Canon C300/C100/5D Mark III, Sony FS100/FS700, BMCC, RED Scarlet X

Source: ImpulseKitty Productions

Google's New VP9 Video Technology Reaches Public View

VP9, the successor to Google's VP8 video compression technology at the center of a techno-political controversy, has made its first appearance outside Google's walls.

Google has built VP9 support into Chrome, though only in an early-stage version of the browser for developers. In another change, it also added support for the new Opus audio compression technology that's got the potential to improve voice communications and music streaming on the Internet.

VP9 and Opus are codecs, technology used to encode streams of data into compressed form then decode them later, enabling efficient use of limited network or storage capacity. Peter Beverloo, a developer on Google's Chrome team, pointed out the new codec support in a blog post earlier this month.

Releasing VP9 gives Google a chance to improve video-streaming performance and other aspects of VP8. That's important in competing with today's prevailing video compression technology, H.264, and with its successor, called H.265 or HEVC, which also has the potential to attract broad support across the electronics and computing industries with better compression performance.

Codecs might seem an uninteresting nuts-and-bolts aspect of computing, but they actually ignite fierce debates that pit those who like H.264's convenience and quality against those who like that Google offers VP8 for free use.

H.264 is used in video cameras, Blu-ray discs, YouTube, and more. But most organizations using it must pay patent royalties to a group called MPEG LA that licenses H.264-related patents on behalf of their many owners.

Google has tried to spur adoption of VP8 instead, which it's released for royalty-free use. One major area: online video built into Web pages through the HTML5 standard.

However, VP8 hasn't dented H.264's dominance, and VP8 allies failed in an attempt to specify VP8 as the way to handle online video. As a result, HTML5 video can be invoked in a standard way, but Web developers can't easily be assured that a browser can properly decode the video in question. Internet Explorer and Safari support H.264 video, Firefox and Opera support VP8 video, and Chrome supports both codecs.
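
This codec split is why Web developers typically publish the same clip in more than one encoding and probe the browser before choosing a source. The sketch below is a minimal illustration of that approach, assuming hypothetical file names and codec strings; it is not taken from the article.

```typescript
// Minimal sketch of coping with the H.264/VP8 split in HTML5 video.
// File names and MIME strings are illustrative only.

function pickPlayableSource(video: HTMLVideoElement): string | null {
  // Candidate encodings of the same clip: H.264/AAC in MP4, VP8/Vorbis in WebM.
  const candidates: Array<{ url: string; type: string }> = [
    { url: "clip.mp4",  type: 'video/mp4; codecs="avc1.42E01E, mp4a.40.2"' },
    { url: "clip.webm", type: 'video/webm; codecs="vp8, vorbis"' },
  ];

  for (const c of candidates) {
    // canPlayType returns "", "maybe", or "probably" depending on codec support.
    if (video.canPlayType(c.type) !== "") {
      return c.url;
    }
  }
  return null; // Neither codec is supported; fall back to a plugin or a download link.
}

const video = document.createElement("video");
const src = pickPlayableSource(video);
if (src !== null) {
  video.src = src;
  document.body.appendChild(video);
  void video.play();
}
```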

Google had tried to encourage VP8 adoption by pledging in 2011 to remove H.264 support from Chrome, but it reversed course and left the support in. Mozilla, several of whose members were bitter about Google's reversal, has since begun adapting Firefox so it can use H.264 when the operating system supports it. Windows 7 and 8, Apple's OS X and iOS, and Google's Android all have H.264 support built in.

One cloud that's hung over VP8 is the possibility that others besides Google would demand royalty payments for patented technology it uses. Indeed, MPEG LA asked such organizations to come forward as it considered adding a VP8 licensing program, and it said last year that 12 organizations claimed to have patents essential to VP8 use.

But it's been nearly two years since MPEG LA started seeking VP8-related patents, and the organization still hasn't offered a license.

The VP8 and VP9 codecs have their origins at On2 Technologies, a company Google acquired for $123 million. Google and assorted allies combined VP8 with the freely usable Vorbis audio codec to form a streaming-video technology called WebM.

By Stephen Shankland, CNET

Comprehensive Guide to Rigging Any Camera

An interesting free guide.
By Sareesh Sudhakaran, Wolfcrow

Making Blackmagic Cinema Camera Work for You


Two Worlds Collide: Smooth Streaming Meets Flash Player

Microsoft today announced that it is launching a preview version of a Smooth Streaming plugin for the Open Source Media Framework (OSMF) player. Developers can use Smooth Streaming capabilities in any OSMF-compliant player, as well as Adobe's own Strobe player.

"We are pleased to announce that Windows Azure Media Services team released a preview of Microsoft Smooth Streaming plugin for OSMF," wrote Cenk Dingiloglu, a program manager on the Windows Azure Media Services team, in a Microsoft IIS blog posting. He also provided a link, for developers who want to integrate the plugin, to a set of documents and licensing requirements.

In a series of meetings last Thursday on the Microsoft campus in Redmond, Washington, the Windows Azure Media Services team laid out their strategy on a number of fronts, including the extension of Smooth Streaming client software development kits (SDKs) to embedded devices, iOS devices, and player frameworks.

During one of those Microsoft-sponsored meetings, hosted by Microsoft senior technical evangelist Alex Zambelli, Dingiloglu and Mike Downey discussed the most recent addition of OSMF support, noting that Smooth Streaming and OSMF share similarities when it comes to codecs and the use of the fragmented MP4 (fMP4) file format.

"Support for the same audio and video codecs, H.264 and AAC, respectively," said Dingiloglu, "provides the opportunity to use fMP4, leveraging the best of both the OSMF framework and the Smooth Streaming Client SDK."

The Smooth Streaming plugin will provide some key features of Smooth Streaming, such as on-demand functionality (play, pause, seek, stop), but will also use OSMF built-in API hooks to support two key features: multiple audio language switching and maximum playback quality selection.

OSMF supports late binding, based on its use of fMP4, making multiple languages accessible to the end user without multiplexing every language's audio track into a single Transport Stream, as iOS devices require.

OSMF and Strobe player support also give Microsoft a way onto the Android platform, making it possible for Smooth Streaming content to reach Android-powered smartphones and tablets.

"You can build rich media experiences for Adobe Flash Player endpoints using the same back-end infrastructure you use today to target Smooth Streaming playback to other devices like Win8 store apps, browser and so on," Dingiloglu wrote in the IIS blog post.

Microsoft isn't claiming the new OSMF plugin is ready for prime time quite yet, but I was able to see a working version of Smooth Streaming within an OSMF player during last week's visit.

In fact, one of the more impressive demonstrations was that of a playlist/manifest file that contained both Adobe .f4v files and Microsoft .ism files. The OSMF player seamlessly switched between the two fMP4 file formats, allowing content owners to intermix content from either format for playback.

"As this is a preview release, you're likely to hit issues, have feature requests, or want to provide general feedback," wrote Dingiloglu. "We want to hear it all! Please use the Smooth Streaming plugin for OSMF forum thread to let us know what's working, what isn't, and how we can improve your Smooth Streaming development experience for OSMF applications."

All of this raises the question of how Smooth Streaming relates to MPEG-DASH, the ratified dynamic adaptive streaming standard. Like Adobe, which noted it will continue to develop its own HTTP Dynamic Streaming (HDS) flavor of HTTP-delivered adaptive bitrate streaming, Microsoft sees a benefit in continuing to push the envelope with Smooth Streaming.

The company made it clear that it fully supports DASH, and yet it sees Smooth Streaming as a test bed in which it can continue to innovate for major events like the Olympic Games, which served as a catalyst - over the past three Games - for a number of innovations that now find their way into both Windows Azure Media Services and DASH.

The Smooth Streaming plugin requires browsers supporting Flash Player 10.2 or higher and also requires OSMF 2.0. Microsoft provides licensing details for the Smooth Streaming plugin for interested developers.

By Tim Siglin, StreamingMedia

4K Test Sequences

As professionals in the video industry know, building the best video processing systems takes top-notch engineering and countless hours of testing a wide range of content. Ultra-high resolution 4K video, generally 3840 x 2160, is on the immediate horizon and poised to enter the mainstream. However, bringing 4K to the masses faces an obstacle: a dearth of quality test content.

Elemental decided to remedy this problem, and just in time for the holidays. Remember those classic test sequences from a couple decades ago? We picked the best of the best clips and recreated them using a RED Epic 4K camera. These clips are now available for download, in compressed and uncompressed formats.

Windows Firefox Stiffs Adobe Flash, Plays H.264 YouTube Vids

Users of the Firefox web browser on Windows can now dump Adobe Flash and still watch H.264-encoded videos online. Fresh overnight builds of Firefox 20 will now play footage found on HTML5 websites, such as YouTube and Vimeo, that use the patent-encumbered video codec - without the need for Adobe's oft-criticised plugin, which also handles H.264.

The Mozilla Foundation, which makes Firefox, slipped support for the popular video compression standard into beta-test versions of its browser by drilling into Microsoft's Media Foundation, which does the actual H.264 video decoding.

Mozilla is averse to proprietary codecs because they're typically buried under patents and require a licensing fee. By using the video support built into the operating system, the open-source browser maker can sidestep these constraints.

The codec support is not enabled by default and requires at least Windows Vista, although support isn't there for Windows 8 yet. Official Firefox 20 builds are due to be released in April 2013.

Firefox for Android 4.x already supports H.264, again using the operating system and underlying hardware to decode the video for playback. Mozilla reluctantly added the ability to play the high-definition format on Google's platform in March to compete in the mobile arena.

The organisation had hoped patent-free codecs, such as Google’s VP8, would succeed at the expense of H.264 on the web, but that hasn’t happened. Google acquired VP8 with its $124.6m purchase of On2 Technologies in 2009 and released the codec under a royalty-free licence as part of WebM in May 2010.

However, H.264, which is licensed from the MPEG-LA patent pool, remains the standard for video playback for desktop web browsing and handheld devices.

As Firefox on Android gained support for the codec, Mozilla chief technology officer Brendan Eich wrote at the time: “H.264 is absolutely required right now to compete on mobile. I do not believe that we can reject H.264 content in Firefox on Android or in B2G and survive the shift to mobile.”

By Gavin Clarke, The Register

Verizon Patents Targeted Advertising Method that Determines if Viewers are Laughing, Cuddling, Sleeping or Singing

Verizon has filed a patent application for targeting ads to viewers based on information collected from infrared cameras and microphones that would be able to detect conversations, people, objects and even animals that are near a TV.

If the detection system determines that a couple is arguing, a service provider would be able to send an ad for marriage counseling to a TV or mobile device in the room. If the couple utters words that indicate they are cuddling, they would receive ads for "a romantic getaway vacation, a commercial for a contraceptive, a commercial for flowers," or commercials for romantic movies, Verizon states in the patent application.

For years, technology executives have discussed the possibility of using devices such as Microsoft's Xbox 360 Kinect cameras to target advertising and programming to viewers, taking advantage of the ability to determine whether an adult or child is viewing a program. But Verizon is looking at taking targeted advertising to a new level with its patent application, which is titled "Methods and Systems for Presenting an Advertisement Associated with an Ambient Action of a User."

Similar to the way Google targets ads to Gmail users based on the content of their emails, Verizon proposes scanning conversations of viewers that are within a "detection zone" near their TV, including telephone conversations.

"If detection facility detects one or more words spoken by a user (e.g., while talking to another user within the same room or on the telephone), advertising facility may utilize the one or more words spoken by the user to search for and/or select an advertisement associated with the one or more words," Verizon states in the patent application.

Verizon says the sensors would also be able to determine if a viewer is exercising, eating, laughing, singing, or playing a musical instrument, and target ads to viewers based on their mood. It also could use sensors to determine what type of pets or inanimate objects are in the room.

"If detection facility detects that a user is playing with a dog, advertising facility may select an advertisement associated with dogs (e.g., a dog food commercial, a flea treatment commercial, etc.)," Verizon writes in the patent application.

Several types of sensors could be linked to the targeted advertising system, including 3D imaging devices, thermographic cameras and microphones, according to the patent application.

Verizon also details how it may be able to link smartphones and tablet computers that viewers are using to the detection system.

"If detection facility detects that the user is holding a mobile device, advertising facility may be configured to communicate with the mobile device to direct the mobile device to present the selected advertisement. Accordingly, not only may the selected advertisement be specifically targeted to the user, but it may also be delivered right to the user's hands," Verizon writes in the patent application.

The targeted advertising system is one of the innovations that Verizon could potentially develop through a joint innovation lab it has created with Comcast, Time Warner Cable and Bright House Networks. Earlier this month, Comcast CFO Michael Angelakis said that engineers from Comcast and Verizon have been meeting on the West Coast to work on developing products and services. The innovation lab, which is focused on developing advanced products that take advantage of cable programming and the Verizon Wireless platform and devices, was formed after Comcast and other major MSOs agreed to sell Advanced Wireless Services (AWS) spectrum to Verizon last year.

Officials at Verizon declined to comment about the patent application.

The inventors named on the patent are Verizon Solutions Engineer Brian Roberts, Manager of Convergence Platforms Anthony Lemus, Verizon Wireless Director of Product Design Michael D'Argenio and Verizon Technical Manager Don Relyea. Verizon filed the patent application in May 2011. It was published by the U.S. Patent & Trademark Office on Thursday.

By Steve Donohue, FierceCable

David Wood on High Frame Rates



Google Needs a Strategy for Video on Android Devices

Many content owners who want to get their live event-based streaming content on mobile phones and tablets quickly find out that getting it to Android devices is extremely challenging. Unlike Apple’s iOS platform, Google has yet to provide an easy way to get live video to Android devices, and to date, it hasn’t detailed any strategy for fixing the situation. Many content owners I have spoken with, as well as those who help these content owners encode and distribute their video, are now questioning why they should even continue to go through all the trouble of trying to support Android-based devices at all.

While media companies can always build an app for their event series, most do one-off events and are faced with streaming to the mobile web and reaching their audience using browsers on Android and iOS devices. When Android phones became popular, live video was supported in the mobile browser via Adobe Flash, so digital video professionals with live content to distribute were able to keep doing what they were doing on the desktop. That’s not to say that Flash was perfect; in many cases these desktop players were heavy, containing ad overlays and metadata interaction that had a major impact on the playback quality. To get better quality video playback, some people turned to RTSP delivery. Android touted RTSP as its native live video format until Android 2.3.4 came out, after which that feature no longer worked.

The most effective way to get live video to Android browsers was to make a stripped-down Flash player that didn't demand much from the phones. Video was decoded in software, which drained batteries quickly. It was imperfect, but it functioned well enough. With the introduction of Android 3.0, it looked like HLS support was going to be built in for all future devices, and that has held true — sort of. HLS support doesn't match the specification, and buffering is common. Industry-leading HLS implementations such as those from Cisco and Akamai Technologies will not load on Android devices, so for the most part, content owners went back to Flash. But now Flash isn't available for new Android phones.

Right now, content owners are left in an awkward state if they want to deliver live video to Android browsers. If Flash is present, you can deliver a basic Flash video player. If it is not, you can try to deliver HLS, but the HLS manifests must either be hand-coded or created using Android-specific tools. If the HLS video can play without buffering, you’ll find that there is no way to specify the aspect ratio, so in portrait mode it looks broken. The aspect ratio problem seems to have been fixed in Android 4.1, but it will often crash if you enter video playback in landscape mode and leave in portrait. You can allow the HLS video to open and play in a separate application, but you lose the ability to communicate with the page, and exiting the video dumps users back on their home screens.
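
For readers who have not hand-built one, a master HLS playlist is just a short text manifest listing the available renditions. Below is a rough sketch of generating one; the rendition names, bitrates, and URLs are invented for illustration and are not tied to any particular encoder or to Google's tooling.

```typescript
// Hypothetical helper that emits a minimal HLS master playlist.
// Rendition bitrates, resolutions, and URIs are placeholders.

interface Rendition {
  bandwidth: number;   // peak bits per second advertised to the player
  resolution: string;  // e.g. "640x360"
  uri: string;         // media playlist for this rendition
}

function buildMasterPlaylist(renditions: Rendition[]): string {
  const lines: string[] = ["#EXTM3U"];
  for (const r of renditions) {
    // Keep the tag set small: early Android HLS stacks were picky about extra attributes.
    lines.push(`#EXT-X-STREAM-INF:BANDWIDTH=${r.bandwidth},RESOLUTION=${r.resolution}`);
    lines.push(r.uri);
  }
  return lines.join("\n") + "\n";
}

const playlist = buildMasterPlaylist([
  { bandwidth: 800_000,   resolution: "640x360", uri: "live_360p.m3u8" },
  { bandwidth: 1_400_000, resolution: "960x540", uri: "live_540p.m3u8" },
]);
console.log(playlist);
```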

Content owners can still send the same live video to iOS devices that they could in 2009, and it will play smoothly with little buffering. Live video support for browser-based streaming within Android tablets and phones is a significant challenge with little help available from Google. And with Google still talking about removing H.264 video support in Android, many content owners are wondering why they should even try to support Android any longer.

What’s clear is that Google doesn’t have a strategy to fix the problem, and many content owners and video ecosystem vendors are frustrated. Content owners want to get their live video on as many devices and platforms as possible, and right now, getting it to Android devices is very difficult and costly. Unless Google steps in to solve the problem, don’t expect content owners to continue to try to support Android devices for live video streaming.

By Dan Rayburn, Streaming Media

Complexity in The Digital Supply Chain

Netflix launched in Denmark, Norway, Sweden, and Finland on Oct. 15th. I just returned from a trip to Europe to review the content deliveries with European studios that prepared content for this launch.

This trip reinforced for me that today’s Digital Supply Chain for the streaming video industry is awash in accidental complexity. Fortunately the incentives to fix the supply chain are beginning to emerge. Netflix needs to innovate on the supply chain so that we can effectively increase licensing spending to create an outstanding member experience. The content owning studios need to innovate on the supply chain so that they can develop an effective, permanent, and growing sales channel for digital distribution customers like Netflix. Finally, post production houses have a fantastic opportunity to pivot their businesses to eliminate this complexity for their content owning customers.

Everyone loves Star Trek because it paints a picture of a future that many of us see as fantastic and hopefully inevitable. Warp factor 5 space travel, beamed transport over global distances, and automated food replicators all bring simplicity to the mundane aspects of living and free up the characters to pursue existence on a higher plane of intellectual pursuits and exploration.

The equivalent of Star Trek for the Digital Supply Chain is an online experience for content buyers where they browse available studio content catalogs and make selections for content to license on behalf of their consumers. Once an ‘order’ is completed on this system, the materials (video, audio, timed text, artwork, meta-data) flow into retailers' systems automatically and out to customers in a short and predictable amount of time, 99% of the time. Eliminating today's supply chain complexity will allow all of us to focus on continuing to innovate with production teams to bring amazing new experiences like 3D, 4K video, and many innovations not yet invented to our customers' homes.

We are nowhere close to this supply chain today but there are no fundamental technology barriers to building it. What I am describing is largely what www.netflix.com has been for consumers since 2007, when Netflix began streaming. If Netflix can build this experience for our customers, then conceivably the industry can collaborate to build the same thing for the supply chain. Given the level of cooperation needed, I predict it will take five to ten years to gain a shared set of motivations, standards, and engineering work to make this happen. Netflix, especially our Digital Supply Chain team, will be heavily involved due to our early scale in digital distribution.

To realize the construction of the Starship Enterprise, we need to innovate on two distinct but complementary tracks. They are:

  1. Materials quality: Video, audio, text, artwork, and descriptive meta data for all of the needed spoken languages
  2. B2B order and catalog management: Global online systems to track content orders and to curate content catalogs

Materials Quality
Netflix invested heavily in 2012 in making it easier to deliver high quality video, audio, text, artwork, and meta data to Netflix. We expanded our accepted video formats to include the de facto industry standard of Apple ProRes. We built a new team, Content Partner Operations, to engage content owners and post production houses and mentor their efforts to prepare content for Netflix.

The Content Partner Operations team also began to engage video and audio technology partners to include support for the file formats called out by the Netflix Delivery Specification in the equipment they provide to the industry to prepare and QC digital content. Throughout 2013 you will see the Netflix Delivery Specification supported by a growing list of those equipment manufacturers. Additionally, the Content Partner Operations team will establish a certification process for post production houses' ability to prepare content for Netflix. Content owners that are new to Netflix delivery will be able to turn to any one of many post production houses certified to deliver to Netflix from all of our regions around the world.

Content owners' ability to prepare content for Netflix varies considerably. Those content owners who perform the best are those who understand the lineage of all of the files they send to Netflix. Let me illustrate this ‘lineage’ reference with an example.

There is a movie available for Netflix streaming that was so magnificently filmed, it won an Oscar for Cinematography. It was filmed widescreen in a 2.20:1 aspect ratio but it was available for streaming on Netflix in a modified 4:3 aspect ratio. How can this happen? I attribute this poor customer experience to an industry-wide epidemic of ‘versionitis’. After this film was produced, it was released in many formats. It was released in theaters, mastered for Blu-ray, formatted for in-flight airplane viewing, and formatted for the 4:3 televisions that prevailed in the era of this film. The creation of many versions of the film makes perfect sense, but versioning becomes versionitis when retailers like Netflix neglect to clearly specify which version they want and when content owners don't have a good handle on which versions they have. The first delivery made to Netflix of this film must have been derived from the 4:3 broadcast television cut. Netflix QC initially missed this problem and we put this version up for our streaming customers. We eventually realized our error and issued a re-delivery request to the content owner to receive this film in the original aspect ratio that the filmmakers intended for viewing the film. Versionitis from the initial delivery resulted in a poor customer experience, and then Netflix and the content owner incurred new and unplanned spending to execute new deliveries to fix the customer experience.

Our recent trip to Europe revealed that the common theme of those studios that struggled with delivery was versionitis. They were not sure which cut of video to deliver or if those cuts of video were aligned with language subtitle files for the content. The studios that performed the best have a well established digital archive that avoids versionitis. They know the lineage of all of their video sources and those video files’ alignment with their correlated subtitle files.

There is a link between content owner revenue and content owner delivery skill. Frequently Netflix finds itself looking for opportunities to grow its streaming catalogs quickly with budget dollars that have not yet been allocated. Increasingly the Netflix deal teams are considering the effectiveness of a content owner’s delivery abilities when making those spending decisions. Simply put, content owners who can deliver quickly and without error are getting more licensing revenue from Netflix than those content owners suffering from versionitis and the resulting delivery problems.

B2B Order and Catalog Management
Today Netflix has a set of tools for managing content orders and curating our content catalogs. These tools are internal to our business and we currently engage the industry for delivery tracking through phone calls and emails containing spreadsheets of content data.

We can do a lot better than to engage the industry with spreadsheets attached to email. We will rectify this in the first half of 2013 with the release of the initial versions of our Content Partner Portal. The universal reaction to reviewing our Nordic launch with content owners was that we were showing them great data (timeliness, error rates, etc) about their deliveries but that they need to see such data much more frequently. The Content Partner Portal will allow all of these metrics to be shared in real time with content owner operations teams while the deliveries are happening. We also foresee that the Content Partner Portal will be used by the Netflix deal team to objectively assess the delivery performance of content owners when planning additional spending.

We also see a role for shared industry standards to help with delivery tracking and catalog curation. The EIDR initiative, for identifying content and versions of content, offers the potential for alignment across companies in the Digital Supply Chain. We are building the ability to label titles with EIDR into our new Content Partner Portal.

Final Thoughts
Today’s supply chain is messy and not well suited to help companies in our industry to fully embrace the rapidly growing channel of internet streaming. We are a long way from the Starship Enterprise equivalent of the Digital Supply Chain but the growing global consumer demand for internet streaming clearly provides the incentive to invest together in modernizing the supply chain.

Netflix has many initiatives underway to innovate in developing the supply chain in 2013, some of which were discussed in this post, and we look forward to continuing to collaborate with our content owning partners on supply chain innovation efforts.



By Kevin McEntee, VP Digital Supply Chain, Netflix

Live Streaming of Video and Subtitles with MPEG-DASH

This presentation was made at the MPEG meeting in Shanghai, China, in October 2012, related to the input contribution M26906. It gives the details about the demonstration made during the meeting.

This demonstration showed the use of the Google Chrome browser to display synchronized video and subtitles, using the Media Source Extensions draft specification and the WebVTT subtitle format. The video and DASH content was prepared using the GPAC MP4Box tool.
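
For orientation, the sketch below shows roughly what the browser side of such a demo looks like: fragmented MP4 segments are appended through the Media Source Extensions API while a WebVTT file is attached as a subtitle track. The segment URLs, codec string, and segment count are placeholders, not details from the M26906 contribution.

```typescript
// Rough sketch of a Media Source Extensions player with WebVTT subtitles.
// Segment URLs, codec string, and subtitle file name are made up for illustration.

const video = document.querySelector("video") as HTMLVideoElement;
const mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);

// Subtitles ride alongside the video as a WebVTT text track.
const track = document.createElement("track");
track.kind = "subtitles";
track.src = "subs_en.vtt";
track.srclang = "en";
track.default = true;
video.appendChild(track);

mediaSource.addEventListener("sourceopen", async () => {
  const mime = 'video/mp4; codecs="avc1.42E01E"';
  const sourceBuffer = mediaSource.addSourceBuffer(mime);

  // Initialization segment first, then media segments in order.
  const segments = ["init.mp4", "seg1.m4s", "seg2.m4s", "seg3.m4s"];
  for (const url of segments) {
    const data = await (await fetch(url)).arrayBuffer();
    sourceBuffer.appendBuffer(data);
    // Wait until the buffer has consumed this segment before appending the next one.
    await new Promise<void>((resolve) =>
      sourceBuffer.addEventListener("updateend", () => resolve(), { once: true })
    );
  }
  mediaSource.endOfStream();
});

void video.play();
```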


MXF Archiving & Preservation – AS-07

At the request of the Federal Agencies/Library of Congress, the AMWA is launching a new application specification, AS-07: MXF Archiving & Preservation. This new AS is a vendor-neutral subset of MXF for long-term archiving and preservation of moving image essence and associated materials including audio, still images, captions and metadata.

Netflix Using the Cloud to Do the Heavy Lifting for Video Transcoding

For Netflix vice-president of digital supply chain Kevin McEntee, the US-based video streaming company's shift to using the cloud for transcoding its massive content library comes down to a modern take on the fable of the tortoise and the hare. Or, as he told the audience at the AWS re:Invent conference in Las Vegas last week, it's like a choice between moving a room full of people to another city by using expensive high-performance Ferraris or a fleet of somewhat more humble Toyota Priuses.

When it began its shift away from DVD rentals to Internet-based video streaming, Netflix initially employed a 'Ferrari' approach to dealing with the computationally intensive task of encoding movies and TV shows in a format that could be streamed to client devices.

In 2006/2007 when Netflix began the move to streaming, it found that the video processing technology typically employed in Hollywood centred on ensuring minimal latency: It was optimised for scenarios such as a single video editor mastering a Blu-ray image of a movie: "[It was] optimised for the expensive time of that one operator; essentially the artist sitting there doing that mastering."

"Back in 2006/2007 we hired people out of this industry and we ended up building out a data centre that [was] very Ferrari-like," McEntee said, with custom, GPU-based encoding hardware; "boxes that had custom GPUs that were built specifically to dump video very fast." It was expensive and constrained by the fixed footprint of Netflix's data centre.

However the limitations of this approach became evident in late 2008, when Netflix set out to launch new video players for PCs and Macs, and jumped onto TVs by launching a player on the Xbox in November of that year.

"There was such an amazing lack of standardisation around video streaming those days, and there still is today, that we had to create new formats for those players," McEntee said.

"And at the same time [as] we were innovating in the player space and therefore causing the need for new formats, our content team in LA was licensing more and more content, so our content library during the course of that project also doubled in size. And so we set out re-encode using the hardware farm that we had built."

Unfortunately for Netflix, the hardware didn't deal well with the load, and the company encountered frequent hardware failures. Fans on the custom GPUs being used were too small and "boxes were melting", McEntee said.

"It was really a very frustrating experience and in fact that catalogue re-encode was late and we failed. Basically we launched these players and the catalogue was not complete."

It was reflecting on this experience that caused Netflix to make the move to the Amazon Web Services cloud for transcoding. "If you jump forward a year, we had made the jump to move our transcoding farm into AWS. And we had seen the opportunity in fall 2009 for launching a video player on the [Sony PlayStation 3], so this was our first 100 per cent AWS transcode."

"The player developers again realised they had to rely on a new format; they had to transcode the entire library," McEntee said. The new format was not finalised until three or four weeks before the launch of the new player, but Netflix was "able to spin up enough instances in EC2 to transcode the entire library in about three weeks" and managed to meet the deadline.

This is where the Ferrari versus Prius metaphor comes in. In McEntee's (somewhat elaborate) analogy, individual Ferraris offer great performance, but are expensive to buy, expensive to repair and available in limited numbers; whereas the hypothetical Prius fleet won't be quite so swift, but can be rented for a lot less than it would cost to buy Ferraris, repairs are someone else's problem and they're available in large numbers.

As McEntee explained, "By moving to the cloud while ... one encode was slower the overall throughput of the whole system was much, much faster." It's a question of "thinking horizontal, not vertical," he said, with an architecture that isn't optimised for latency but for overall throughput.

Netflix "haven't really missed deadlines" since the shift away from relying on in-house hardware for transcoding, he said. But, "even more than not missing deadlines, this change has actually created opportunities for the business."

His favourite example is from February 2010, when Apple approached Netflix about the impending iPad launch. Cupertino told Netflix it wanted the company to be part of the launch - which meant yet another video format had to be supported. Using its cloud-based approach to transcoding meant that Netflix was able to have its entire content library available for the April iPad launch.

"This is an opportunity we didn't anticipate when we set out to do the AWS project, but what we found is that having this ability to scale the whole system quickly without doing any purchasing or building out a data centre ourselves really just made the business very nimble and you really can't put a price on nimble, especially in a business that's moving as fast as Netflix," McEntee said.

Netflix's expansion into non-US territories - Canada in 2010, Latin American countries in 2011, and a number of European countries earlier this year - involved building up new content catalogues specific to each licensing territory, meaning a lot more transcoding using the cloud in order to meet fixed launch deadlines.

Netflix currently uses a media processing pipeline dubbed Matrix. Content partners such as movie studios deliver content to Netflix, with the video streaming company employing Aspera's "Direct-to-S3" service to house it in Amazon's Simple Storage Service (S3).

Netflix then uses technology from start-up eyeIO and Amazon's EC2 service to transcode the source material received from the studios into multiple formats that can be streamed to the range of devices supported by the company. The results are stored in S3, before being sent to Netflix's CDN for streaming; Netflix creates multiple versions of each movie or TV show episode to stream to devices ranging from TVs to tablets to gaming consoles. The transcoding farm uses 6000-6500 EC2 instances.

The company is currently working on a successor for Matrix, dubbed Maple. Instead of using Matrix's approach of processing an entire piece of content at once, Maple will break videos up into five-minute chunks, each of which will be processed by a separate EC2 instance. McEntee said that the advantages include being more fault-tolerant - currently a job may fail mid-way through transcoding and have to be restarted from scratch - and the ability to deliver content faster in those cases where Netflix has an agreement to begin screening content the day after it first aired.
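
Netflix has not published Maple's internals, but the core idea of chunked, parallel transcoding is easy to illustrate. The toy sketch below cuts a source file into fixed five-minute pieces and encodes them concurrently, with local ffmpeg processes standing in for separate EC2 instances; all file names, codecs, and flags are assumptions for the example only.

```typescript
// Toy illustration of chunked transcoding: cut the source into fixed-length pieces and
// transcode each piece independently, so a failed piece can be retried without redoing
// the whole title. Paths, codecs, and flags are illustrative.

import { execFile } from "node:child_process";
import { promisify } from "node:util";

const run = promisify(execFile);
const CHUNK_SECONDS = 300; // five-minute chunks, as in the Maple description

async function transcodeChunk(source: string, index: number): Promise<string> {
  const output = `chunk_${index}.mp4`;
  await run("ffmpeg", [
    "-ss", String(index * CHUNK_SECONDS), // seek to the chunk start
    "-t", String(CHUNK_SECONDS),          // encode one chunk's worth of video
    "-i", source,
    "-c:v", "libx264", "-c:a", "aac",
    "-y", output,
  ]);
  return output;
}

async function transcodeTitle(source: string, durationSeconds: number): Promise<string[]> {
  const chunkCount = Math.ceil(durationSeconds / CHUNK_SECONDS);
  const jobs = Array.from({ length: chunkCount }, (_, i) => transcodeChunk(source, i));
  // Promise.all fans the chunks out in parallel; a real pipeline would cap concurrency
  // and retry individual failures instead of failing the whole title.
  return Promise.all(jobs);
}

transcodeTitle("source_master.mov", 5400)
  .then((chunks) => console.log("encoded", chunks.length, "chunks"))
  .catch((err) => console.error("transcode failed:", err));
```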

Netflix is also working on a 'digital vault' that can house video masters and secondary assets, such as audio in different languages, that could be delivered to both its systems and those of its competitors in the video streaming space.


Visual Effects Society Releases Cinematic Color White Paper

The Visual Effects Society Technology Committee announced the release of its white paper, “Cinematic Color: From Your Monitor to the Big Screen.” The white paper, intended for computer graphic artists and software developers interested in color management, introduces techniques currently in use at major production facilities.

The document draws attention to challenges that are not covered in traditional color-management textbooks or online resources, and are often passed along only by word of mouth, user forums, or scripts copied between facilities.

The 54-page white paper contains text, diagrams, tables and images that address:

  • Technical issues that can arise in handling texture painting, lighting, rendering, compositing and image display in the theater.

  • Color science, color encoding and scene-referred and display-referred colorimetry; extending these concepts to their use in modern motion picture color management.

  • Recent efforts on digital color standardization in the motion picture industry (ACES and CDL), and how to experiment with these concepts for free using open-source software (OpenColorIO).

The white paper was authored by Jeremy Selan and reviewed by members of the VES Technology Committee, including Rob Bredow, Dan Candela, Nick Cannon, Ray Feeney, Andy Hendrickson, Gautham Krishnamurti, Sam Richards, Jordan Soles and Sebastian Sylwan.
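
As a quick illustration of the scene-referred versus display-referred distinction the paper covers, the snippet below encodes a linear scene-referred value with the sRGB transfer curve to obtain a display-referred one. It is a toy example chosen for brevity, not a workflow the white paper prescribes.

```typescript
// A scene-referred value is linear light; a display-referred value has a transfer
// function baked in for a particular output. sRGB encoding is one concrete example.

// Encode a linear (scene-referred) value in [0, 1] with the piecewise sRGB curve.
function linearToSRGB(linear: number): number {
  const v = Math.min(Math.max(linear, 0), 1);
  return v <= 0.0031308 ? 12.92 * v : 1.055 * Math.pow(v, 1 / 2.4) - 0.055;
}

// 18% grey in scene-referred linear light lands well above 0.18 once display-encoded,
// which is why linear renders look dark on a monitor without conversion.
console.log(linearToSRGB(0.18).toFixed(3)); // ≈ 0.461
```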

Please visit Cinematic Color for additional resources related to motion picture color management.

Canon Cinema EOS Lens & Camera Charts

Canon launched its Cinema EOS product line last year with the C300; since then, it has expanded to include the EOS-1DC, C100, and C500. Each camera fills a different need and production environment, from B-cameras to documentaries to feature films.

To help you get a better idea of each camera’s features and see how they compare to each other, we’ve put together this Cinema EOS Camera Lineup chart. We’ve included information on sensor size, internal codecs, recording capabilities and more. You can click on the image below to see a larger version, or you can download a pdf version.


Canon’s EF lenses have had a large following in the HDSLR world, where they’re known for producing pleasing skin tones and a nice gradation to the blacks. In more recent years, their popularity has extended into the cinema world, with many filmmakers seeking out special adapters and rigs to use the photo lenses in a production environment.

Now, Canon has created a line of Cinema EOS lenses that feature the trademark “Canon look,” but are designed specially for professional production work. So, to complement the camera chart above, we’ve also put together the Cinema EOS Lens Lineup chart. The chart includes details on mounts, minimum object distance, image circle coverage and more – click on the chart to see a larger version or download the pdf.

By Claire Orpeza, AbelCine

SDI Over IP

As bitrates increase and equipment prices drop, IP-based communication technologies – both fixed and wireless alike – are pushing more and more dedicated communication systems into retirement. The sheer number of connectors currently found on professional video cameras makes some of the advantages of an IP/Ethernet-based solution obvious.

Also, the number of HD-SDI signals that you can squeeze into a 10GigE or even 100GigE line is a convincing argument for medium-term migration to IP. With SMPTE 2022-6, the wrapping of all SDI formats within IP will be defined, but previous wrappings were never used to actually do anything in the IP layer – they were just used as a transparent channel.
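
To make that argument concrete, here is a back-of-the-envelope count of how many nominal-rate SDI signals fit into a 10 GigE or 100 GigE link. It ignores the SMPTE 2022-6/RTP/UDP/IP/Ethernet encapsulation overhead, so real figures are somewhat lower.

```typescript
// Rough capacity check: nominal SDI bitrates versus Ethernet link rates,
// ignoring encapsulation overhead.

const SDI_RATES_GBPS: Record<string, number> = {
  "HD-SDI (SMPTE 292M)": 1.485,
  "3G-SDI (SMPTE 424M)": 2.97,
};

function signalsPerLink(linkGbps: number): void {
  for (const [name, rate] of Object.entries(SDI_RATES_GBPS)) {
    console.log(`${name}: ${Math.floor(linkGbps / rate)} signals in ${linkGbps} GigE`);
  }
}

signalsPerLink(10);  // ~6 HD-SDI or ~3 3G-SDI signals, before overhead
signalsPerLink(100); // ~67 HD-SDI or ~33 3G-SDI signals, before overhead
```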

This article outlines the steps and required mechanisms to go “all-IP” and leverage the possibilities that come with it. The first step to take is to achieve seamless switching between signals in the IP layer, which was implemented at the IRT as a software-based Proof of Concept, followed by a novel approach regarding multicast signal distribution within a network. The applications made possible by these features are only limited by your imagination.

YouTube Space

YouTube Space LA is a brand new, state of the art production facility in Los Angeles, CA, designed specifically for YouTube creators to produce original digital video content, from an idea through editing and uploading to YouTube.

The YouTube Space LA is a creative production facility for both established and emerging YouTube content creators who are part of the YouTube Partner Program. At the YouTube Space LA, YouTube creators can learn from industry experts, collaborate with other creators and have access to the latest production and post-production digital video equipment.



There is another YouTube Space in London: