The Economic Impact of Lord of the Rings




Source: OnlineMBA

Digital Film Tour: Canon C300/C100/5D Mark III, Sony FS100/FS700, BMCC, RED Scarlet X




Source: ImpulseKitty Productions

Google's New VP9 Video Technology Reaches Public View

VP9, the successor to Google's VP8 video compression technology at the center of a techno-political controversy, has made its first appearance outside Google's walls.

Google has built VP9 support into Chrome, though only in an early-stage version of the browser for developers. In another change, it also added support for the new Opus audio compression technology, which has the potential to improve voice communications and music streaming on the Internet.

VP9 and Opus are codecs, technology used to encode streams of data into compressed form and then decode them later, enabling efficient use of limited network or storage capacity. Peter Beverloo, a developer on Google's Chrome team, pointed out the new codec support in a blog post earlier this month.

Releasing VP9 gives Google a chance to improve video-streaming performance and other aspects of VP8. That's important in competing with today's prevailing video compression technology, H.264, and with a successor called H.265 or HEVC that also has the potential to attract broad support across the electronics and computing industry with better compression performance.

Codecs might seem an uninteresting nuts-and-bolts aspect of computing, but they actually ignite fierce debates that pit those who like H.264's convenience and quality against those who like that Google offers its VP codecs for free use.

H.264 is used in videocameras, Blu-Ray discs, YouTube, and more. But most organizations using it must pay patent royalties to a group called MPEG LA that licenses H.264-related patents on behalf of their many owners.

Google has tried to spur adoption of VP8 instead, which it's released for royalty-free use. One major area: online video built into Web pages through the HTML5 standard.

However, VP8 hasn't dented H.264's dominance, and VP8 allies failed in an attempt to specify VP8 as the way to handle online video. As a result, HTML5 video can be invoked in a standard way, but Web developers can't easily be assured that a browser can properly decode the video in question. Internet Explorer and Safari support H.264 video, Firefox and Opera support VP8 video, and Chrome supports both codecs.
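To make the fragmentation concrete, here is a minimal Python sketch of the kind of capability table a video site has to consult before deciding which encoding to serve. The table reflects the browser support described above (late 2012); the selection function and its fallback behaviour are hypothetical, and a real site would detect support in the browser itself (for example via HTML5 canPlayType) rather than by browser name.

# Hypothetical sketch: which HTML5 video codec to serve per browser,
# based on the support matrix described in the article (late 2012).
HTML5_VIDEO_SUPPORT = {
    "Internet Explorer": {"h264"},
    "Safari": {"h264"},
    "Firefox": {"vp8"},
    "Opera": {"vp8"},
    "Chrome": {"h264", "vp8"},
}

def pick_source(browser, available):
    """Return the first available encoding the browser can decode, else None."""
    supported = HTML5_VIDEO_SUPPORT.get(browser, set())
    for codec in available:
        if codec in supported:
            return codec
    return None  # fall back to a plugin such as Flash, or show an error

if __name__ == "__main__":
    print(pick_source("Firefox", ["h264", "vp8"]))  # -> "vp8"
    print(pick_source("Safari", ["vp8"]))           # -> None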

Google had tried to encourage VP8 adoption by pledging in 2011 to remove H.264 support from Chrome, but it reversed course and left the support in. Mozilla, several of whose members were bitter about Google's reversal, has since begun adapting Firefox so it can use H.264 when the operating system supports it. Windows 7 and 8, Apple's OS X and iOS, and Google's Android all have H.264 support built in.

One cloud that's hung over VP8 is the possibility that others besides Google would demand royalty payments for patented technology it uses. Indeed, MPEG LA asked such organizations to come forward as it considered adding a VP8 licensing program, and it said last year that 12 organizations had claimed to hold patents essential to VP8 use.

But it's been nearly two years since MPEG LA started seeking VP8-related patents, and the organization still hasn't offered a license.

The VP8 and VP9 codecs have their origins at On2 Technologies, a company Google acquired for $123 million. Google and assorted allies combined VP8 with the freely usable Vorbis audio codec to form a streaming-video technology called WebM.

By Stephen Shankland, CNET

Comprehensive Guide to Rigging Any Camera

An interesting free guide.
By Sareesh Sudhakaran, Wolfcrow

Making Blackmagic Cinema Camera Work for You


Two Worlds Collide: Smooth Streaming Meets Flash Player

Microsoft today announced that it is launching a preview version of a Smooth Streaming plugin for the Open Source Media Framework (OSMF) player. Developers can use Smooth Streaming capabilities in any OSMF-compliant player, as well as Adobe's own Strobe player.

"We are pleased to announce that Windows Azure Media Services team released a preview of Microsoft Smooth Streaming plugin for OSMF," wrote Cenk Dingiloglu, a program manager on the Windows Azure Media Services team, in a Microsoft IIS blog posting. He also provided a link, for developers who want to integrate the plugin, to a set of documents and licensing requirements.

In a series of meetings last Thursday on the Microsoft campus in Redmond, Washington, the Windows Azure Media Services team laid out their strategy on a number of fronts, including the extension of Smooth Streaming client software development kits (SDKs) to embedded devices, iOS devices, and player frameworks.

During one of those Microsoft-sponsored meetings, hosted by Microsoft senior technical evangelist Alex Zambelli, Dingiloglu and Mike Downey discussed the recent addition of OSMF support, noting that Smooth Streaming and OSMF share similarities when it comes to codecs and the use of the fragmented MP4 (fMP4) file format.

"Support for the same audio and video codecs, H.264 and AAC, respectively," said Dingiloglu, "provides the opportunity to use fMP4, leveraging the best of both the OSMF framework and the Smooth Streaming Client SDK."

The Smooth Streaming plugin will provide some key features of Smooth Streaming, such as on-demand functionality (play, pause, seek, stop), but will also use OSMF built-in API hooks to support two key features: multiple audio language switching and maximum playback quality selection.

OSMF supports late binding, based on its use of fMP4, allowing multiple languages to be accessible to the end user without requiring all possible languages' audio tracks to be multiplexed together into a single Transport Stream, the way that iOS devices require.

OSMF and Strobe player support also provide Microsoft a way onto the Android platform, making it possible for Smooth Streaming content to reach Android-powered smartphones and tablets.

"You can build rich media experiences for Adobe Flash Player endpoints using the same back-end infrastructure you use today to target Smooth Streaming playback to other devices like Win8 store apps, browser and so on," Dingiloglu wrote in the IIS blog post.

Microsoft isn't claiming the new OSMF plugin is ready for prime time quite yet, but I was able to see a working version of Smooth Streaming within an OSMF player during last week's visit.

In fact, one of the more impressive demonstrations was that of a playlist/manifest file that contained both Adobe .f4v files and Microsoft .ism files. The OSMF player seamlessly switched between the two fMP4 file formats, allowing content owners to intermix content from either format for playback.

"As this is a preview release, you're likely to hit issues, have feature requests, or want to provide general feedback," wrote Dingiloglu. "We want to hear it all! Please use the Smooth Streaming plugin for OSMF forum thread to let us know what's working, what isn't, and how we can improve your Smooth Streaming development experience for OSMF applications."

All of this raises the question of how Smooth Streaming relates to MPEG-DASH, the ratified dynamic adaptive streaming standard. Like Adobe, which has noted it will continue to develop its own HTTP Dynamic Streaming (HDS) flavor of HTTP-delivered adaptive bitrate streaming, Microsoft sees a benefit in continuing to push the envelope with Smooth Streaming.

The company made it clear that it fully supports DASH, and yet it sees Smooth Streaming as a test bed in which it can continue to innovate for major events like the Olympic Games, which served as a catalyst - over the past three Games - for a number of innovations that now find their way into both Windows Azure Media Services and DASH.

The Smooth Streaming plugin requires browsers supporting Flash Player 10.2 or higher and also requires OSMF 2.0. Microsoft provides licensing details for the Smooth Streaming plugin for interested developers.

By Tim Siglin, StreamingMedia

4K Test Sequences

As professionals in the video industry know, building the best video processing systems takes top-notch engineering and countless hours of testing a wide range of content. Ultra-high resolution 4K video, generally 3840 x 2160, is on the immediate horizon and poised to enter the mainstream. However, bringing 4K to the masses faces an obstacle: a dearth of quality test content.

Elemental decided to remedy this problem, and just in time for the holidays. Remember those classic test sequences from a couple decades ago? We picked the best of the best clips and recreated them using a RED Epic 4K camera. These clips are now available for download, in compressed and uncompressed formats.

Windows Firefox Stiffs Adobe Flash, Plays H.264 YouTube Vids

Users of the Firefox web browser on Windows can now dump Adobe Flash and still watch H.264-encoded videos online. Fresh overnight builds of Firefox 20 will now play footage found on HTML5 websites, such as YouTube and Vimeo, that use the patent-encumbered video codec - without the need for Adobe's oft-criticised plugin, which also handles H.264.

The Mozilla Foundation, which makes Firefox, slipped support for the popular video compression standard into beta-test versions of its browser by drilling into Microsoft's Media Foundation, which does the actual H.264 video decoding.

Mozilla is averse to proprietary codecs because they're typically buried under patents and require a licensing fee. By using the video support built into the operating system, the open-source browser maker can sidestep these constraints.

The codec support is not enabled by default and requires at least Windows Vista, although support isn't there for Windows 8 yet. Official Firefox 20 builds are due to be released in April 2013.

Firefox for Android 4.x already supports H.264, again using the operating system and underlying hardware to decode the video for playback. Mozilla reluctantly added the ability to play the high-definition format on Google's platform in March to compete in the mobile arena.

The organisation had hoped patent-free codecs, such as Google's VP8, would succeed at the expense of H.264 on the web, but that hasn't happened. Google acquired VP8 in 2009 with its purchase of On2 Technologies for $124.6m and released it under a royalty-free licence, as part of WebM, in May 2010.

However, H.264, which is licensed from the MPEG-LA patent pool, remains the standard for video playback for desktop web browsing and handheld devices.

As Firefox on Android gained support for the codec, Mozilla chief technology officer Brendan Eich wrote at the time: “H.264 is absolutely required right now to compete on mobile. I do not believe that we can reject H.264 content in Firefox on Android or in B2G and survive the shift to mobile.”

By Gavin Clarke, The Register

Verizon Patents Targeted Advertising Method that Determines if Viewers are Laughing, Cuddling, Sleeping or Singing

Verizon has filed a patent application for targeting ads to viewers based on information collected from infrared cameras and microphones that would be able to detect conversations, people, objects and even animals that are near a TV.

If the detection system determines that a couple is arguing, a service provider would be able to send an ad for marriage counseling to a TV or mobile device in the room. If the couple utters words that indicate they are cuddling, they would receive ads for "a romantic getaway vacation, a commercial for a contraceptive, a commercial for flowers," or commercials for romantic movies, Verizon states in the patent application.

For years, technology executives have discussed the possibility of using devices such as Microsoft's Xbox 360 Kinect cameras to target advertising and programming to viewers, taking advantage of the ability to determine whether an adult or child is viewing a program. But Verizon is looking at taking targeted advertising to a new level with its patent application, which is titled "Methods and Systems for Presenting an Advertisement Associated with an Ambient Action of a User."

Similar to the way Google targets ads to Gmail users based on the content of their emails, Verizon proposes scanning conversations of viewers that are within a "detection zone" near their TV, including telephone conversations.

"If detection facility detects one or more words spoken by a user (e.g., while talking to another user within the same room or on the telephone), advertising facility may utilize the one or more words spoken by the user to search for and/or select an advertisement associated with the one or more words," Verizon states in the patent application.

Verizon says the sensors would also be able to determine if a viewer is exercising, eating, laughing, singing, or playing a musical instrument, and target ads to viewers based on their mood. It also could use sensors to determine what type of pets or inanimate objects are in the room.

"If detection facility detects that a user is playing with a dog, advertising facility may select an advertisement associated with dogs (e.g., a dog food commercial, a flea treatment commercial, etc.)," Verizon writes in the patent application.

Several types of sensors could be linked to the targeted advertising system, including 3D imaging devices, thermographic cameras and microphones, according to the patent application.

Verizon also details how it may be able link smartphones and tablet computers that viewers are using to the detection system.

"If detection facility detects that the user is holding a mobile device, advertising facility may be configured to communicate with the mobile device to direct the mobile device to present the selected advertisement. Accordingly, not only may the selected advertisement be specifically targeted to the user, but it may also be delivered right to the user's hands," Verizon writes in the patent application.

The targeted advertising system is one of the innovations that Verizon could potentially develop through a joint innovation lab it has created with Comcast, Time Warner Cable and Bright House Networks. Earlier this month, Comcast CFO Michael Angelakis said that engineers from Comcast and Verizon have been meeting on the West Coast to work on developing products and services. The innovation lab, which is focused on developing advanced products that take advantage of cable programming and the Verizon Wireless platform and devices, was formed after Comcast and other major MSOs agreed to sell Advanced Wireless Services (AWS) spectrum to Verizon last year.

Officials at Verizon declined to comment about the patent application.

The inventors named on the patent are Verizon Solutions Engineer Brian Roberts, Manager of Convergence Platforms Anthony Lemus, Verizon Wireless Director of Product Design Michael D'Argenio and Verizon Technical Manager Don Relyea. Verizon filed the patent application in May 2011. It was published by the U.S. Patent & Trademark Office on Thursday.

By Steve Donohue, FierceCable

David Wood on High Frame Rates



Google Needs a Strategy for Video on Android Devices

Many content owners who want to get their live event-based streaming content on mobile phones and tablets quickly find out that getting it to Android devices is extremely challenging. Unlike Apple’s iOS platform, Google has yet to provide an easy way to get live video to Android devices, and to date, it hasn’t detailed any strategy for fixing the situation. Many content owners I have spoken with, as well as those who help these content owners encode and distribute their video, are now questioning why they should even continue to go through all the trouble of trying to support Android-based devices at all.

While media companies can always build an app for their event series, most do one-off events and are faced with streaming to the mobile web and reaching their audience using browsers on Android and iOS devices. When Android phones became popular, live video was supported in the mobile browser via Adobe Flash, so digital video professionals with live content to distribute were able to keep doing what they were doing on the desktop. That’s not to say that Flash was perfect; in many cases these desktop players were heavy, containing ad overlays and metadata interaction that had a major impact on the playback quality. To get better quality video playback, some people turned to RTSP delivery. Android touted RTSP as its native live video format until Android 2.3.4 came out, after which that feature no longer worked.

The most effective way to get live video to Android browsers was to make a stripped-down Flash player that didn’t demand much from the phones. Video was decoded in software, which drained batteries quickly. It was imperfect, but it functioned well enough. With the introduction of Android 3.0, it looked like HLS support was going to be built in for all future devices, and that has held true — sort of. HLS support doesn’t match the specification, and buffering is common. Industry-leading HLS implementations such as those from Cisco and Akamai Technologies will not load on Android devices, so for the most part, content owners went back to Flash. But now Flash isn’t available for new Android phones.

Right now, content owners are left in an awkward state if they want to deliver live video to Android browsers. If Flash is present, you can deliver a basic Flash video player. If it is not, you can try to deliver HLS, but the HLS manifests must either be hand-coded or created using Android-specific tools. If the HLS video can play without buffering, you’ll find that there is no way to specify the aspect ratio, so in portrait mode it looks broken. The aspect ratio problem seems to have been fixed in Android 4.1, but it will often crash if you enter video playback in landscape mode and leave in portrait. You can allow the HLS video to open and play in a separate application, but you lose the ability to communicate with the page, and exiting the video dumps users back on their home screens.
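Because those manifests often end up hand-written, it helps to see how little is actually in one. The sketch below builds a minimal HLS master playlist in Python; the bitrates, resolutions and URLs are made-up examples, not anything from the article.

# Minimal sketch of hand-building an HLS master playlist (.m3u8).
# The renditions below are hypothetical examples.
renditions = [
    # (bandwidth in bits/s, resolution, URL of the media playlist)
    (400_000, "400x224", "http://example.com/live/low/index.m3u8"),
    (800_000, "640x360", "http://example.com/live/mid/index.m3u8"),
    (1_400_000, "1280x720", "http://example.com/live/high/index.m3u8"),
]

lines = ["#EXTM3U"]
for bandwidth, resolution, url in renditions:
    lines.append(f"#EXT-X-STREAM-INF:BANDWIDTH={bandwidth},RESOLUTION={resolution}")
    lines.append(url)

with open("master.m3u8", "w") as f:
    f.write("\n".join(lines) + "\n")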

Content owners can still send the same live video to iOS devices that they could in 2009, and it will play smoothly with little buffering. Live video support for browser-based streaming within Android tablets and phones is a significant challenge with little help available from Google. And with Google still talking about removing H.264 video support in Android, many content owners are wondering why they should even try to support Android any longer.

What’s clear is that Google doesn’t have a strategy to fix the problem, and many content owners and video ecosystem vendors are frustrated. Content owners want to get their live video on as many devices and platforms as possible, and right now, getting it to Android devices is very difficult and costly. Unless Google steps in to solve the problem, don’t expect content owners to continue to try to support Android devices for live video streaming.

By Dan Rayburn, Streaming Media

Complexity in The Digital Supply Chain

Netflix launched in Denmark, Norway, Sweden, and Finland on Oct. 15th. I just returned from a trip to Europe to review the content deliveries with European studios that prepared content for this launch.

This trip reinforced for me that today’s Digital Supply Chain for the streaming video industry is awash in accidental complexity. Fortunately the incentives to fix the supply chain are beginning to emerge. Netflix needs to innovate on the supply chain so that we can effectively increase licensing spending to create an outstanding member experience. The content owning studios need to innovate on the supply chain so that they can develop an effective, permanent, and growing sales channel for digital distribution customers like Netflix. Finally, post production houses have a fantastic opportunity to pivot their businesses to eliminate this complexity for their content owning customers.

Everyone loves Star Trek because it paints a picture of a future that many of us see as fantastic and hopefully inevitable. Warp factor 5 space travel, beamed transport over global distances, and automated food replicators all bring simplicity to the mundane aspects of living and free up the characters to pursue existence on a higher plane of intellectual pursuits and exploration.

The equivalent of Star Trek for the Digital Supply Chain is an online experience for content buyers where they browse available studio content catalogs and make selections for content to license on behalf of their consumers. Once an ‘order’ is completed on this system, the materials (video, audio, timed text, artwork, metadata) flow into retailers’ systems automatically and out to customers in a short and predictable amount of time, 99% of the time. Eliminating today’s supply chain complexity will allow all of us to focus on continuing to innovate with production teams to bring amazing new experiences like 3D, 4K video, and many innovations not yet invented to our customers’ homes.

We are nowhere close to this supply chain today but there are no fundamental technology barriers to building it. What I am describing is largely what www.netflix.com has been for consumers since 2007, when Netflix began streaming. If Netflix can build this experience for our customers, then conceivably the industry can collaborate to build the same thing for the supply chain. Given the level of cooperation needed, I predict it will take five to ten years to gain a shared set of motivations, standards, and engineering work to make this happen. Netflix, especially our Digital Supply Chain team, will be heavily involved due to our early scale in digital distribution.

To realize the construction of the Starship Enterprise, we need to innovate on two distinct but complementary tracks. They are:

  1. Materials quality: Video, audio, text, artwork, and descriptive meta data for all of the needed spoken languages
  2. B2B order and catalog management: Global online systems to track content orders and to curate content catalogs

Materials Quality
Netflix invested heavily in 2012 in making it easier to deliver high quality video, audio, text, artwork, and metadata to Netflix. We expanded our accepted video formats to include the de facto industry standard of Apple ProRes. We built a new team, Content Partner Operations, to engage content owners and post production houses and mentor their efforts to prepare content for Netflix.

The Content Partner Operations team also began to engage video and audio technology partners to include support for the file formats called out by the Netflix Delivery Specification in the equipment they provide to the industry to prepare and QC digital content. Throughout 2013 you will see the Netflix Delivery Specification supported by a growing list of those equipment manufacturers. Additionally, the Content Partner Operations team will establish a certification process for post production houses’ ability to prepare content for Netflix. Content owners that are new to Netflix delivery will be able to turn to any one of many post production houses certified to deliver to Netflix from all of our regions around the world.

Content owners’ ability to prepare content for Netflix varies considerably. Those content owners who perform the best are those who understand the lineage of all of the files they send to Netflix. Let me illustrate this ‘lineage’ reference with an example.

There is a movie available for Netflix streaming that was so magnificently filmed, it won an Oscar for Cinematography. It was filmed widescreen in a 2.20:1 aspect ratio, but it was available for streaming on Netflix in a modified 4:3 aspect ratio. How can this happen? I attribute this poor customer experience to an industry-wide epidemic of ‘versionitis’. After this film was produced, it was released in many formats: it was released in theaters, mastered for Blu-ray, formatted for in-flight viewing on airplanes, and formatted for the 4x3 televisions that prevailed in the era of this film. The creation of many versions of the film makes perfect sense, but versioning becomes versionitis when retailers like Netflix neglect to clearly specify which version they want and when content owners don’t have a good handle on which versions they have. The first delivery made to Netflix of this film must have been derived from the 4x3 broadcast television cut. Netflix QC initially missed this problem and we put this version up for our streaming customers. We eventually realized our error and issued a re-delivery request to the content owner so that we could receive this film in the original aspect ratio the filmmakers intended. Versionitis in the initial delivery resulted in a poor customer experience, and then Netflix and the content owner incurred new and unplanned spending on additional deliveries to fix it.
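The aspect-ratio mismatch described above is the kind of thing an automated QC step can flag before a title ever reaches customers. Below is a minimal, hypothetical sketch of such a check; the tolerance and the example dimensions are illustrative only and are not Netflix's actual rules.

# Toy QC check: flag a delivery whose aspect ratio does not match the order.
# Tolerance and example values are illustrative, not a real specification.
def aspect_ratio(width, height):
    return width / height

def matches_order(delivered, ordered, tolerance=0.02):
    return abs(delivered - ordered) <= tolerance

ordered = 2.20                       # original theatrical aspect ratio
delivered = aspect_ratio(960, 720)   # a 4:3 broadcast master -> 1.33

if not matches_order(delivered, ordered):
    print(f"Reject delivery: got {delivered:.2f}:1, expected {ordered:.2f}:1")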

Our recent trip to Europe revealed that the common theme of those studios that struggled with delivery was versionitis. They were not sure which cut of video to deliver or if those cuts of video were aligned with language subtitle files for the content. The studios that performed the best have a well established digital archive that avoids versionitis. They know the lineage of all of their video sources and those video files’ alignment with their correlated subtitle files.

There is a link between content owner revenue and content owner delivery skill. Frequently Netflix finds itself looking for opportunities to grow its streaming catalogs quickly with budget dollars that have not yet been allocated. Increasingly the Netflix deal teams are considering the effectiveness of a content owner’s delivery abilities when making those spending decisions. Simply put, content owners who can deliver quickly and without error are getting more licensing revenue from Netflix than those content owners suffering from versionitis and the resulting delivery problems.

B2B Order and Catalog Management
Today Netflix has a set of tools for managing content orders and curating our content catalogs. These tools are internal to our business and we currently engage the industry for delivery tracking through phone calls and emails containing spreadsheets of content data.

We can do a lot better than engaging the industry with spreadsheets attached to email. We will rectify this in the first half of 2013 with the release of the initial versions of our Content Partner Portal. The universal reaction to reviewing our Nordic launch with content owners was that we were showing them great data (timeliness, error rates, etc.) about their deliveries, but that they need to see such data much more frequently. The Content Partner Portal will allow all of these metrics to be shared in real time with content owner operations teams while the deliveries are happening. We also foresee that the Content Partner Portal will be used by the Netflix deal team to objectively assess the delivery performance of content owners when planning additional spending.

We also see a role for shared industry standards to help with delivery tracking and catalog curation. The EIDR initiative, for identifying content and versions of content, offers the potential for alignment across companies in the Digital Supply Chain. We are building the ability to label titles with EIDR into our new Content Partner Portal.

Final Thoughts
Today’s supply chain is messy and not well suited to help companies in our industry to fully embrace the rapidly growing channel of internet streaming. We are a long way from the Starship Enterprise equivalent of the Digital Supply Chain but the growing global consumer demand for internet streaming clearly provides the incentive to invest together in modernizing the supply chain.

Netflix has many initiatives underway to innovate in developing the supply chain in 2013, some of which were discussed in this post, and we look forward to continuing to collaborate with our content-owning partners on their supply chain innovation efforts.



By Kevin McEntee, VP Digital Supply Chain, Netflix

Live Streaming of Video and Subtitles with MPEG-DASH

This presentation was made at the MPEG meeting in Shanghai, China, in October 2012, related to the input contribution M26906. It gives the details about the demonstration made during the meeting.

This demonstration showed the use of the Google Chrome browser to display synchronized video and subtitles, using the Media Source Extensions draft specification and the WebVTT subtitle format. The video and DASH content was prepared using the GPAC MP4Box tool.


MXF Archiving & Preservation – AS-07

At the request of the Federal Agencies/Library of Congress, the AMWA is launching a new application specification, AS-07: MXF Archiving & Preservation. This new AS is a vendor neutral sub-set of MXF for long-term archiving and preservation of moving image essence and associated materials including audio, still images, captions and metadata.

Netflix Using the Cloud to Do the Heavy Lifting for Video Transcoding

For Netflix vice-president of digital supply chain Kevin McEntee, the US-based video streaming company's shift to using the cloud for transcoding its massive content library comes down to a modern take on the fable of the tortoise and the hare. Or, as he told the audience at the AWS re:Invent conference in Las Vegas last week, it's like a choice between moving a room full of people to another city by using expensive high-performance Ferraris or a fleet of somewhat more humble Toyota Priuses.

When it began its shift away from DVD rentals to Internet-based video streaming, Netflix initially employed a 'Ferrari' approach to dealing with the computationally intensive task of encoding movies and TV shows in a format that could be streamed to client devices.

In 2006/2007 when Netflix began the move to streaming, it found that the video processing technology typically employed in Hollywood centred on ensuring minimal latency: It was optimised for scenarios such as a single video editor mastering a Blu-ray image of a movie: "[It was] optimised for the expensive time of that one operator; essentially the artist sitting there doing that mastering."

"Back in 2006/2007 we hired people out of this industry and we ended up building out a data centre that [was] very Ferrari-like," McEntee said, with custom, GPU-based encoding hardware; "boxes that had custom GPUs that were built specifically to dump video very fast." It was expensive and constrained by the fixed footprint of Netflix's data centre.

However, the limitations of this approach became evident in late 2008, when Netflix set out to launch new video players for PCs and Macs, and jumped onto TVs by launching a player on the Xbox in November of that year.

"There was such an amazing lack of standardisation around video streaming those days, and there still is today, that we had to create new formats for those players," McEntee said.

"And at the same time [as] we were innovating in the player space and therefore causing the need for new formats, our content team in LA was licensing more and more content, so our content library during the course of that project also doubled in size. And so we set out re-encode using the hardware farm that we had built."

Unfortunately for Netflix, the hardware didn't deal well with the load, and the company encountered frequent hardware failures. Fans on the custom GPUs being used were too small and "boxes were melting", McEntee said.

"It was really a very frustrating experience and in fact that catalogue re-encode was late and we failed. Basically we launched these players and the catalogue was not complete."

It was reflecting on this experience that caused Netflix to make the move to the Amazon Web Services cloud for transcoding. "If you jump forward a year, we had made the jump to move our transcoding farm into AWS. And we had seen the opportunity in fall 2009 for launching a video player on the [Sony PlayStation 3], so this was our first 100 per cent AWS transcode."

"The player developers again realised they had to rely on a new format; they had to transcode the entire library," McEntee said. The new format was not finalised until three or four weeks before the launch of the new player, but Netflix was "able to spin up enough instances in EC2 to transcode the entire library in about three weeks" and managed to meet the deadline.

This is where the Ferrari versus Prius metaphor comes in. In McEntee's (somewhat elaborate) analogy, individual Ferraris offer great performance, but are expensive to buy, expensive to repair and available in limited numbers; whereas the hypothetical Prius fleet won't be quite so swift, but can be rented for a lot less than it would cost to buy Ferraris, repairs are someone else's problem and they're available in large numbers.

As McEntee explained, "By moving to the cloud while ... one encode was slower the overall throughput of the whole system was much, much faster." It's a question of "thinking horizontal, not vertical," he said, with an architecture that isn't optimised for latency but for overall throughput.
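To see why throughput wins, a back-of-envelope comparison helps. The numbers below are entirely hypothetical (neither Netflix's nor AWS's real figures); they only illustrate that a large fleet of slower encoders finishes a whole catalogue sooner than a few fast boxes, even though each individual encode takes longer.

# Hypothetical numbers illustrating latency vs. overall throughput.
titles = 10_000                 # titles to (re)encode

# "Ferrari": a few fast, specialised boxes
ferrari_boxes = 20
ferrari_hours_per_title = 1.0   # fast per-title encode

# "Prius": many ordinary cloud instances
prius_instances = 4_000
prius_hours_per_title = 4.0     # each individual encode is slower

ferrari_total = titles * ferrari_hours_per_title / ferrari_boxes   # 500 hours
prius_total = titles * prius_hours_per_title / prius_instances     # 10 hours

print(f"Ferrari farm: {ferrari_total:.0f} hours, Prius fleet: {prius_total:.0f} hours")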

Netflix "haven't really missed deadlines" since the shift away from relying on in-house hardware for transcoding, he said. But, "even more than not missing deadlines, this change has actually created opportunities for the business."

His favourite example is from February 2010, when Apple approached Netflix about the impending iPad launch. Cupertino told Netflix it wanted the company to be part of the launch - which meant yet another video format had to be supported. Using its cloud-based approach to transcoding meant that Netflix was able to have its entire content library available for the April iPad launch.

"This is an opportunity we didn't anticipate when we set out to do the AWS project, but what we found is that having this ability to scale the whole system quickly without doing any purchasing or building out a data centre ourselves really just made the business very nimble and you really can't put a price on nimble, especially in a business that's moving as fast as Netflix," McEntee said.

Netflix's expansion into non-US territories - Canada in 2010, Latin American countries in 2011, and a number of European countries earlier this year - involved building up new content catalogues specific to each licensing territory, meaning a lot more transcoding using the cloud in order to meet fixed launch deadlines.

Netflix currently uses a media processing pipeline dubbed Matrix. Content partners such as movie studios deliver content to Netflix, with the video streaming company employing Aspera's "Direct-to-S3" service to house it in Amazon's Simple Storage Service (S3).

Netflix then uses technology from start-up eyeIO and Amazon's EC2 service to transcode the source material received from the studios into multiple formats that can be streamed to the range of devices supported by the company. The results are stored in S3, before being sent to Netflix's CDN for streaming; Netflix creates multiple versions of each movie or TV show episode to stream to devices ranging from TVs to tablets to gaming consoles. The transcoding farm uses 6000-6500 EC2 instances.

The company is currently working on a successor for Matrix, dubbed Maple. Instead of using Matrix's approach of processing an entire piece of content at once, Maple will break videos up into five-minute chunks, each of which will be processed by a separate EC2 instance. McEntee said that the advantages include being more fault-tolerant - currently a job may fail mid-way through transcoding and have to be restarted from scratch - and the ability to deliver content faster in those cases where Netflix has an agreement to begin screening content the day after it first aired.
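The article does not describe Maple's internals, so the following is only a schematic sketch, under my own assumptions, of the chunked approach it outlines: split a title into five-minute chunks, fan the chunks out to parallel workers (stand-ins for separate EC2 instances), and retry a failed chunk rather than restarting the whole title. The transcode_chunk function is a placeholder for the real encode job.

# Schematic sketch of chunked, parallel transcoding with per-chunk retries.
from concurrent.futures import ThreadPoolExecutor

CHUNK_SECONDS = 5 * 60  # five-minute chunks, as described above

def split_into_chunks(duration_s):
    """Return (start, end) offsets covering the whole title."""
    chunks, start = [], 0.0
    while start < duration_s:
        end = min(start + CHUNK_SECONDS, duration_s)
        chunks.append((start, end))
        start = end
    return chunks

def transcode_chunk(source, start, end):
    """Placeholder for the real per-chunk encode job; returns an output path."""
    return f"{source}.{int(start)}-{int(end)}.out"

def transcode_title(source, duration_s, workers=8, max_retries=2):
    outputs = {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        pending = {pool.submit(transcode_chunk, source, s, e): (s, e, 0)
                   for s, e in split_into_chunks(duration_s)}
        while pending:
            retried = {}
            for future, (s, e, tries) in pending.items():
                try:
                    outputs[(s, e)] = future.result()
                except Exception:
                    if tries >= max_retries:
                        raise  # give up on this chunk only, after retries
                    retried[pool.submit(transcode_chunk, source, s, e)] = (s, e, tries + 1)
            pending = retried
    # Chunk outputs are returned in timeline order; stitching is omitted here.
    return [outputs[c] for c in sorted(outputs)]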

Netflix is also working on a 'digital vault' that can house video masters and secondary assets, such as audio in different languages, that could be delivered to both its systems and those of its competitors in the video streaming space.


Visual Effects Society Releases Cinematic Color White Paper

The Visual Effects Society Technology Committee announced the release of its white paper, “Cinematic Color: From Your Monitor to the Big Screen.” The white paper, intended for computer graphic artists and software developers interested in color management, introduces techniques currently in use at major production facilities.

The document draws attention to challenges that are not covered in traditional color-management textbooks or online resources, and often passed along only by word of mouth, user forums or scripts copied between facilities.

The 54-page white paper contains text, diagrams, tables and images that address:

  • Technical issues that can arise in handling texture painting, lighting, rendering, compositing and image display in the theater.

  • Color science, color encoding and scene-referred and display-referred colorimetry; extending these concepts to their use in modern motion picture color management.

  • Recent efforts on digital color standardization in the motion picture industry (ACES and CDL), and how to experiment with these concepts for free using open-source software (OpenColorIO).

The white paper was authored by Jeremy Selan and reviewed by members of the VES Technology Committee, including Rob Bredow, Dan Candela, Nick Cannon, Ray Feeney, Andy Hendrickson, Gautham Krishnamurti, Sam Richards, Jordan Soles and Sebastian Sylwan.

Please visit Cinematic Color for additional resources related to motion picture color management.

Canon Cinema EOS Lens & Camera Charts

Canon launched its Cinema EOS product line last year with the C300; since then, it has expanded to include the EOS-1DC, C100, and C500. Each camera fills a different need and production environment, from B-cameras to documentaries to feature films.

To help you get a better idea of each camera’s features and see how they compare to each other, we’ve put together this Cinema EOS Camera Lineup chart. We’ve included information on sensor size, internal codecs, recording capabilities and more. You can click on the image below to see a larger version, or you can download a pdf version.


Canon’s EF lenses have had a large following in the HDSLR world, where they’re known for producing pleasing skin tones and a nice gradation to the blacks. In more recent years, their popularity has extended into the cinema world, with many filmmakers seeking out special adapters and rigs to use the photo lenses in a production environment.

Now, Canon has created a line of Cinema EOS lenses that feature the trademark “Canon look,” but are designed specially for professional production work. So, to complement the camera chart above, we’ve also put together the Cinema EOS Lens Lineup chart. The chart includes details on mounts, minimum object distance, image circle coverage and more – click on the chart to see a larger version or download the pdf.

By Claire Orpeza, AbelCine

SDI Over IP

As bitrates increase and equipment prices drop, IP-based communication technologies – both fixed and wireless alike – are pushing more and more dedicated communication systems into retirement. The sheer number of connectors currently found on professional video cameras makes obvious some of the advantages that an IP/Ethernet-based solution brings.

Also, the number of HD-SDI signals that you can squeeze into a 10GigE or even 100GigE line is a convincing argument for medium-term migration to IP. With SMPTE 2022-6, the wrapping of all SDI formats within IP will be defined, but previous wrappings were never used to actually do anything in the IP layer – they were just used as a transparent channel.
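To put numbers on that argument (my arithmetic, not the article's): nominal HD-SDI runs at 1.485 Gb/s, so even before any compression a 10GigE link can carry a handful of uncompressed HD-SDI signals and a 100GigE link several dozen.

# Back-of-envelope: how many uncompressed SDI signals fit in an Ethernet pipe.
# Ignores IP/RTP encapsulation overhead, which reduces the real figure slightly.
HD_SDI_GBPS = 1.485        # SMPTE 292M nominal bit rate
THREE_G_SDI_GBPS = 2.970   # 3G-SDI (1080p50/60)

for link_gbps in (10, 100):
    print(f"{link_gbps} GigE: "
          f"{int(link_gbps // HD_SDI_GBPS)} x HD-SDI, "
          f"{int(link_gbps // THREE_G_SDI_GBPS)} x 3G-SDI")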

This article outlines the steps and required mechanisms to go “all-IP” and leverage the possibilities that come with it. The first step to take is to achieve seamless switching between signals in the IP layer, which was implemented at the IRT as a software-based Proof of Concept, followed by a novel approach regarding multicast signal distribution within a network. The applications made possible by these features are only limited by your imagination.

YouTube Space

YouTube Space LA is a brand new, state-of-the-art production facility in Los Angeles, CA, designed specifically for YouTube creators to produce original digital video content, from an idea through editing and uploading to YouTube.

The YouTube Space LA is a creative production facility for both established and emerging YouTube content creators who are part of the YouTube Partner Program. At the YouTube Space LA, YouTube creators can learn from industry experts, collaborate with other creators and have access to the latest production and post-production digital video equipment.



There is another YouTube Space in London.

H.264 Video in Firefox for Android

Firefox for Android has expanded its HTML5 video capabilities to include H.264 video playback. Web developers have been using Adobe Flash to play H.264 video on Firefox for Android, but Adobe no longer supports Flash for Android. Mozilla needed a new solution, so Firefox now uses Android’s “Stagefright” library to access hardware video decoders.

Supported Devices
Firefox currently supports H.264 playback on any device running Android 4.1 (Jelly Bean) and on any Samsung device running Android 4.0 (Ice Cream Sandwich). We have temporarily blocked non-Samsung devices running Ice Cream Sandwich until we can fix or work around some bugs. Support for Gingerbread and Honeycomb devices is planned for a later release.

To test whether Firefox supports H.264 on your device, try playing this Big Buck Bunny video.

Testing H.264
If your device is not supported yet, you can manually enable H.264 for testing. Enter about:config in Firefox for Android’s address bar, then search for “stagefright”. Toggle the “stagefright.force-enabled” preference to true. H.264 should work on most Ice Cream Sandwich devices, but Gingerbread and Honeycomb devices will probably crash.




If Firefox does not recognize your hardware decoder, it will use a safer (but slower) software decoder. Daring users can manually enable hardware decoding. Enter about:config as described above and search for “stagefright”. To force hardware video decoding, change the “media.stagefright.omxcodec.flags” preference to 16. The default value is 0, which will try the hardware decoder and fall back to the software decoder if there are problems. The most likely problems you will encounter are videos with green lines (see below) or crashes.





Giving Feedback/Reporting Bugs
If you find any video bugs, please file a bug report here so we can fix it! Please include your device model, Android OS version, the URL of the video, and any about:config preferences you have changed. Log files collected from aLogcat or adb logcat are also very helpful.

By Chris Peterson, Mozilla Hacks

The Automated File-Based QC System at NRK

Between 2007 and 2009, NRK carried out a project called the “Programme Bank”, the main goal being to transform its TV production infrastructure to a fully file-based platform with an incorporated MAM system.

Today, the Norwegian broadcaster is running an Interra Systems Baton Enterprise Edition (Windows) system with 28 core licences for its automated file-based QC system.

This article describes the background to the project, the technical details of the Baton system that has been installed, along with NRK’s experiences with the system during the setup and initial phases.

BitTorrent to Start Testing Live P2P

BitTorrent has just posted a call for broadcast engineers to help with the building of BitTorrent Live. BitTorrent Live has been in testing for some time, and the company is now calling for broadcasters to join those tests.

BitTorrent Live is a new peer-to-peer live streaming protocol. It allows content creators to scale their reach to audiences of millions with near-zero latencies and minimal infrastructure investment.

“Built with users, from scratch, it’s designed to take the principles of the BitTorrent protocol, and apply them to streaming,” according to BitTorrent’s call to broadcasters, “That means: no barriers to broadcast. That means: the more people who tune in, the more resilient your stream. That means: you can share video with a massive audience, in realtime – without bandwidth costs or infrastructure requirements.

“We’ve been conducting regular tests with users (props to our intrepid volunteers), and have achieved results at swarm sizes of a few thousand. Now, we’re inviting qualified broadcasters like you to help us build something amazing.”

Leveraging the lessons of the original BitTorrent protocol, BitTorrent Live has been designed from scratch as the perfect means of sharing events to the masses in real-time, but without the astronomical bandwidth requirements that traditionally constrain content creators.

Every viewer that joins a swarm extends its reach by sharing pieces of the video to other viewers. Media is delivered with stunningly low delay by utilizing a UDP Screamer protocol. Video and audio are transmitted using the industry standard H.264 and AAC codecs, providing the highest quality.

On the company’s website it says: “BitTorrent Live is still under heavy development, and as such, is available as a technology demonstration only. The ability to run a video source is not currently available to end-users. Once the protocol has been finalised we expect to allow user generation of content. The experience may not be perfect yet, but we strive daily to improve it, and welcome your input and experiences.”

By Robert Briel, Broadband TV News

EBU Crashes Heads Together Over HD

It seems that the EBU (European Broadcasting Union) has decided it is time to take the bull by the horns over ultra HD.

After all, it is now six years since the first production and transmission of ultra HD was demonstrated by Japanese broadcaster NHK in Tokyo, and still we seem no closer to consensus over standardization of the critical parameters such as image format, frame rate and codec type. There are many options on the table, and convergence has been hampered by confusion over what specifications are required or desirable to deliver the ultimate quality of experience for varying screen sizes.

There are also the constraints of cost and available bandwidth, with future evolution of HD dependent not just on increased network capacity, but also improved compression ratios, which is why the emerging HEVC (High Efficiency Video Coding) is important.

Pressure on bandwidth will come not just over distribution, but also contribution, given that pictures captured for ultra HD at 3840 x 2160, at 300 fps as has been proposed as a unifying figure, would generate streams at around 52 Gb/s. This will certainly give cause to think again about sending uncompressed raw video as some broadcasters have been doing, and there will be renewed demand for improvements in compression at the contribution stage.
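That bit rate is easy to sanity-check. Assuming 10-bit 4:2:2 sampling and counting only the active picture (my assumptions, not stated in the article), 3840 x 2160 at 300 fps works out to roughly 50 Gb/s, in the same ballpark as the 52 Gb/s quoted; the exact figure depends on bit depth, chroma subsampling and blanking overhead.

# Rough bandwidth of uncompressed UHD-1 at 300 fps, assuming 10-bit 4:2:2
# (20 bits per pixel on average) and counting only the active picture.
width, height, fps = 3840, 2160, 300
bits_per_pixel = 20  # 10-bit luma plus 10 bits of shared chroma per pixel (4:2:2)

gbps = width * height * fps * bits_per_pixel / 1e9
print(f"{gbps:.1f} Gb/s")  # ~49.8 Gb/s, close to the ~52 Gb/s figure above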

The EBU has attempted to bring order to the mounting chaos of future HD standards, clouded further by the 3-D issue, by setting up its Beyond HD group. But, realizing that it was not much use just debating the future of HD behind closed doors among its European members, the EBU has reached out to manufacturers including Sony and Panasonic, as well as non-members, notably NHK itself.

It met with these three in Geneva recently, along with BSkyB, to discuss issues of harmonization, which has been made more urgent by a clear move among manufacturers to push ahead and market new TVs next year. They are all desperate for products that will raise margins after the relative failure of 3-D so far, while there is a limit to the premium they can charge for smart TVs now that Internet connectivity is almost taken as a given and is not that much of a selling point.

The EBU did well by doing its homework first and thrashing out key issues while identifying what sort of roadmap made sense for HD given the display technologies, compression algorithms and bandwidth that were likely to become available over the next decade. The first task was to define what “Beyond HD” was. For some, it begins with 1080p, since current HD services are normally either 720p or 1080i, which both represent compromises. 720p with progressive scan is optimal for sport and fast-moving action, but sacrifices resolution, which can result in sub-optimal quality for content with a lot of detail but not necessarily fast action, such as art documentaries and some nature programs.

1080i can look juddery for fast action because, with interlacing, every alternate line only changes with every second frame, but it gives higher picture resolution than 720p. So 1080p, with progressive scan, combines the best of both and for most people would be regarded as the pinnacle of quality at present, but it is only starting to be deployed.

However, the EBU decided that the industry was already on the way to 1080p, and so defined "beyond HD" as the future beyond that, likely to emerge over a four- to 10-year time span, starting with some variant of ultra HD at 3840 x 2160 resolution. That is double the 1920 x 1080 of 1080p in each direction, or four times the screen area in pixels.

But, this then raised the next big question, which was how much resolution is desirable, under what circumstances, and how much is affordable in terms of bandwidth or investment.

The first point to note, as the EBU did, is that frame rate has to increase almost in proportion with the resolution, so the toll on bandwidth is even greater than some broadcasters will have originally anticipated.

Frame rate has to increase because, as the resolution gets higher, the jump of picture elements between successive frames becomes more perceptible. Yet, proposed deployments of 1080p are set actually to reduce the frame rate in order to avoid increasing bandwidth too much, which would make the whole exercise pointless. In fact, 1080p needs a frame rate of at least 50 fps, and ultra HD, or 4K as it is often called, will require 100 fps. Some trials have been looking at a higher frame rate of 120 fps for 4K, which immediately introduces a conversion problem if content shot at one rate then has to be displayed on TVs that support another. For this reason, there are proposals to capture video at the high rate of 300 fps, partly because this can be cut readily down to both 120 fps, by dividing by five and then multiplying by two, and also to 100 fps, by dividing by three. It would surely make more sense to standardize on 100 fps, but that remains to be seen.

The next question is over resolution itself, with the starting point being a law called Rayleigh’s criterion, which defines the smallest distance that can be resolved by an imaging system, determined by the wavelength of the light being received and the diameter of the object lens — in this case, the pupil of the human eye. While this varies slightly according to the color content of the image and the individual concerned, it is around 1/60th of a degree. Under normal vision, the viewing angle is within 30° horizontally and rather less vertically, so, doing the math, that comes to a maximum of 1800 picture elements across the width of the screen. That is just covered by 1080p with its 1920 pixels across the horizontal.
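The 1800-pixel figure is simply the horizontal viewing angle divided by the smallest resolvable angle, using the article's own numbers:

# Picture elements resolvable across a 30-degree horizontal viewing angle,
# given an eye that resolves about 1/60 of a degree (the article's figures).
viewing_angle_deg = 30
resolvable_deg = 1 / 60

print(viewing_angle_deg / resolvable_deg)  # 1800.0 -> just covered by 1080p's 1920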

At first sight, then, there seems to be no call for anything beyond 1080p at all. But this reckons without the impending revolution in display types, with huge wall-size screens now coming over the horizon. It is true that the smallest angle that can be resolved depends not on the screen size but on the angle of viewing. On that basis, a giant screen viewed from 100ft does not need any more picture elements than, say, a tablet close up, although each pixel would have to be proportionately larger.

In practice, though, large screens do require more picture elements because there are situations where they may be viewed from closer up than the normal optimum distance. For example, wall-sized displays will comprise multiple smaller panels, each of which can function as an independent TV, in which case they will sometimes be viewed from closer range and require smaller picture elements than they otherwise would. Further to that, these large screens will enable immersive viewing where the horizontal viewing angle will be much greater than the current 30° limit. That is why we will need ultra HD.

It may well be, though, that we will not need to go much further, and there may never be a call for the next level up, which is 8K at 7680 x 4320. Or, certainly never beyond that for viewing on two-dimensional screens. But, even then, the bandwidth implications are considerable, and, as the EBU has pointed out, often misunderstood. Even without upping the frame rate, ultra HD at 4K generates 4X as much data as 1080p, which, in turn, is double 720p or 1080i. 8K brings another fourfold increase again, and, if combined with a frame rate of 300 fps as may come to pass in a decade or more, the bandwidth consumed would be 192X greater than current HD services. That is why the EBU talks of the dramatic financial impact of "Beyond HD," which, therefore, will have to be plotted carefully. It is true that for distribution, we have the emerging HEVC, but that will only bring an immediate 50 percent or so improvement in encoding efficiency over H.264. So, while welcome, this will merely provide mild pain relief for congested networks.
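Those multipliers compound exactly as the text describes; a quick check of the 192X figure, using the ratios given above:

# How 8K at 300 fps ends up ~192x today's HD services, per the ratios above.
pixels_1080p_vs_hd = 2     # 1080p carries ~2x the data of 720p/1080i
pixels_4k_vs_1080p = 4     # ultra HD "4K" is 4x 1080p
pixels_8k_vs_4k = 4        # 8K is another 4x
fps_ratio = 300 / 50       # 300 fps vs. today's 50 fps services

print(pixels_1080p_vs_hd * pixels_4k_vs_1080p * pixels_8k_vs_4k * fps_ratio)  # 192.0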

On top of that, there is scope for increasing the range of colors in line with the higher resolution, and generally the bit depth for each pixel, which would rack up bandwidth further. There is also 3-D (another subject altogether for a further blog perhaps), and then, finally, the EBU talks about “beyond stereo."

There the mind really boggles.

By Philip Hunter, Broadcast Engineering

Sensor Size Comparison Chart



By Jon Fry, Creative Video

MXF and AAF

Many broadcasters know about MXF, and they have heard of things such as MXF for Finished Programs (AS-03) and MXF for Commercials (AS-12). But, this month, I want to focus on MXF’s bigger brother, AAF.

In the mid-1990s, a joint task force was created by the SMPTE and EBU. The purpose was to address the impending flood of digital video proposals. There were a number of different, competing proposals on the table regarding compression, handling of metadata and the exchange of program material as files. Many in the industry were concerned that, without a concerted effort, the market would fracture, leaving end users to sort it out. Fortunately, the task force produced a number of recommendations that later led to standards that have helped drive industry consensus about what constitutes interoperable digital video.

The remit of the task force went beyond coming up with recommendations for interoperable digital video formats, however. The final report included in its name “Harmonized Standards for the Exchange of Program Material as Bitstreams.” The group spent a significant amount of time working on something called wrappers. After spirited debate, it was decided that two classes of wrappers should be developed — one for broadcast and one for editing. The group felt that a single wrapper could not accommodate the differing needs of these two application areas. Ultimately, the wrapper for broadcast became MXF, and the wrapper for editing became AAF.

Before we delve fully into AAF, let’s talk about wrappers, what they do and why they are important.

Inside the Wrapper
When using professional digital video, specifically SDI video, the relationship between video and audio is set in the standard. SMPTE 259M and, later, SMPTE 292M for HD not only specified how video and audio should be streamed, but were also specific about where additional information such as subtitles and timecode should be carried. Other standards specify exactly how this additional data should be formatted. For manufacturers and users, the world was relatively simple; a stream arrived, comprising interleaved pieces of video and audio, along with some “essence data” such as subtitles.

But, in a file-based world, there were many possible ways to exchange the same program material contained in the SDI stream. Should you send a video file, followed by a separate audio file, followed by a data file that told how to play back the two files in sync? Should you send a single file with everything in the same file? Should the video be kept separate from the audio, or should it be mixed together, as in an SDI stream? Where do you put the all-important timecode? And, how do you relate the timecode to the video and audio to which it refers?

Those were just some of the questions that surfaced when we looked at transporting programs as bit streams. The wrapper gave us a way to describe how the video, audio, subtitles, timecode and other “essence” should be packaged together in order to be sent from one place to another. A wrapper can contain video, audio and data essence (subtitles). The concept of a wrapper as a way to organize essence is common in both MXF and AAF. This is a simplistic drawing, but it gives you the idea that the wrapper contains video, audio and data, along with identifiers that are used to keep track of each essence component.


A wrapper can contain video, audio and data essence such as subtitles


The reason I say that this drawing is simplistic is that there are further definitions within MXF itself that constrain the possible arrangements of essence within the file. For example, MXF OP-Atom requires that only a single essence component be included in a file. In other words, an MXF OP-Atom file contains only a single video element or a single audio element. MXF OP-1A allows the combining of video and audio in a single wrapper. But, again, how are the essence types laid out in the file?

Some MXF files contain video as a separate entity from audio. Others contain interleaved video and audio, meaning there is a single essence file that contains a bit of video, followed by a bit of audio channel 1, followed by a bit of audio channel 2, and then back to video again. There is also room in the interleaved file for subtitles.

One last important point: MXF and AAF specifically allow someone who receives the file to understand how video, audio and essence data all relate on a timeline. Of course, this is critical to working with professional video.
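
To make the wrapper idea concrete, here is a minimal sketch in TypeScript. It is only an illustration of the concept described above: the type names, the operationalPattern field and the interleaved flag are inventions for this sketch, and the real MXF and AAF object models defined in the SMPTE documents are far richer.

```typescript
// Illustrative only: a toy model of a wrapper that groups essence components
// and keeps track of them by identifier. Not the real MXF/AAF data model.

type EssenceKind = "video" | "audio" | "data"; // "data" covers subtitles, etc.

interface EssenceComponent {
  id: string; // identifier used to keep track of this component
  kind: EssenceKind;
}

interface Wrapper {
  operationalPattern: "OP-Atom" | "OP-1A"; // simplified stand-in for the MXF operational pattern
  interleaved: boolean;                    // OP-1A files often interleave video and audio
  components: EssenceComponent[];
}

// OP-Atom: only a single essence component per file.
const opAtomVideo: Wrapper = {
  operationalPattern: "OP-Atom",
  interleaved: false,
  components: [{ id: "urn:uuid:video-1", kind: "video" }],
};

// OP-1A: video, audio and data essence combined in a single wrapper.
const op1a: Wrapper = {
  operationalPattern: "OP-1A",
  interleaved: true,
  components: [
    { id: "urn:uuid:video-1", kind: "video" },
    { id: "urn:uuid:audio-1", kind: "audio" },
    { id: "urn:uuid:audio-2", kind: "audio" },
    { id: "urn:uuid:subs-1", kind: "data" }, // subtitles
  ],
};
```

In this sketch, the OP-Atom example mirrors the single-essence-per-file constraint mentioned above, while the OP-1A example mirrors the combined, interleaved layout.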


The Need for AAF
If MXF contains many different possible layouts, you may be wondering why there is a need for AAF at all. The reason is fairly simple. AAF allows you to wrap up many different tracks of video, audio and data essence, and describe how these different tracks relate to each other. Think about layering or compositing in an edit application. For those not familiar with this idea, layering allows an editor to superimpose one layer of video, say an animated graphic or a window of video, on top of another base layer. Compositing is a common application in editing, but it is not common in the on-air environment, where users generally are playing back finished program material.

An AAF file could contain a base layer consisting of a video and two audio tracks, a compositing layer consisting of another video and two audio tracks, plus some editing metadata that instructs how to manipulate these two pieces of video to produce the finished result.



 
An AAF file could contain a base layer, a compositing layer and some editing metadata


Another difference between AAF and MXF is a result of what I stated above — that MXF is primarily intended to be used in an on-air environment, while AAF is generally intended to be used for editing. It is therefore a common user requirement that MXF files be complete and ready to play at any time, which is why many (but not all) MXF files contain the video and audio inside the file. By contrast, it is not uncommon to find an AAF file that contains only references to external essence. In other words, the AAF file is small and contains only metadata, including pointers to the actual essence files and other metadata with instructions regarding what to do with them.

So, why is there this fundamental difference between AAF files and MXF files? Well, imagine you are in an on-air playout situation. The last thing you want to find out just as you are going to air (or after you have pressed the “play” button) is that an audio or video track cannot be located on a remote storage device. Since MXF files need to play when the “play” button is pressed, it makes a lot of sense to package the video and audio inside the file.

By contrast, think about a typical broadcast promo. If your station produces local promos, you might be surprised to find out how many separate video, audio, graphics, subtitle and descriptive video elements go into a simple 15-second promo. Now, imagine a half-hour, multi-camera, pre-produced news program. This edit project could have more than 1000 (several thousand, actually) individual elements associated with it. It is likely to be more efficient to organize the different elements of this program separately on disk, perhaps using shared storage. An AAF file could then be a lightweight, metadata-only file, with pointers to the actual content stored on shared storage. This is a common way to build a professional video editor.
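
As a rough illustration of that lightweight, metadata-only idea, the sketch below models a composition whose tracks merely point at essence held on shared storage. The field names and storage locations are hypothetical; a real AAF file uses a much more detailed object model.

```typescript
// Illustrative only: a toy "composition" in the spirit of AAF. It holds no media,
// just metadata and pointers to essence files stored elsewhere.

interface ExternalEssenceRef {
  essenceId: string; // identifier of an essence file on shared storage
  location: string;  // where the media actually lives (hypothetical path)
}

interface Track {
  kind: "video" | "audio";
  source: ExternalEssenceRef;
}

interface Layer {
  tracks: Track[];
  opacity?: number; // editing metadata describing how this layer is composited
}

const promoComposition: { layers: Layer[] } = {
  layers: [
    {
      // Base layer: one video track and two audio tracks.
      tracks: [
        { kind: "video", source: { essenceId: "v-base", location: "storage://promos/base.mxf" } },
        { kind: "audio", source: { essenceId: "a-1", location: "storage://promos/a1.mxf" } },
        { kind: "audio", source: { essenceId: "a-2", location: "storage://promos/a2.mxf" } },
      ],
    },
    {
      // Compositing layer: an animated graphic superimposed on the base layer.
      tracks: [
        { kind: "video", source: { essenceId: "v-gfx", location: "storage://gfx/lower-third.mxf" } },
      ],
      opacity: 0.9,
    },
  ],
};
```

The composition itself stays tiny, which is exactly why the metadata-only approach suits a shared-storage edit environment.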

So, AAF and MXF may be used differently, depending on the application. For finished programming, use MXF. For an edit environment, or an environment where you need to describe the relationship between a number of separate elements, use AAF.

Lastly, AAF and MXF are interchange formats, meant to be the lowest common denominator for getting content from one system to another. Inside a system, it is common not to find AAF or MXF at all (with some exceptions). Also, since these are baseline formats, it is common to find capability-adding extensions, but these extensions can hamper interoperability.

By Brad Gilmer, Broadcast Engineering

Content Preparation for Adaptive-Bit-Rate Video

Today’s media landscape is radically more diverse than just a few years ago. The delivery of consistently acceptable image and sound quality is taken for granted by viewers, despite uncertain or fluctuating bandwidth. Adaptive-Bit-Rate (ABR) streaming technology makes this possible.

What is ABR Streaming?
ABR streaming is a delivery technology designed to provide consistent, high-quality viewing in situations where bandwidth may fluctuate, and where viewers may be on a wide range of devices.

Prior to ABR streaming, Web or mobile video delivery was typically done by encoding a single downloadable file or stream at a fixed bit rate and frame size. Viewers could buffer some of the video, and then simultaneously download and play it back. This delivery model was similar to cable transmission, where a single bit rate is transmitted over a reliable medium.

Unfortunately, transmission media for Web and mobile devices are unreliable, and bandwidths vary. During fixed-rate video playback, viewers with low bandwidth suffer from excessive buffering (delaying playback). To compensate, providers have tended to encode at lower bit rates, punishing viewers with high bandwidth. Even then, any fluctuation in bandwidth can cause buffering delays.

To solve this problem, ABR streaming content is encoded into multiple layers, each potentially a different bit rate, frame size and/or frame rate. These layers are combined into a single package that represents the original content. ABR players switch between layers depending upon the device and available bandwidth, to ensure consistent high-quality playback.

For example, a single ABR package might include six layers, each encoded at progressively higher bit rates. As a viewer watches content on his/her mobile phone during a train ride, the player will adaptively switch between low bit rates and high bit rates, depending upon the connectivity of the device.

How Does it Work?
Most ABR streaming technologies use standard Web protocol (HTTP delivery) to send video. This offers advantages over specialized streaming protocols such as RTSP or RTP, as HTTP-based delivery works immediately on Internet networks and can take advantage of edge technologies designed to cache HTTP requests.

During playback, video and audio are delivered via HTTP in small fragments, each representing some small amount of video, typically between 2 and 10 seconds in length. Each content package includes multiple layers, and each layer may include many fragments. For example, an hour-long movie may have 12 layers, each with a thousand fragments. The player is provided with a package manifest file outlining which layers are available and the location of the fragments for each layer.

During playback, the player requests and downloads a fragment from a layer. While the fragment is played, the connection speed is monitored, and the player may opt to switch layers, either increasing or decreasing the video bit rate based upon the connection speed. Players may also choose layers with different frame sizes or frame rates to optimize the visual experience for the device. This adaptive behavior is what ensures consistent playback regardless of connection speed or device.
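
The following sketch shows that loop in TypeScript. It is not any particular player implementation: the manifest shape, the headroom factor and the switching rule are assumptions made purely to illustrate the behaviour described above.

```typescript
// Illustrative ABR playback loop: fetch fragments over HTTP, measure throughput,
// and move up or down the layer list accordingly.

interface LayerInfo {
  bitrateKbps: number;
  fragmentUrls: string[]; // one URL per fragment (or byte ranges into a single file)
}

async function play(layers: LayerInfo[], onFragment: (data: ArrayBuffer) => void): Promise<void> {
  let layerIndex = 0; // start conservatively on the lowest bit rate
  const fragmentCount = layers[0].fragmentUrls.length;

  for (let i = 0; i < fragmentCount; i++) {
    const url = layers[layerIndex].fragmentUrls[i];
    const start = performance.now();
    const response = await fetch(url); // plain HTTP, so caches and CDNs just work
    const data = await response.arrayBuffer();
    const seconds = (performance.now() - start) / 1000;

    // Estimate connection speed from this fragment's download.
    const measuredKbps = (data.byteLength * 8) / 1000 / seconds;
    onFragment(data);

    // Switch up when there is comfortable headroom, down when falling behind.
    if (layerIndex + 1 < layers.length && measuredKbps > layers[layerIndex + 1].bitrateKbps * 1.5) {
      layerIndex++;
    } else if (layerIndex > 0 && measuredKbps < layers[layerIndex].bitrateKbps) {
      layerIndex--;
    }
  }
}
```

Real players add buffering targets and smarter bandwidth estimation, but the decision made for each fragment is essentially the one shown here.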

There are several different ABR streaming technologies available: Apple HTTP Live Streaming (HLS), Adobe HTTP Dynamic Streaming (HDS), Microsoft Smooth Streaming (MSS), and more recently MPEG Dynamic Adaptive Streaming over HTTP (MPEG-DASH). Each technology requires a complete ecosystem. The content must be prepared correctly, and the correct player must be used. All of the technologies work fundamentally in the same manner, using HTTP for content delivery in fragments.

Where these technologies differ is largely related to the structure of the underlying packages. For example, HLS for older versions of iOS requires a separate file for each video fragment. In contrast, most other packages store fragments for a layer in a single file, allowing the player to download fragments using HTTP byte range requests, which download a small part of a larger file.

Other differences in ABR technology relate to the viewer experience. Apple HLS, for example, provides for a dedicated key frame layer, allowing users to scrub through the video quickly. Other packages allow an audio-only stream with a poster image for extreme low-bit-rate situations.

Preparing Content
Preparing ABR content takes several steps. First, the desired packaging and layer structures need to be identified. Next, content must be encoded, checked for quality, packaged, encrypted and delivered.



ABR production workflow


Choosing Packaging and Profiles
Packaging choice is generally driven by what devices must be supported. Not every device supports players for every type of ABR streaming technology. One should therefore catalogue both the devices and the players that will be supported; the necessary packaging then becomes apparent.

The selection of optimal bit rates, frame sizes and frame resolutions will vary depending upon device types, connection types and encoding technology. Apple and Adobe provide excellent starting points with suggested profiles suitable for their ecosystems. However, practically speaking, the entire catalogue of devices, expected network connections and network costs must be considered when designing layers.

With these considerations, layer design is a balancing act between frame size, bit rate and quality. However, the actual encoding technology used may have the biggest effect upon quality. For example, one study performed by the MSU Graphics & Media Lab showed that the use of x264 encoding technology reduced the required bit rate by as much as 50 percent compared with other H.264 encoders at the same quality level. As a result, it is recommended that layers be designed while performing actual encoding tests with the final encoding technology.

Most packages, however, generally contain between 16 and 24 layers, so part of layer design involves paring down the number of candidate layers. It is best to select a few common native display frame sizes (such as 1080p) and then encode multiple bit rates at those frame sizes. Doing so avoids unnecessary performance degradation on players that use software scaling (particularly important for Adobe HDS). A hypothetical, much-abbreviated ladder is sketched below.
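
By way of example only, a ladder restricted to three native frame sizes might look like the following; real profiles should come out of the encoding tests with target devices described above.

```typescript
// Hypothetical layer ladder for illustration: a few native frame sizes,
// each encoded at more than one bit rate.
const exampleLadder = [
  { width: 640,  height: 360,  bitrateKbps: 400 },
  { width: 640,  height: 360,  bitrateKbps: 800 },
  { width: 1280, height: 720,  bitrateKbps: 1500 },
  { width: 1280, height: 720,  bitrateKbps: 2500 },
  { width: 1920, height: 1080, bitrateKbps: 4000 },
  { width: 1920, height: 1080, bitrateKbps: 6000 },
];
```

Keeping the number of distinct frame sizes small is what limits software scaling on the player side.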

Encoding, Packaging, Delivery and DRM
Each layer will require that a complete H.264 stream be encoded. With 16 to 24 layers, encoding an ABR package can easily require 20 times the processing power needed for a single H.264 stream. Fortunately, highly parallelized multirate H.264 encoding technology exists that re-uses information across the different streams. When combined with GPU acceleration, today’s encoding systems can offer 10 or 20 times the speed of CPU-based systems.

When preparing for multiple devices, an important aspect of encoding is transmuxing, the ability to re-use encoded H.264 streams across multiple package types. This prevents having to re-encode the same bit rates simply to package the video differently.

With on-demand content, it is important to perform QC checks on the different video streams. QC may be performed visually or by using automated tools that measure quality across all of the streams.

On-demand content often requires user authentication and protection prior to playback, which requires Digital Rights Management (DRM). When using DRM, the video must be encrypted during packaging, typically using AES 128-bit encryption. DRM systems typically have subtle requirements for how the encryption is performed by the encoder or packager, and it is important to validate that the two are compatible.
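
As a rough sketch of the encryption step, the function below applies AES-128 in CBC mode to a single fragment using Node's built-in crypto module. The function name and the per-fragment IV handling are assumptions for illustration; an actual DRM system and packager dictate their own key delivery, IV and padding conventions, which is exactly why the compatibility check mentioned above matters.

```typescript
// Illustrative AES-128 encryption of a media fragment (not tied to any specific DRM system).
import { createCipheriv, randomBytes } from "crypto";

function encryptFragment(fragment: Buffer, key: Buffer): { iv: Buffer; encrypted: Buffer } {
  if (key.length !== 16) {
    throw new Error("AES-128 requires a 16-byte key");
  }
  const iv = randomBytes(16); // per-fragment initialization vector (a simplification)
  const cipher = createCipheriv("aes-128-cbc", key, iv);
  const encrypted = Buffer.concat([cipher.update(fragment), cipher.final()]);
  return { iv, encrypted };
}
```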

Finally, content delivery will be performed, either as a compressed TAR file or in the native package form. Where possible, it is recommended that the entire production process (ingest, encoding, transmuxing, packaging, quality control, encryption and delivery) be combined into a single automated workflow. Manual steps will significantly slow production time and may result in errors. It is also recommended that the ABR production process be combined with non-ABR production into a single automated system. This reduces system maintenance costs, offers a single view into the overall content production for all distribution channels and allows workflow efficiencies such as unified metadata preparation and content preprocessing.

Conclusion
Preparing video for ABR streaming generally requires research up front to choose technologies and encoding profiles, and a well-integrated, accelerated encoding approach to ensure workflow efficiency. With today’s tools, it is possible to fully automate the ABR content production workflow with full integration into existing content preparation and delivery workflows.

By John Pallett, Broadcast Engineering

Lytro Unveils Perspective Shift and Living Filters

Lytro, creator of the world’s first consumer light field camera, announced a new light field capability for the Lytro camera, Perspective Shift, as well as new creative tools called Living Filters. These features will be available to customers starting December 4th via a free Lytro Desktop software update.

Perspective Shift lets Lytro photographers interactively change the point of view in a picture after it has been taken. On a computer or mobile device, viewers can move the living picture in any direction – left, right, up, down and all around.

When pictures are shared to the web, Facebook and Twitter, friends can experience Perspective Shift without needing any special software. Perspective Shift also works retroactively on any light field pictures previously taken with a Lytro camera.



In addition to Perspective Shift, Lytro announced a new way to enhance light field pictures with Living Filters. With a single click, Lytro photographers will be able to apply one of nine interactive filters to their pictures and change the look of the picture based on light field depth. Unlike traditional digital photo filters, Living Filters create additional effects as viewers interact with a picture. Living Filters are part of a free update to Lytro Desktop and work on all Lytro light field pictures, including those taken previously.

New Living Filters:

  • Carnival: Twist and distort your picture as you refocus and change perspective as if you’re in a funhouse of mirrors.

  • Crayon: Add a touch of color to a monochrome version of your picture. Click to focus and add color into your scene, or change your perspective and add color back into your scene as you explore.

  • Glass: Put a sheet of virtual glass into your scene. Everything in front of where you click will be unchanged, and everything behind will appear to be behind a piece of frosted glass.

  • Line Art: Reduce your scene to a grayscale outline, seeing more detailed lines where you refocus.

  • Mosaic: Create a tiled mosaic in the out-of-focus parts of your scene as you click or change your perspective.

  • Blur+: Significantly enhance the amount of blur in the out-of-focus parts of your scene.

  • Pop: Make parts of your scene pop out with extra detail and vibrancy when those areas are clicked.

  • Film Noir: Add a moody and stylized black and white look to your pictures, with a little bit of extra detail and color where you click.

  • 8-Track: Bring back the ‘70s with this filter that adds an aged, vignetted look to your pictures. Click to un-age parts of your scene and see them come back to life.





If you’d like to know more about how Living Filters work as well as future possibilities, read this whitepaper about the science inside.


Source: Lytro

How to Pick the Best DSLR Lens




By Kevin Good, Weapons of Mass Production

NEC StarPixel Codec Delivers JPEG2000 Quality at JPEG Compression Speeds





By Don Kennedy and Ryo Osuga, DigInfo

DASH-JS with ISO Base Media File Format Support

DASH-JS has been updated to the latest version of the Media Source API and now supports ISO Base Media File Format (IBMFF)-based media segments. Now the latest version (v.23) of Google Chrome supports the Media Source API by default, enabling the playback of WebM and IBMFF media segments.
DASH-JS is a seamless integration of Dynamic Adaptive Streaming over HTTP (DASH) into the Web using the HTML5 video element. It is written in JavaScript and uses the Media Source API of Google’s Chrome browser to provide a flexible and potentially browser-independent DASH player.
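
As a minimal sketch of the mechanism involved (not the actual DASH-JS source), the function below shows how a player can feed media segments to an HTML5 video element through the Media Source API, shown here in its modern, unprefixed form; the codec string and segment URLs are placeholders.

```typescript
// Illustrative Media Source API usage: append fetched segments to a <video> element.
async function startPlayback(video: HTMLVideoElement, segmentUrls: string[]): Promise<void> {
  const mediaSource = new MediaSource();
  video.src = URL.createObjectURL(mediaSource);

  await new Promise<void>((resolve) =>
    mediaSource.addEventListener("sourceopen", () => resolve(), { once: true })
  );

  // The MIME type and codecs string must match the segments being appended.
  const sourceBuffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.42E01E"');

  for (const url of segmentUrls) {
    const segment = await (await fetch(url)).arrayBuffer();
    sourceBuffer.appendBuffer(segment);
    // Wait for the buffer to finish processing before appending the next segment.
    await new Promise<void>((resolve) =>
      sourceBuffer.addEventListener("updateend", () => resolve(), { once: true })
    );
  }
  mediaSource.endOfStream();
}
```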

By Stefan Lederer, ITEC

Blackmagic Cinema Camera Review – DSLR killer?

The Blackmagic Cinema Camera is the most Apple-like camera I have ever used; in fact, it is the only camera I would describe that way. Like the iPhone, it stands out on the market as better than the mass-market clones, at least on paper. Is it worth the wait?

When the Blackmagic Cinema Camera appeared out of nowhere back in April, the first thing I thought was “finally, a camera designed from scratch for filmmakers that does not cost $15,000”, and my second thought was “this is the future, better pre-order it quick”.

The short wait to a July release date now seems like a different era altogether but if ever there was a camera worth being patient over, this is it.

Despite being a niche product, at $3000 the Blackmagic Cinema Camera has Apple-like mass market potential. Apple themselves have seemingly embraced it as a flagship product for Thunderbolt; earlier this year, during a major product launch, the Blackmagic Cinema Camera was pictured on a keynote slide hooked up to a new MacBook Pro via Thunderbolt. In the same way that musicians now use affordable software to mix and record, cinematographers have the Blackmagic Cinema Camera.


DSLR Killer
They also have DSLRs of course, and these are readily available at a huge range of specs and price ranges. Does the Blackmagic Cinema Camera beat them on the image quality front? The short answer is yes.

Even in low light it is a performer, which surprised me. This is genuinely a mini Alexa, and going back afterwards to compressed 8-bit 1080p, with less detail, a less organic look, blotchy noise and all those burnt highlights and crushed blacks, really is shocking the first time you compare a DSLR to the Blackmagic Cinema Camera. This is a big subject, and I am doing a very broad shootout in Berlin with Slashcam at the moment which will be online in a few days. I can’t cover everything at once in one review, so I am spreading stuff out – low light, dynamic range, resolution – it will all become clear. I’ll round up my thoughts about image quality in this review, under the image quality section later in the post.






Disruptive
The BMCC has a minimalist approach to design. I’m a big fan of simplicity.

This camera is a disruptive piece of technology that moves things forward. Hopefully the big companies will follow its lead – not in 4 years, but now.

Technology has made cost effective digital cinema cameras a reality but not many companies have actually made one!

Panasonic have served us well with the filmmaker orientated GH3 ($1299) and before that the GH2 punching well above its price tag – but there’s no escaping that these are predominantly still photography cameras, rather than dedicated filmmaking tools. Sony with the FS100 (now $4000 used) comes close to ‘prosumer’ pricing but at launch it cost twice as much as the Blackmagic Cinema Camera. GoPro meanwhile are doing a superb job in the area of extreme sports filmmaking and also with their CineForm raw codec. Canon blazed a trail with DSLR video in the consumer camera business but then seemed to backtrack massively in order to focus on maintaining huge margins on Cinema EOS products. I’m concerned their DSLRs haven’t moved forward or innovated enough for filmmakers and they will lose sales.




 
Blackmagic Cinema Camera and Sony NEX VG-900


I think the competition for this camera is the Sony FS100, Panasonic GH3, Canon 5D Mark III and Nikon D800. Aside from the ill-fated Panasonic AF100, most other cinema cameras cost a minimum of $15,000, including the ‘bargain’ battle-tested Red offerings once you add up all the bits. For the 1% of filmmakers, price isn’t a factor, but for the 99% of us it is. The consumer market is larger than the professional market, and Blackmagic Design have taken a good business decision with this low pricing.

Jim Jannard of Red once tried to make such a camera; the aim was 4K for $4k. Blackmagic Design, a post-production company from Australia, until now had no camera division and no experience of making one. Yet here they are, delivering a near-3K raw image for $3k where Red did not. Pretty impressive!

Grant Petty seems to be leading a very engineering-orientated company at Blackmagic Design; they’re more nimble than the giant corporate manufacturers and less political. Canon, Nikon and the rest of the big guys can really take a lesson from Blackmagic when it comes to cinema and video. Already Blackmagic Design have responded to demand for a Micro Four Thirds mount version, with no committee meetings or 2-year lead times… They just did it.

The Apple-like nature of the Blackmagic Cinema Camera (BMCC) is in a multitude of aspects. It shoots ProRes, the native editing format of Final Cut Pro. The build quality is up to Apple standards. In fact there’s no plastic on the body at all. Rubber is used where plastic would be on a DSLR and the chassis is really tough aluminium. Apple’s other major strength is software and the software side on this product is unbelievably good. You will not be getting an application like Blackmagic DaVinci Resolve with your DSLR any time soon. Silkypix anyone? Canon EOS Utility? Blackmagic’s great strength is not just in hardware but in software. So important.

The large touch screen is more like a stand-alone monitor for a DSLR, but built in. It is much more practical than a tiny DSLR LCD, though it is a shame it cannot be articulated to a different angle. It is slanted upwards for tripod use, which is better than dead-on straight like the 5D Mark III’s screen, but shots above eye-level are tricky. The other quirk is that on my EF Blackmagic camera I’ve had a few aperture-related bugs, and the iris control system is more awkward than it should be. You hold the iris button and use the bottom back / forward playback keys to change the iris. Auto-iris is set upon a single press of the iris key, which usually gives you roughly the right exposure depending on shutter angle, etc. But I prefer to set exposure with NDs, not aperture. Why not have dedicated up / down keys instead of the one iris button, or better still a jog wheel?




 
The Blackmagic Cinema Camera eats dynamic range for breakfast


A lot has been said about the Blackmagic Cinema Camera’s ergonomics being odd… But iris quirks aside, I don’t agree it is ergonomically poor. The extra weight over a DSLR really helps the look of handheld camera work. The controls have very positive feedback, so you know the camera is reacting to input. It needs a rig where a DSLR or an FS100 needs a rig – where’s the drama? The boxy form factor isn’t a problem but rather an advantage, because it is malleable to individual needs; it doesn’t box you in, and the camera is very compact.

The screen is crisp and the double-tap magnified focus assist shows superb quality at 1:1 pixel level. Unfortunately with the current firmware it only magnifies the centre and you can’t drag it around on the screen like you can on the Panasonic GH3. This hopefully can be changed in a firmware update. There’s also focus peaking on the press of a single physical button and that tends to work well, and stays on in the magnified focus assist too. Generally the camera lets you get on with the shoot and it saves the complexity for post. Just what I want.

The user interface is very responsive and quick to use, with no nested layers of menus to scroll over or dig down through. Despite this simplicity and Steve Jobs-like minimalism on the surface, there’s a massive amount of power and depth to it as well. There’s no way I can cover everything there is about this camera in a review. There will be books written!


Latitude and 2.5K Raw
Shooting with the Blackmagic Cinema Camera, you have to throw the DSLR rule book out of the window. Until now I’ve tended to expose with the highlights in mind, preventing any overexposure that would blow them out. The Blackmagic Cinema Camera goes deep into the highlights, so I expose by bringing up the blacks, and if parts of the scene are overexposed I just don’t worry that much. This is an extreme example, but just look at the latitude this thing has in raw…





I assumed this shot was dead. No way to bring it back. I was wrong!

You basically get the full extent of ISO from 200 to 1600 in post… And more.

Imagine overexposing a DSLR shot at ISO 1600 that should have been done at ISO 200. You would not be able to recover that later, but with this camera you can.

All this dynamic range (in post!!) and the 2.5K resolution are wonderful to have, but you absolutely cannot mess around on the post side if you are editing raw. You need the right hardware, especially the right dedicated graphics solution, and tons of hard drives if you plan to archive regular work in raw format.


Editing Hardware for Raw
At the bottom of the price scale you can get away with a $150 Nvidia GeForce 560 Ti 1.5GB. Resolve doesn’t run well on my 2011 MacBook Pro / ATI Radeon, even though the graphics card was cutting edge in the Apple line-up only last year. ATI cards use OpenCL, and Resolve prefers Nvidia CUDA for processing video. (The solution for 2011 MacBook Pro users is to try Adobe SpeedGrade instead – this is a big subject and I will write more about it in a future article.)

I’d happily shoot ProRes or Avid DNxHD on this camera because the image in 1080p is very good. Good detail, grades well from the flat film-mode profile. There’s also a Rec. 709 video gamma mode if you want punchy material straight out of the camera like on a DSLR rather than grading from a flat image profile.

The dilemma, however, is that 2.5K is a big step up from 1080p, and you see this on a display which supports 2.5K, like the Dell U2711 or the new MacBook Pro with Retina display. It is hard to go back to plain 1080p once you have seen it. 2.5K raw also upscales to 4K better than 1080p does.

There’s no 2.5K ProRes mode so you have to transcode the 2.5K raw to 2.5K H.264 or CineForm in Resolve. Then you are left with a difficult decision – whether to dispose of the raw material or archive it. At 7GB per minute or 45 minutes per 240GB this isn’t a decision taken lightly.

If you’re a creative filmmaker doing one very important short film a year, this is less of a problem. If you’re a production house or a regular shooter churning stuff out in 1080p it makes absolutely no sense to shoot raw. But artists like Tom Lowe of Timescapes or myself love the challenges and the extra image quality is FAR more important to me – despite the slower workflow and insane space requirements.

Of course it is entirely up to you to decide which format is right for you and what kind of shooter you are. The camera gives you a choice! A lot of the fuss about both the ‘difficulty’ of shooting raw and whether it is needed should be entirely ignored. It isn’t difficult at all. You press a button, and your hard drive fills up! This situation will change as time progresses.

I need raw. I love raw. And so will a great many more users of this camera, especially professional photographers at the highest level who are used to a raw workflow in their stills day job and don’t want to compromise when it comes to motion. It can save your ass, speed up a shoot, save on lighting complexity and costs, take the risk out of operating the camera and, in the end, possibly save you more money than it costs in hard drives!


Windows Rig for Raw
The most effective way to edit raw without throwing out your Mac or breaking the bank is to invest in a Windows PC. Just be aware that you will lose your hair after a fortnight of constant FAFFING about. Windows is not OS X. It never ‘just works’. You need to make sure drivers are updated and that the planets are aligned. If you buy a very fine Dell XPS second hand like I did, only to discover it lacks built-in WiFi and the operating system is in German, all I can say is: be prepared for hell. Nothing is simple in Windows land. Those who like tinkering will understand and have probably long mastered this (as I did as a PC user 6 years ago), but those from a predominantly Mac background will likely weep several times over unrelated-to-editing system tasks before their PC DaVinci Resolve editing rig finally works properly. Or even pairs with their Bluetooth keyboard. Or reads the BMCC SSD, because the camera only uses Mac-formatted drives and Windows… you’ve guessed it, you need a $50 driver for that! Check out MacDrive 9 Standard – it works great.

The pain is probably worth it to most. To get the same performance on a Mac Pro as I do on my $999 Windows PC, I’d have to hand a hard-earned $5000 to Apple… AND have the hassle of swapping out the ATI graphics cards for Nvidia ones with CUDA support. A truly crazy situation, and something Apple really need to address. Their iMac GPUs are based on mobile models, and their Mac Pros are rather overpriced. You really need a beefy desktop Nvidia graphics card – the ones gamers use – to edit raw with Resolve on a Mac.


Resolve as an Editing Tool
DaVinci Resolve may conjure up thoughts of little-understood Hollywood colourists and production hardware that wouldn’t look out of place on the bridge of the USS Enterprise, but it isn’t actually that complex. It even has a Final Cut Pro 7-style multi-track video timeline and editing functionality, so you can deliver the entire film from Resolve if you need to, without further work in Premiere or Final Cut. I found myself very happy to use Resolve for editing raw as well as grading.

The only things missing for me were multiple audio tracks and the added plugins and features I use in Premiere for slow motion, etc. You can simply render out the footage (with camera-recorded audio) from Resolve’s timeline and fine-tune the edit in the usual NLE package of your choice. With a suitable graphics card, the final rendering performance in Resolve is much faster than Premiere Pro CS5.5 delivering a DSLR timeline on the CPU – almost real-time 24fps, in fact.


Image Quality
Strangely, I really don’t feel this camera has had the plaudits it deserves on the image quality front yet. Why not? It is utterly incredible.

Resolution in 1080p is up to the standard of the Canon C300, which is remarkable enough for a $3000 camera, yet in 2.5K raw it exceeds that camera and almost every other camera on the market aside from 4K models and the 2.8K Alexa. It isn’t just 2.5K, it is good 2.5K.

Also, for 1080p delivery, ProRes is a better codec than the one in the Canon C300, and the camera records in Avid DNxHD as well. Apple and Avid native editing straight off the camera is a far superior solution to any so-called broadcast-ready MPEG codec!

All this is huge stuff!! And $3000!




 


In terms of moire and aliasing it stands up well. On this fine brick pattern, for instance, there are no real issues. In fact I haven’t had any real-world problems yet, but that isn’t to say there aren’t any. There are some issues with micro-moire on very fine textures (very faint outlines of red or green pixels which don’t shimmer and can be removed in post), but you have to really pixel-peep and it isn’t noticeable otherwise. If you remember the 5D Mark II and the way it used to give some false colour aliasing and false detail over very fine textures, this is similar, but the output gives far more detail to begin with and there isn’t the same propensity to flare up into moire hell, because the sensor isn’t downscaling like on a DSLR.

One of the biggest surprises I had was to have my trusty Panasonic GH2 throw up moire where the Blackmagic Cinema Camera had none. This happened whilst setting up the next test shot below. Now in real-world shooting outside of just tests and charts the GH2 hardly ever suffered from moire. I never had any real issues in all of the 2 years I was using the GH2!! It is pretty safe to assume this isn’t going to be a big problem on the Cinema Camera.



Smoother Gradation, More Film Like Image
The following shots aren’t a moire test; I plan to do one later this week on a chart with Slashcam.de.

These are to test gradation, banding, highlight fall off and colour.

Bear in mind these centre-crops are 8-bit JPEGs on the web, so they don’t really do the Blackmagic Cinema Camera proper justice, but the difference is still clear.

All test shots are F2.8, ISO 800…




 
FS100 is banding hell in the highlights!


I have two remarks. The first is that 2.5K does make a difference over 1080p – much nicer resolution and a finer grain of noise. Look at the silver lens barrel bottom right and the black grip on the Fuji X100. The second is that with 12-bit raw you really can have as gentle or as steep a roll-off from the highs and lows as you want, all in post. Here I tried a very gentle slope, bringing the shadows up and recovering the highlight in the middle. On the FS100 shot, believe it or not, I did exactly the same, and look what happened… It fell apart – badly. The highlight roll-off is practically in 3 stages!! I wasn’t able to get it as flat or as organic looking. And I like the FS100!

The Blackmagic shot isn’t as punchy as the FS100 in this example nor is it meant to be. That punchy contrast is a shortcoming of the FS100. I graded for a low contrast, gentle roll off and the FS100 was not capable of giving me that. The Blackmagic stuff can be graded for a punchy high contrast look if you need it.

Keep in mind the above Blackmagic shot as we turn our attention to the GH2 and 5D Mark III.





Now the 5D Mark III does better than the FS100 in the highlights, but the shadows are crushed and noisy. On the GH2, shadows and gradation can be weak points, and it shows pretty badly here. Trying to match it in post to the Blackmagic on this test was impossible. There is some banding where smooth gradation should be, and absolutely nothing in the shadows despite my attempts to lift them. Exposure was identical on all shots, but those blacks just were not recorded on the GH2. Look at the camera grip: it is jet black, the detail has gone. As with all the other cameras aside from the Blackmagic, highlights are also burnt on the GH2, and there is a much harsher electronic look around the edge of objects next to the light source.

Even when converted to 8bit JPEG for the web, the Blackmagic is in a different league to the FS100 and the current cream of the crop DSLRs for handling of smooth gradation.





Now if you’re wondering what this has to do with a cinematic image, the key to this is how the eye sees. None of the distracting electronic artefacts you see in the strips of tones and shades above should be there. They’re not natural. Not cinematic. The Blackmagic Cinema Camera is more natural. When you want more contrast between bright and dark areas of the image you simply use a steeper curve in Resolve – and in doing that, the image responds beautifully. It doesn’t get a bad case of banding or weirdness and it doesn’t get burnt out.

The resulting full frame is smooth and cinematic, and looks more organic than the same shot on a DSLR.







Of course the bottom shot is from a DSLR.

When you chop all this stuff up and present it on a blog, by the way, the differences are far less apparent than when you see it on a job, in a theatre, or in motion on a 2.5K display.

The verdict is clear – this camera is an extremely cinematic beast.

Next I have graded two shots the same way in Resolve, to reveal optimal highlight and shadow detail just before the point where the noise in the shadows gets too much or the highlights begin to fall apart. Unfortunately, on the 5D Mark III the highlights fall apart by default in-camera before grading, so easily do they burn out. The Blackmagic Cinema Camera here is all over the 5D Mark III, producing a far more life-like image as the eye would see it: tons more latitude, crisp detailed blacks, and none of the extremely sudden fall-off in the highs or burnt-out highlights.




 
The Blackmagic shot was graded in Resolve using the film dynamic range
and the cinema camera LUT applied


How much of the dynamic range is usable? To my eye – a lot. How good is it in low light? Actually very good. From my early impressions of low light, you have to boost a DSLR to ISO 6400 just to match the black level on the Blackmagic Cinema Camera at ISO 1600. In doing that you get far less dynamic range on the DSLR and any normal-light areas or specular highlights get burnt out. Not so on the Blackmagic.



 
Low light test at the equivalent of ISO 1600 on a DSLR, shot at 360 degrees shutter (1/25), F2.0


Sensor Size
The Blackmagic Cinema Camera has the same field of view as the Olympus OM-D E-M5 in video mode (with IS enabled you get a small further crop on that camera). The Blackmagic is a 2.3x crop. Micro Four Thirds is a 2x crop, and the Panasonic GH2’s multi-aspect sensor is a 1.86x crop in 16:9, allowing for a wider-than-4/3-ratio sensor.

Canon APS-C is 1.6x, Super 35mm and the FS100 are 1.5x, and full frame is of course 1.0x, with no crop factor to consider.

I was able to match the field of view on the BMCC to my 5D Mark III, Sony FS100 and Panasonic GH2 by using the Canon EF 40mm F2.8 pancake on the Blackmagic, a 90mm on the 5D Mark III, a 58mm on the FS100 and a 45mm on the GH2; the arithmetic behind those choices is sketched below. The BMCC has noticeably less shallow depth of field at 40mm F2.8 than a full frame sensor enjoys at 90mm F2.8. To compensate, you need to shoot from further back with a longer focal length and a faster aperture; it also helps to bring your subject closer to the lens.
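
As a quick check of that arithmetic, the full-frame-equivalent focal length is roughly the actual focal length multiplied by the crop factor:

```typescript
// Full-frame-equivalent focal length = actual focal length x crop factor.
const equivalent = (focalMm: number, crop: number): number => focalMm * crop;

console.log(equivalent(40, 2.3));  // 92mm on the BMCC, close to the 90mm used on the 5D Mark III
console.log(equivalent(58, 1.5));  // 87mm on the FS100
console.log(equivalent(45, 1.86)); // roughly 84mm on the GH2 in 16:9
```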

Is this a deal breaker? No way.

This is precisely what I’ve been doing for the last 2-3 years on my Panasonic GH2. It is just that the shallow depth of field effect is more pervasive on the 5D Mark III. This can work against the image as well as for it. It isn’t always desirable to have. The Blackmagic Cinema Camera is certainly easier to focus. In low light where stopping down to F5.6 would kill the exposure on full frame, you can keep it at F2 and enjoy manageable focus.

I feel with any sensor smaller than 1″ (Nikon J1, Sony RX100) or smaller than Super 16mm you reach the point where the aesthetic drops off in a bad way. The Blackmagic Cinema Camera’s sensor is not at that point and can safely be considered ‘a large sensor camera’ in video terms.


Lenses
Here’s what I’ve been using so far on the Blackmagic Cinema Camera:
  • Tokina 11-16mm F2.8 – a versatile wide angle on the Blackmagic camera, not that there are many other alternatives. Less distortion than the Sigma 8mm and a faster aperture

  • Samyang 35mm F1.4 – a good standard lens, not too long. Sharp wide open and that fast aperture comes in very handy for both shallow DOF and low light

  • Canon 135mm F2.0L – not a cheap telephoto option but a very very good one

The new Canon EF 40mm F2.8 STM pancake works on the camera even though the focus ring is fly-by-wire. I’ve had no issues with it. I prefer the Samyang, though! The pancake is far better on the 5D Mark III, where you get a nice mild vignette wide open and a much shallower depth of field.

The 11-16mm Tokina is equivalent to a 24-35mm wide angle on the 5D Mark III.



 
A wide angle isn’t a problem on the Blackmagic Cinema camera


What about the camera mount version – EF or MFT? The Canon EF mount is actually quite adaptable; you can use Contax Zeiss, Nikon F, M42, Leica R and Olympus OM glass. Your Canon EF and EF-S lenses won’t work on the Micro Four Thirds mount without an adapter that controls the aperture electronically, and there isn’t really a decent one of those on the market yet, in my opinion. What is enticing for me about the Micro Four Thirds mount Blackmagic camera coming in 2013 is that I can use OCT 19 anamorphic LOMO lenses on it, as well as PL and Leica M, plus Canon FD glass. Using my Voigtlander 25mm F0.95 and SLR Magic 12mm F1.6 on this camera would also be great for low light.

However, a Blackmagic Cinema Camera Mark II with an E-mount and a Super 35mm sensor would be even more desirable. The Metabones adapter for EF to E-mount allows your Canon glass to work pretty much perfectly on E-mount. This way you have the best of both worlds and no crop factor to consider over the normal cinema standard. You could use the same cinema lens on an Alexa and a BMCC Mark II and the field of view would be the same.


Conclusion
12bit raw might be the headline spec but it isn’t all the camera does well.

When people didn’t have 24p, frame rate was what made the fabled cinema look. When people didn’t have large sensors, shallow DOF was suddenly the cinema look. Inundated with large sensors and shallow DOF, some might now consider a raw codec as an essential part of the cinema look. All this kind of thinking is flawed. The cinema look is about carefully balancing every single aspect and minimising the weird stuff. Get rid of the compression, get rid of the banding, the stepped fall off to highlights, increase the resolution, use a sharp lens, an intra-frame codec, 24p, the list is almost endless!

Thankfully the Blackmagic Cinema Camera does exactly that – minimising the weird stuff and making the most of the good stuff.

The Blackmagic Cinema Camera produces the most organic, least electronic looking and most cinematic image ever outside of Arri, Red and Hollywood – and it costs just $3000. Extraordinary stuff.

Like the people at Arri, the design team at Blackmagic clearly knows what makes a cinematic image. None of the DSLRs have the same organic feel and lack of digital artefacts that the Blackmagic Cinema Camera is blessed with. Why can’t they give us that better codec? Why can’t they give us less moire, a finer grain of noise, more detail? Why can’t Canon, Sony and Nikon put high bitrates and intra-frame codecs in there? The answer is that they have carved up the market into consumer / pro and stills / motion. Blackmagic are not carving up the market, rather creating a niche for themselves. It just so happens that their niche is a lot better in terms of performance than the entire mass market put together.

It isn’t without pain though. The pain of waiting. The pain of investing in hardware to edit raw on. The decisions about rigging, batteries, SSDs. The pain of learning a raw workflow, of transcoding, rendering, of getting to grips with the admittedly very intuitive DaVinci Resolve for the first time. Actually pain isn’t the right word. I’ve enjoyed all of this process, it has been an adventure – but some may not.

If you don’t have a selection of lenses to suit the sensor size or lens mount that could be a considerable extra investment too and a selection headache. It won’t be worth the artistic gain in image to some.

This camera, and especially future versions of it, has the potential to challenge the big players in cinema cameras and DSLRs – but in my view the sensor supplier should be changed in light of the ‘dirty glass’ issues, and the CMOS updated to Super 35mm size. Aptina (supplier of the only good part of the Nikon J1 mirrorless camera) make some interesting CMOS sensors and custom designs; it would be great to see Blackmagic partner with suppliers who can help them mass-produce this camera without diluting the raw power it wields in terms of the image.

If I were to sum up the Blackmagic Cinema Camera it would not be a DSLR killer or Canon C300 competitor. It is a completely new class of camera. It is a baby Arri Alexa. And there’s no higher artistic praise to bestow on a piece of camera hardware than that.


Pros
  • Cinematic overall output
  • Under the price of a ready-to-shoot Scarlet, it beats everything for resolution & dynamic range, including the Canon C300
  • Film like noise grain
  • Much more latitude in the highlights than a DSLR
  • Black detail can be pulled up more than on the FS100 and DSLRs
  • Very high build quality with no plastic used at all (rubber and metal)
  • DaVinci Resolve is a superb editing package and a colourist’s dream
  • Responsive in-camera playback of raw
  • Responsive touch-screen and user interface
  • Thunderbolt and HD-SDI, no wobbly HDMI. Robust SSD port and card door
  • Large screen negates need to use external monitor or EVF in many situations
  • Straightforward and minimalist approach to the design of both software and hardware
  • Superb battery life with external battery solution, internal battery useful to have as a back-up
  • Affordable media
  • Affordable raw editing with correct GPU on a PC
  • The camera has ‘soul’ unlike many mass produced products


Cons
  • Potentially large extra investment in lenses, hardware, etc. for some shooters
  • No built in ND filter
  • No Super 35mm sensor size
  • No HDMI port for lower end external monitor / EVF options
  • No global shutter mode, rolling shutter not the best
  • Cinema DNG raw not as space efficient as GoPro CineForm compressed raw
  • No 2.5K recording option other than raw (2400 x 1350 80Mbit Intra-frame H.264 would be nice option for those who only do minimal grading)
  • Screen not articulated (difficult to see from low angle when camera is above eye-level)
  • Narrow viewing angle of LCD panel compared to DSLRs (polarises quite easily)
  • Poor screen visibility in strong sun light
  • Electronic aperture control on EF lenses is fiddly – should be two buttons or a jog dial
  • Final packaging issues – debris inside the lens mount on some cameras shipped so far
  • Fluff and debris tends to cling to rubber on rear of camera and cannot easily be wiped clean

Source: EOSHD