H.264 Video in Firefox for Android

Firefox for Android has expanded its HTML5 video capabilities to include H.264 video playback. Web developers have been using Adobe Flash to play H.264 video on Firefox for Android, but Adobe no longer supports Flash for Android. Mozilla needed a new solution, so Firefox now uses Android’s “Stagefright” library to access hardware video decoders.

Supported Devices
Firefox currently supports H.264 playback on any device running Android 4.1 (Jelly Bean) and any Samsung device running Android 4.0 (Ice Cream Sandwich). We have temporarily blocked non-Samsung devices running Ice Cream Sandwich until we can fix or work around some bugs. Support for Gingerbread and Honeycomb devices is planned for a later release.

To test whether Firefox supports H.264 on your device, try playing this Big Buck Bunny video.
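
Alongside loading a test clip, a page’s own script can check what the browser reports via HTML5’s canPlayType(); a minimal sketch (the codec strings are common baseline examples, not anything specific to Firefox for Android):

```typescript
// Feature-detect H.264/AAC (and WebM) support from page script.
// canPlayType() returns "probably", "maybe" or "" per the HTML5 spec.
const probe = document.createElement("video");

const h264 = probe.canPlayType('video/mp4; codecs="avc1.42E01E, mp4a.40.2"');
const webm = probe.canPlayType('video/webm; codecs="vp8, vorbis"');

console.log(`H.264/AAC: ${h264 || "no"}  WebM: ${webm || "no"}`);
```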

Testing H.264
If your device is not supported yet, you can manually enable H.264 for testing. Enter about:config in Firefox for Android’s address bar, then search for “stagefright”. Toggle the “stagefright.force-enabled” preference to true. H.264 should work on most Ice Cream Sandwich devices, but Gingerbread and Honeycomb devices will probably crash.




If Firefox does not recognize your hardware decoder, it will use a safer (but slower) software decoder. Daring users can manually enable hardware decoding. Enter about:config as described above and search for “stagefright”. To force hardware video decoding, change the “media.stagefright.omxcodec.flags” preference to 16. The default value is 0, which will try the hardware decoder and fall back to the software decoder if there are problems. The most likely problems you will encounter are videos with green lines (see below) or crashes.





Giving Feedback/Reporting Bugs
If you find any video bugs, please file a bug report here so we can fix them! Please include your device model, Android OS version, the URL of the video, and any about:config preferences you have changed. Log files collected from aLogcat or adb logcat are also very helpful.

By Chris Peterson, Mozilla Hacks

The Automated File-Based QC System at NRK

Between 2007 and 2009, NRK carried out a project called the “Programme Bank”, the main goal being to transform its TV production infrastructure to a fully file-based platform with an incorporated MAM system.

Today, the Norwegian broadcaster is running an Interra Systems Baton Enterprise Edition (Windows) system with 28 core licences for its automated file-based QC system.

This article describes the background to the project, the technical details of the Baton system that has been installed, along with NRK’s experiences with the system during the setup and initial phases.

BitTorrent to Start Testing Live P2P

BitTorrent has just posted a call for broadcast engineers to help with the building of BitTorrent Live. The service has been in testing for some time, and the company is now inviting broadcasters to join these tests.

BitTorrent Live is a new peer-to-peer live streaming protocol. It allows content creators to scale their reach to audiences of millions with near-zero latencies and minimal infrastructure investment.

“Built with users, from scratch, it’s designed to take the principles of the BitTorrent protocol, and apply them to streaming,” according to BitTorrent’s call to broadcasters, “That means: no barriers to broadcast. That means: the more people who tune in, the more resilient your stream. That means: you can share video with a massive audience, in realtime – without bandwidth costs or infrastructure requirements.

“We’ve been conducting regular tests with users (props to our intrepid volunteers), and have achieved results at swarm sizes of a few thousand. Now, we’re inviting qualified broadcasters like you to help us build something amazing.”

Leveraging the lessons of the original BitTorrent protocol, BitTorrent Live has been designed from scratch as the perfect means of sharing events to the masses in real-time, but without the astronomical bandwidth requirements that traditionally constrain content creators.

Every viewer that joins a swarm extends its reach by sharing pieces of the video with other viewers. Media is delivered with stunningly low delay by utilizing a UDP Screamer protocol. Video and audio are transmitted using the industry standard H.264 and AAC codecs, providing the highest quality.

On the company’s website it says: “BitTorrent Live is still under heavy development, and as such, is available as a technology demonstration only. The ability to run a video source is not currently available to end-users. Once the protocol has been finalised we expect to allow user generation of content. The experience may not be perfect yet, but we strive daily to improve it, and welcome your input and experiences.”

By Robert Briel, Broadband TV News

EBU Crashes Heads Together Over HD

It seems that the EBU (European Broadcasting Union) has decided it is time to take the bull by the horns over ultra HD.

After all, it is now six years since the first production and transmission of ultra HD was demonstrated by Japanese broadcaster NHK in Tokyo, and still we seem no closer to consensus over standardization of the critical parameters such as image format, frame rate and codec type. There are many options on the table, and convergence has been hampered by confusion over what specifications are required or desirable to deliver the ultimate quality of experience for varying screen sizes.

There are also the constraints of cost and available bandwidth, with future evolution of HD dependent not just on increased network capacity, but also improved compression ratios, which is why the emerging HEVC (High Efficiency Video Coding) is important.

Pressure on bandwidth will come not just over distribution, but also contribution, given that pictures captured for ultra HD at 3840 x 2160, at 300 fps as has been proposed as a unifying figure, would generate streams at 52Gb/s. This will certainly give cause to think again about sending uncompressed raw video as some broadcasters have been doing, and there will be renewed demand for improvements in compression at the contribution stage.
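
As a rough sanity check on that figure (assuming 10-bit 4:2:2 sampling, roughly 20 bits per active pixel, which is my assumption rather than one stated in the article):

```latex
3840 \times 2160 \times 300~\text{fps} \times 20~\text{bits/pixel} \approx 49.8~\text{Gb/s}
```

Adding ancillary data and blanking brings the stream into the region of the quoted 52Gb/s.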

The EBU has attempted to bring order to the mounting chaos of future HD standards, clouded further by the 3-D issue, by setting up its Beyond HD group. But, realizing that it was not much use just debating the future of HD behind closed doors among its European members, the EBU has reached out to manufacturers including Sony and Panasonic, as well as non members, notably NHK itself.

It met with these three in Geneva recently, along with BSkyB, to discuss issues of harmonization, which has been made more urgent by a clear move among manufacturers to push ahead and market new TVs next year. They are all desperate for products that will raise margins after the relative failure of 3-D so far, while there is a limit to the premium they can charge for smart TVs now that Internet connectivity is almost taken as a given and is not that much of a selling point.

The EBU did well by doing its homework first and thrashing out key issues while identifying what sort of roadmap made sense for HD given the display technologies, compression algorithms and bandwidth that were likely to become available over the next decade. The first task was to define what “Beyond HD” was. For some, it begins with 1080p, since current HD services are normally either 720p or 1080i, which both represent compromises. 720p with progressive scan is optimal for sport and fast-moving action, but sacrifices resolution, which can result in sub-optimal quality for content with a lot of detail but not necessarily fast action, such as art documentaries and some nature programs.

1080i can look juddery during fast action because, with interlacing, every alternate line changes only with every second frame, but it gives higher picture resolution than 720p. So 1080p, with progressive scan, combines the best of both and for most people would be regarded as the pinnacle of quality at present, but it is only starting to be deployed.

However, the EBU decided that the industry was already on the way to 1080p, and so defined “beyond HD” as the future beyond that, likely to emerge over a four- to 10-year time span, starting with some variant of ultra HD at 3840 x 2160 resolution. That is double the 1920 x 1080 of 1080p in each direction, or 4X the number of pixels over the screen area.

But, this then raised the next big question, which was how much resolution is desirable, under what circumstances, and how much is affordable in terms of bandwidth or investment.

The first point to note, as the EBU did, is that frame rate has to increase almost in proportion with the resolution, so the toll on bandwidth is even greater than some broadcasters will have originally anticipated.

Frame rate has to increase because, as the resolution gets higher, the jump of picture elements between successive frames becomes more perceptible. Yet, proposed deployments of 1080p are set actually to reduce the frame rate in order to avoid increasing bandwidth too much, which would make the whole exercise pointless. In fact, 1080p needs a frame rate of at least 50 fps, and ultra HD, or 4K as it is often called, will require 100 fps. Some trials have been looking at a higher frame rate of 120 fps for 4K, which immediately introduces a conversion problem if content shot at one rate then has to be displayed on TVs that support another. For this reason, there are proposals to capture video at the high rate of 300 fps, partly because this can be cut down readily to both 120 fps, by dividing by five and then multiplying by two, and to 100 fps, by dividing by three. It would surely make more sense to standardize on 100 fps, but that remains to be seen.

The next question is over resolution itself, with the starting point being a law called Rayleigh’s criterion, which defines the smallest distance that can be resolved by an imaging system, determined by the wavelength of the light being received and the diameter of the objective lens — in this case, the pupil of the human eye. While this varies slightly according to the color content of the image and the individual concerned, it is around 1/60th of a degree. Under normal vision, the viewing angle is within 30° horizontally and rather less vertically, so doing the math, that comes to a maximum of 1800 picture elements across the width of the screen. That is just covered by 1080p with its 1920 pixels across the horizontal.
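
Spelling that math out (taking 1/60° as the smallest resolvable angle and 30° as the horizontal viewing angle, as above):

```latex
N_{\text{horizontal}} = \frac{30^{\circ}}{1/60^{\circ}} = 1800~\text{picture elements}
```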

At first sight, then, there seems to be no call for anything beyond 1080p at all. But, this reckons without the impending revolution in display types, with huge wall-size screens now coming over the horizon. It is true that the smallest angle that can be resolved depends not on the screen size but on the angle of viewing. On that basis, a giant screen viewed from 100ft does not need any more picture elements than, say, a tablet close up, although each pixel would have to be proportionately larger.

In practice, though, large screens do require more picture elements because there are situations where they may be viewed from closer up than the normal optimum distance. For example, wall-sized displays will comprise multiple smaller panels, each of which can function as independent TVs, in which case they will sometimes be viewed from closer range and require smaller picture elements than they otherwise would. Further to that, these large screens will enable immersive viewing where the horizontal viewing angle  will be much greater than the current 30° limit. That is why we will need ultra HD.

It may well be, though, that we will not need to go much further, and there may never be a call for the next level up, which is 8K at 7680 x 4320. Or, certainly never beyond that for viewing on two-dimensional screens. But, even then, the bandwidth implications are considerable, and, as the EBU has pointed out, often misunderstood. Even without upping the frame rate, ultra HD at 4K generates 4X as much data as 1080p, which, in turn, is double 720p or 1080i. 8K brings another fourfold increase again, and, if combined with a frame rate of 300 fps as may come to pass in a decade or more, the bandwidth consumed would be 192X greater than current HD services. That is why the EBU talks of the dramatic financial impact of "Beyond HD," which, therefore, will have to be plotted carefully. It is true that for distribution, we have the emerging HEVC, but that will only bring an immediate 50 percent or so improvement in encoding efficiency over H.264. So, while welcome, this will merely provide mild pain relief for congested networks.
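
The 192X figure follows from stacking up those multipliers (assuming current HD services run at 50 frames or fields per second, consistent with the article’s 50 fps figure for 1080p):

```latex
\underbrace{2}_{\text{1080p vs. HD}} \times \underbrace{4}_{\text{4K}} \times \underbrace{4}_{\text{8K}} \times \underbrace{\tfrac{300}{50}}_{\text{frame rate}} = 192
```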

On top of that, there is scope for increasing the range of colors in line with the higher resolution, and generally the bit depth for each pixel, which would rack up bandwidth further. There is also 3-D (another subject altogether for a further blog perhaps), and then, finally, the EBU talks about “beyond stereo."

There the mind really boggles.

By Philip Hunter, Broadcast Engineering

Sensor Size Comparison Chart



By Jon Fry, Creative Video

MXF and AAF

Many broadcasters know about MXF, and they have heard of things such as MXF for Finished Programs (AS-03) and MXF for Commercials (AS-12). But, this month, I want to focus on MXF’s bigger brother, AAF.

In the mid-1990s, a joint task force was created by SMPTE and the EBU. The purpose was to address the impending flood of digital video proposals. There were a number of different, competing proposals on the table regarding compression, handling of metadata and the exchange of program material as files. Many in the industry were concerned that without a concerted effort, the market would fracture, leaving end users to sort it out. Fortunately, the task force produced a number of recommendations that later led to standards that have helped drive industry consensus about what constitutes interoperable digital video.

The remit of the task force went beyond coming up with recommendations for interoperable digital video formats, however; its final report carried the title “Harmonized Standards for the Exchange of Program Material as Bitstreams.” The group spent a significant amount of time working on something called wrappers. After spirited debate, it was decided that two classes of wrappers should be developed — one for broadcast and one for editing. The group felt that a single wrapper could not accommodate the differing needs of these two application areas. Ultimately, the wrapper for broadcast became MXF, and the wrapper for editing became AAF.

Before we delve fully into AAF, let’s talk about wrappers, what they do and why they are important.

Inside the Wrapper
When using professional digital video, specifically SDI video, the relationship between video and audio is set in the standard. SMPTE 259M, and later SMPTE 292M for HD, not only specified how video and audio should be streamed, but also where additional information such as subtitles and timecode should be carried. Other standards specify exactly how this additional data should be formatted. For manufacturers and users, the world was relatively simple; a stream arrived and consisted of interleaved pieces of video and audio, along with some “essence data” such as subtitles.

But, in a file-based world, there were many possible ways to exchange the same program material contained in the SDI stream. Should you send a video file, followed by a separate audio file, followed by a data file that told how to play back the two files in sync? Should you send a single file with everything in the same file? Should the video be kept separate from the audio, or should it be mixed together, as in an SDI stream? Where do you put the all-important timecode? And, how do you relate the timecode to the video and audio to which it refers?

Those were just some of the questions that surfaced when we looked at transporting programs as bit streams. The wrapper gave us a way to describe how the video, audio, subtitles, timecode and other “essence” should be packaged together in order to be sent from one place to another. A wrapper can contain video, audio and data essence (subtitles). The concept of a wrapper as a way to organize essence is common in both MXF and AAF. This is a simplistic drawing, but it gives you the idea that the wrapper contains video, audio and data, along with identifiers that are used to keep track of each essence component.


A wrapper can contain video, audio and data essence such as subtitles


The reason I say that this drawing is simplistic is that there are further definitions within MXF itself that constrain the possible arrangements of essence within the file. For example, MXF OP-Atom requires that only a single essence component be included in a file. In other words, an MXF OP-Atom file contains only a single video element or a single audio element. MXF OP-1A allows the combining of video and audio in a single wrapper. But, again, how are the essence types laid out in the file?

Some MXF files contain video as a separate entity from audio. Others contain interleaved video and audio, meaning there is a single essence file that contains a bit of video, followed by a bit of audio channel 1, followed by a bit of audio channel 2, and then back to video again. There is also room in the interleaved file for subtitles.

One last important point: MXF and AAF specifically allow someone who receives the file to understand how video, audio and essence data all relate on a timeline. Of course, this is critical to working with professional video.
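
To make the wrapper idea concrete, here is a conceptual sketch in TypeScript. It is not the actual MXF or AAF data model (both are far richer) and every name is invented for illustration; it only captures the notion of identified essence tracks related on a common timeline, with essence either embedded in the wrapper or referenced externally.

```typescript
// Conceptual illustration only – not the real MXF/AAF object model.
interface EssenceTrack {
  id: string;                        // identifier used to keep track of the component
  kind: "video" | "audio" | "data";  // data essence covers things like subtitles
  payload?: Uint8Array;              // essence embedded in the wrapper (typical for MXF)
  externalRef?: string;              // or a pointer to essence stored elsewhere (common in AAF)
  startTimecode: string;             // e.g. "10:00:00:00" – relates the track to the timeline
}

interface Wrapper {
  editRate: number;                  // frames per second of the shared timeline
  tracks: EssenceTrack[];
}
```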


The Need for AAF
If MXF contains many different possible layouts, you may be wondering why there is a need for AAF at all. The reason is fairly simple. AAF allows you to wrap up many different tracks of video, audio and data essence, and describe how these different tracks relate to each other. Think about layering or compositing in an edit application. For those not familiar with this idea, layering allows an editor to superimpose one layer of video, say an animated graphic or a window of video, on top of another base layer. Compositing is a common application in editing, but it is not common in the on-air environment, where users generally are playing back finished program material.

An AAF file could contain a base layer consisting of a video and two audio tracks, a compositing layer consisting of another video and two audio tracks, plus some editing metadata that instructs how to manipulate these two pieces of video to produce the finished result.



 
An AAF file could contain a base layer, a compositing layer and some editing metadata


Another difference between AAF and MXF is a result of what I stated above — that MXF is primarily intended to be used in an on-air environment, while AAF is generally intended to be used for editing. It is a common user requirement that MXF files be complete and ready to play at any time, so many (but not all) MXF files contain video and audio inside the file. By contrast, it is not uncommon to find an AAF file that contains only references to external essence. In other words, the AAF file is small and only contains metadata, including pointers to the actual essence files and other metadata with instructions regarding what to do with them.

So, why is there this fundamental difference between AAF files and MXF files? Well, imagine you are in an on-air playout situation. The last thing you want to find out just as you are going to air (or after you have pressed the “play” button) is that an audio or video track cannot be located on a remote storage device. Since MXF files need to play when the “play” button is pressed, it makes a lot of sense to package video and audio inside a file.

By contrast, think about a typical broadcast promo. If your station produces local promos, you might be surprised to find out how many separate video, audio, graphics, subtitle and descriptive video elements go into a simple 15-second promo. Now, imagine a half-hour, multi-camera, pre-produced news program. This edit project could have more than 1000 (several thousand, actually) individual elements associated with it. It is likely to be more efficient to organize separately the different elements going into this program on disk, perhaps using shared storage. An AAF file could then be a lightweight, metadata-only file, with pointers to the actual content stored on shared storage. This is a common way to build a professional video editor.

So, AAF and MXF may be used differently, depending on the application. For finished programming, use MXF. For an edit environment, or an environment where you need to describe the relationship between a number of separate elements, use AAF.

Lastly, AAF and MXF are interchange formats, meant to be the lowest common denominator for getting content from one system to another. Inside a given system, you will often not find AAF or MXF at all (with some exceptions). Also, since these are baseline formats, it is common to find capability-adding extensions, but these extensions can hamper interoperability.

By Brad Gilmer, Broadcast Engineering

Content Preparation for Adaptive-Bit-Rate Video

Today’s media landscape is radically more diverse than just a few years ago. The delivery of consistently acceptable image and sound quality is taken for granted by viewers, despite uncertain or fluctuating bandwidth. Adaptive-Bit-Rate (ABR) streaming technology makes this possible.

What is ABR Streaming?
ABR streaming is a delivery technology designed to provide consistent, high-quality viewing in situations where bandwidth may fluctuate, and where viewers may be on a wide range of devices.

Prior to ABR streaming, Web or mobile video delivery was typically done by encoding a single downloadable file or stream at a fixed bit rate and frame size. Viewers could buffer some of the video, and then simultaneously download and play it back. This delivery model was similar to cable transmission, where a single bit rate is transmitted over a reliable medium.

Unfortunately, transmission media for Web and mobile devices are unreliable, and bandwidths vary. During fixed-rate video playback, viewers with low bandwidth suffer from excessive buffering (delaying playback). To compensate, providers have tended to encode at lower bit rates, punishing viewers with high bandwidth. Even then, any fluctuations in bandwidth can cause buffering delays.

To solve this problem, ABR streaming content is encoded into multiple layers, each potentially a different bit rate, frame size and/or frame rate. These layers are combined into a single package that represents the original content. ABR players switch between layers depending upon the device and available bandwidth, to ensure consistent high-quality playback.

For example, a single ABR package might include six layers, each encoded at progressively higher bit rates. As a viewer watches content on his/her mobile phone during a train ride, the player will adaptively switch between low bit rates and high bit rates, depending upon the connectivity of the device.

How Does it Work?
Most ABR streaming technologies use standard Web protocol (HTTP delivery) to send video. This offers advantages over specialized streaming protocols such as RTSP or RTP, as HTTP-based delivery works immediately on Internet networks and can take advantage of edge technologies designed to cache HTTP requests.

During playback, video and audio are delivered via HTTP in small fragments, each representing some small amount of video, typically between 2 and 10 seconds in length. Each content package includes multiple layers, and each layer may include many fragments. For example, an hour-long movie may have 12 layers, each with a thousand fragments. The player is provided with a package manifest file outlining which layers are available and the location of the fragments for each layer.

During playback, the player requests and downloads a fragment from a layer. While the fragment is played, the connection speed is monitored, and the player may opt to switch layers, either increasing or decreasing the video bit rate based upon the connection speed. Players may also choose layers with different frame sizes or frame rates to optimize the visual experience for the device. This adaptive behavior is what ensures consistent playback regardless of connection speed or device.
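
The adaptation logic itself is simple in principle. The sketch below is a minimal, generic illustration of the idea (measure throughput while downloading, then pick the highest layer that comfortably fits), not the heuristic of any particular HLS, HDS, Smooth Streaming or DASH player; the layer fields and margins are invented for the example.

```typescript
// Hypothetical layer description; real manifests (HLS, HDS, Smooth, DASH) carry equivalent data.
interface Layer {
  bitrateKbps: number;
  width: number;
  height: number;
  fragmentUrls: string[];
}

// Download one fragment and report the observed throughput.
async function fetchFragment(url: string): Promise<{ data: ArrayBuffer; kbps: number }> {
  const start = performance.now();
  const data = await (await fetch(url)).arrayBuffer();
  const seconds = (performance.now() - start) / 1000;
  return { data, kbps: (data.byteLength * 8) / 1000 / seconds };
}

// Pick the highest layer whose bit rate fits within the measured throughput,
// with a safety margin so small fluctuations do not trigger rebuffering.
function chooseLayer(layers: Layer[], measuredKbps: number, margin = 0.8): Layer {
  const sorted = [...layers].sort((a, b) => a.bitrateKbps - b.bitrateKbps);
  let pick = sorted[0];
  for (const layer of sorted) {
    if (layer.bitrateKbps <= measuredKbps * margin) pick = layer;
  }
  return pick;
}
```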

There are several different ABR streaming technologies available: Apple HTTP Live Streaming (HLS), Adobe HTTP Dynamic Streaming (HDS), Microsoft Smooth Streaming (MSS), and more recently MPEG Dynamic Adaptive Streaming over HTTP (MPEG-DASH). Each technology requires a complete ecosystem. The content must be prepared correctly, and the correct player must be used. All of the technologies work fundamentally in the same manner, using HTTP for content delivery in fragments.

Where these technologies differ is largely related to the structure of the underlying packages. For example, HLS for older versions of iOS requires a separate file for each video fragment. In contrast, most other packages store fragments for a layer in a single file, allowing the player to download fragments using HTTP byte range requests, which download a small part of a larger file.
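
For packages that store a whole layer in one file, a byte-range request is simply an HTTP Range header; the player asks for the byte span of the fragment it wants (the URL and offsets below are made up for illustration):

```typescript
// Fetch one fragment of a single-file layer with an HTTP byte-range request.
// The byte offsets for each fragment come from the package manifest.
async function fetchFragmentByRange(url: string, firstByte: number, lastByte: number): Promise<ArrayBuffer> {
  const response = await fetch(url, { headers: { Range: `bytes=${firstByte}-${lastByte}` } });
  if (response.status !== 206) {
    throw new Error("Server did not return Partial Content for the Range request");
  }
  return response.arrayBuffer();
}

// e.g. fetchFragmentByRange("https://cdn.example.com/movie_2400k.ismv", 0, 1_048_575);
```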

Other differences in ABR technology relate to the viewer experience. Apple HLS, for example, provides for a dedicated key frame layer, allowing users to scrub through the video quickly. Other packages allow an audio-only stream with a poster image for extreme low-bit-rate situations.

Preparing Content
Preparing ABR content takes several steps. First, the desired packaging and layer structures need to be identified. Next, content must be encoded, checked for quality, packaged, encrypted and delivered.



ABR production workflow


Choosing Packaging and Profiles
Packaging choice is generally driven by what devices must be supported. Not every device supports players for every type of ABR streaming technology. As a result, one should catalogue both the devices and the players that will be supported. The necessary packaging will naturally become apparent as a result.

The selection of optimal bit rates, frame sizes and frame rates will vary depending upon device types, connection types and encoding technology. Apple and Adobe provide excellent starting points with suggested profiles suitable for their ecosystems. However, practically speaking, the entire catalogue of devices, expected network connections and network costs must be considered when designing layers.

With these considerations, layer design is a balancing act between frame size, bit rate and quality. However, the actual encoding technology used may have the biggest effect upon quality. For example, one study performed by the MSU Graphics & Media Lab showed that the use of x264 encoding technology saved necessary bit rates by as much as 50 percent compared to other H.264 encoding technologies at the same quality level. As a result, it is recommended that layers be designed while performing actual encoding tests with the final encoding technology.

Most packages, however, generally contain between 16 and 24 layers. Part of layer design will require a reduction in the number of layers. It is best to select a few common native display frame sizes (such as 1080p) and then encode multiple bit rates to those frame sizes. Doing so will avoid unnecessary performance degradation on players that use software scaling (particularly important for Adobe HDS).
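
As a purely illustrative example of that advice (these numbers are mine, not Apple’s, Adobe’s or any vendor’s recommendation, and should be validated with real encoding tests), a ladder might reuse a few native frame sizes at more than one bit rate:

```typescript
// Illustrative ABR ladder only – tune against actual encoding tests and target devices.
const exampleLadder = [
  { width: 416,  height: 234,  fps: 25, videoKbps: 200  },
  { width: 640,  height: 360,  fps: 25, videoKbps: 600  },
  { width: 1280, height: 720,  fps: 25, videoKbps: 1500 },
  { width: 1280, height: 720,  fps: 25, videoKbps: 2500 },
  { width: 1920, height: 1080, fps: 25, videoKbps: 4000 },
  { width: 1920, height: 1080, fps: 25, videoKbps: 6000 },
];
```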

Encoding, Packaging, Delivery and DRM
Each layer will require that a complete H.264 stream be encoded. With 16 to 24 layers, encoding an ABR package can easily require 20 times the processing power needed for a single H.264 stream. Fortunately, highly parallelized multirate H.264 encoding technology exists that re-uses information across the different streams. When combined with GPU acceleration, today’s encoding systems can offer 10 or 20 times the speed of CPU-based systems.

When preparing for multiple devices, an important aspect of encoding is transmuxing, the ability to re-use encoded H.264 streams across multiple package types. This prevents having to re-encode the same bit rates simply to package the video differently.

With on-demand content, it is important to perform QC checks on the different video streams. QC may be performed visually or by using automated tools that measure quality across all of the streams.

On-demand content often requires user authentication and protection prior to playback, which requires Digital Rights Management (DRM). When using DRM, the video must be encrypted during packaging, typically using AES 128-bit encryption. DRM systems typically have subtle requirements for how the encryption is performed by the encoder or packager, and it is important to validate that the two are compatible.
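
At its core, the encryption step is symmetric-key encryption of each fragment or stream. The sketch below shows plain AES-128-CBC over a fragment buffer using Node.js crypto; it only illustrates the mechanics, since real DRM systems dictate key delivery, IV derivation and padding, and the packager must follow their specification exactly.

```typescript
import { createCipheriv, randomBytes } from "crypto";

// Illustrative only: encrypt one fragment with AES-128 in CBC mode.
// A real DRM/packaging workflow defines how keys and IVs are generated and delivered.
function encryptFragment(fragment: Buffer, key: Buffer, iv: Buffer): Buffer {
  const cipher = createCipheriv("aes-128-cbc", key, iv); // 16-byte key, 16-byte IV
  return Buffer.concat([cipher.update(fragment), cipher.final()]);
}

const key = randomBytes(16);
const iv = randomBytes(16);
// const encrypted = encryptFragment(fragmentBuffer, key, iv);
```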

Finally, content delivery will be performed, either as a compressed TAR file or in the native package form. Where possible, it is recommended that the entire production process (ingest, encoding, transmuxing, packaging, quality control, encryption and delivery) be combined into a single automated workflow. Manual steps will significantly slow production time and may result in errors. It is also recommended that the ABR production process be combined with non-ABR production into a single automated system. This reduces system maintenance costs, offers a single view into the overall content production for all distribution channels and allows workflow efficiencies such as unified metadata preparation and content preprocessing.

Conclusion
Preparing video for ABR streaming generally requires research up front to choose technologies and encoding profiles, and a well-integrated, accelerated encoding approach to ensure workflow efficiency. With today’s tools, it is possible to fully automate the ABR content production workflow with full integration into existing content preparation and delivery workflows.

By John Pallett, Broadcast Engineering

Lytro Unveils Perspective Shift and Living Filters

Lytro, creator of the world’s first consumer light field camera, announced it will unveil a new light field technology capability for the Lytro camera with Perspective Shift as well as new creative tools with Living Filters. These features will be available to customers starting December 4th via a free Lytro Desktop software update.

Perspective Shift lets Lytro photographers interactively change the point of view in a picture after it has been taken. On a computer or mobile device, viewers can move the living picture in any direction – left, right, up, down and all around.

When pictures are shared to the web, Facebook and Twitter, friends can experience Perspective Shift without needing any special software. Perspective Shift also works retroactively on any light field pictures previously taken with a Lytro camera.



In addition to Perspective Shift, Lytro announced a new way to enhance light field pictures with Living Filters. With a single click, Lytro photographers will be able to apply one of nine interactive filters to their pictures and change the look of the picture based on light field depth. Unlike traditional digital photo filters, Living Filters create additional effects as viewers interact with a picture. Living Filters is a free update to Lytro Desktop and works on all Lytro light field pictures, including retroactively.

New Living Filters:

  • Carnival: Twist and distort your picture as you refocus and change perspective as if you’re in a funhouse of mirrors.

  • Crayon: Add a touch of color to a monochrome version of your picture. Click to focus and add color into your scene, or change your perspective and add color back into your scene as you explore.

  • Glass: Put a sheet of virtual glass into your scene. Everything in front of where you click will be unchanged, and everything behind will appear to be behind a piece of frosted glass.

  • Line Art: Reduce your scene to a grayscale outline, seeing more detailed lines where you refocus.

  • Mosaic: Create a tiled mosaic in the out-of-focus parts of your scene as you click or change your perspective.

  • Blur+: Significantly enhance the amount of blur in the out-of-focus parts of your scene.

  • Pop: Make parts of your scene pop out with extra detail and vibrancy when those areas are clicked.

  • Film Noir: Add a moody and stylized black and white look to your pictures, with a little bit of extra detail and color where you click.

  • 8-Track: Bring back the ‘70s with this filter that adds an aged, vignetted look to your pictures. Click to un-age parts of your scene and see them come back to life.





If you’d like to know more about how Living Filters work as well as future possibilities, read this whitepaper about the science inside.


Source: Lytro

How to Pick the Best DSLR Lens




By Kevin Good, Weapons of Mass Production

NEC StarPixel Codec Delivers JPEG2000 Quality at JPEG Compression Speeds





By Don Kennedy and Ryo Osuga, DigInfo

DASH-JS with ISO Base Media File Format Support

DASH-JS has been updated to the latest version of the Media Source API and now supports ISO Base Media File Format (ISOBMFF)-based media segments. The latest version (v.23) of Google Chrome now supports the Media Source API by default, enabling the playback of WebM and ISOBMFF media segments.
DASH-JS is a seamless integration of Dynamic Adaptive Streaming over HTTP (DASH) into the Web using the HTML5 video element. It is written in JavaScript and uses the Media Source API of Google’s Chrome browser to provide a flexible and potentially browser-independent DASH player.
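
For readers who have not used the Media Source API, the sketch below shows the basic flow a DASH player builds on: create a MediaSource, attach it to a video element, open a SourceBuffer with the codec of the segments, then fetch and append an initialization segment followed by media segments. It is written against today’s unprefixed Media Source Extensions API (Chrome 23 in 2012 exposed a prefixed variant with slightly different calls), and the segment names and codec string are placeholders.

```typescript
const video = document.querySelector("video")!;
const mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener("sourceopen", async () => {
  // The MIME type and codecs string must match the ISOBMFF segments being appended.
  const buffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.64001e"');

  // Initialization segment first, then media segments in playback order.
  for (const url of ["init.mp4", "segment-1.m4s", "segment-2.m4s"]) {
    const data = await fetch(url).then((r) => r.arrayBuffer());
    await new Promise<void>((resolve) => {
      buffer.addEventListener("updateend", () => resolve(), { once: true });
      buffer.appendBuffer(data);
    });
  }
});
```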

By Stefan Lederer, ITEC

Blackmagic Cinema Camera Review – DSLR killer?

The Blackmagic Cinema Camera is the most Apple-like camera I have ever used, in fact the only one. Like the iPhone it stands out on the market as better than the mass market clones, at least on paper. Is it worth the wait?

When the Blackmagic Cinema Camera appeared out of nowhere back in April the first thing I thought was “finally a camera designed from scratch for filmmakers that does not cost $15,000” and my second thought was “this is the future, better pre-order it quick”.

The short wait to a July release date now seems like a different era altogether but if ever there was a camera worth being patient over, this is it.

Despite being a niche product, at $3000 the Blackmagic Cinema Camera has Apple-like mass market potential. Apple themselves have seemingly embraced it as a flagship product for Thunderbolt, and earlier this year during a major product launch the Blackmagic Cinema Camera was pictured on a keynote slide hooked up to the new MacBook Pro via Thunderbolt. In the same way musicians now use affordable software to mix and record, cinematographers have the Blackmagic Cinema Camera.


DSLR Killer
They also have DSLRs of course, and these are readily available at a huge range of specs and price ranges. Does the Blackmagic Cinema Camera beat them on the image quality front? The short answer is yes.

Even in low light it is a performer, which surprised me. This is genuinely a mini Alexa and going back to compressed 8bit 1080p afterwards with less detail, a less organic look, blotchy noise and all those burnt highlights and crushed blacks really is shocking the first time you compare a DSLR to the Blackmagic Cinema Camera. This is a big subject and I am doing a very broad shootout in Berlin with Slashcam at the moment which will be online in a few days. I can’t cover everything at once in one review so I am spreading stuff out – low light, dynamic range, resolution – it will all become clear. I’ll round up my thoughts about image quality in this review, under the image quality section later in the post.






Disruptive
The BMCC has a minimalist approach to design. I’m a big fan of simplicity.

This camera is a disruptive piece of technology that moves things forward. Hopefully the big companies will follow its lead – not in 4 years, but now.

Technology has made cost effective digital cinema cameras a reality but not many companies have actually made one!

Panasonic have served us well with the filmmaker orientated GH3 ($1299) and before that the GH2 punching well above its price tag – but there’s no escaping that these are predominantly still photography cameras, rather than dedicated filmmaking tools. Sony with the FS100 (now $4000 used) comes close to ‘prosumer’ pricing but at launch it cost twice as much as the Blackmagic Cinema Camera. GoPro meanwhile are doing a superb job in the area of extreme sports filmmaking and also with their CineForm raw codec. Canon blazed a trail with DSLR video in the consumer camera business but then seemed to backtrack massively in order to focus on maintaining huge margins on Cinema EOS products. I’m concerned their DSLRs haven’t moved forward or innovated enough for filmmakers and they will lose sales.




 
Blackmagic Cinema Camera and Sony NEX VG-900


I think the competition for this camera is the Sony FS100, Panasonic GH3, Canon 5D Mark III and Nikon D800. Aside from the ill-fated Panasonic AF100 most other cinema cameras cost a minimum of $15,000, including the ‘bargain’ battle tested Red offerings when you add up all the bits. For the 1% of filmmakers, price isn’t a factor but for the 99% of us it is. The consumer market is larger than the professional market and Blackmagic Design have taken a good business decision with this low pricing.

Jim Jannard of Red once tried to make such a camera; the aim was 4K for $4K. Blackmagic Design, a post production company from Australia, until now had no camera division and no experience of making one. Yet here they are delivering a near-3K raw image for $3k where Red did not. Pretty impressive!

Grant Petty seems to be leading a very engineering orientated company at Blackmagic Design; they’re more nimble than giant corporate manufacturers and less political. Canon, Nikon and the rest of the big guys can really take a lesson from Blackmagic when it comes to cinema and video. Already Blackmagic Design have responded to demand for a Micro Four Thirds mount version, no committee meetings or 2 year lead times… They just did it.

The Apple-like nature of the Blackmagic Cinema Camera (BMCC) is in a multitude of aspects. It shoots ProRes, the native editing format of Final Cut Pro. The build quality is up to Apple standards. In fact there’s no plastic on the body at all. Rubber is used where plastic would be on a DSLR and the chassis is really tough aluminium. Apple’s other major strength is software and the software side on this product is unbelievably good. You will not be getting an application like Blackmagic DaVinci Resolve with your DSLR any time soon. Silkypix anyone? Canon EOS Utility? Blackmagic’s great strength is not just in hardware but in software. So important.

The large touch screen is more like a stand-alone monitor for a DSLR but built in. Much more practical than a tiny DSLR LCD but a shame it cannot be articulated to a different angle. It is slanted upwards for tripod use which is better than dead-on straight like the 5D Mark III’s screen but shots above eye-level are tricky. The other quirk is that on my EF Blackmagic camera I’ve had a few aperture-related bugs and the iris control system is more awkward than it should be. You hold the iris button and use the bottom back / forward playback keys to change the iris. Auto-iris is set upon a single press of the iris key, which usually gives you roughly the right exposure depending on shutter angle, etc. But I prefer to set exposure with NDs not aperture. Why not have dedicated up / down keys instead of the one iris button, or better still a jog wheel?




 
The Blackmagic Cinema Camera eats dynamic range for breakfast


A lot has been said of the Blackmagic Cinema Camera’s ergonomics being odd… But iris quirks aside, I don’t agree it is ergonomically poor. The extra weight over a DSLR really helps the look of handheld camera work. The controls have very positive feedback so you know that camera is reacting to input. It needs a rig where a DSLR or an FS100 needs a rig – where’s the drama? The boxy form factor isn’t a problem, rather an advantage because it is malleable to individual needs, it doesn’t box you in and the camera is very compact.

The screen is crisp and the double-tap magnified focus assist shows superb quality at 1:1 pixel level. Unfortunately with the current firmware it only magnifies the centre and you can’t drag it around on the screen like you can on the Panasonic GH3. This hopefully can be changed in a firmware update. There’s also focus peaking on the press of a single physical button and that tends to work well, and stays on in the magnified focus assist too. Generally the camera lets you get on with the shoot and it saves the complexity for post. Just what I want.

The user interface is very responsive and quick to use, with no nested layers of menus to scroll over or dig down through. Despite this simplicity and Steve Jobs-like minimalism on the surface, there’s a massive amount of power and depth to it as well. There’s no way I can cover everything there is about this camera in a review. There will be books written!


Latitude and 2.5K Raw
Shooting with the Blackmagic Cinema Camera, you have to throw the DSLR rule book out of the window. Until now I’ve tended to expose more with the highlights in mind, preventing any over exposure which results in these blowing out. The Blackmagic Cinema Camera goes deep into the highlights and I expose by bringing up the blacks and if parts of the scene are over exposed I just don’t worry that much. This is an extreme example but just look at the latitude this thing has in raw…





I assumed this shot was dead. No way to bring it back. I was wrong!

You basically get the full extent of ISO from 200 to 1600 in post… And more.

Imagine over exposing a DSLR shot at ISO 1600 which should have been done at ISO 200. You’d not be able to recover that later but with this camera you can.

All this dynamic range (in post!!) and the 2.5K resolution are wonderful to have but you absolutely cannot mess around on the post side if editing raw. You need the right hardware especially the right dedicated graphics solution and tons of hard drives if you plan to archive regular work in raw format.


Editing Hardware for Raw
At the bottom of the price scale you can get away with a $150 NVidia GeForce 560 Ti 1.5GB. Resolve doesn’t run well on my 2011 MacBook Pro / ATI Radeon even though the graphics card was cutting edge in the Apple line-up only last year. ATI cards use OpenCL and Resolve prefers NVidia CUDA to process video. (The solution for 2011 MacBook Pro users is to try Adobe SpeedGrade instead – this is a big subject and I will write more about this in a future article).

I’d happily shoot ProRes or Avid DNxHD on this camera because the image in 1080p is very good. Good detail, grades well from the flat film-mode profile. There’s also a Rec. 709 video gamma mode if you want punchy material straight out of the camera like on a DSLR rather than grading from a flat image profile.

The dilemma however is that 2.5K is a big step up from 1080p and you see this on a display which supports 2.5K like the Dell U2711 or new MacBook Pro Retina. It is hard to go back to old 1080p when you see this. 2.5K raw also upscales to 4K better than 1080p.

There’s no 2.5K ProRes mode so you have to transcode the 2.5K raw to 2.5K H.264 or CineForm in Resolve. Then you are left with a difficult decision – whether to dispose of the raw material or archive it. At 7GB per minute or 45 minutes per 240GB this isn’t a decision taken lightly.

If you’re a creative filmmaker doing one very important short film a year, this is less of a problem. If you’re a production house or a regular shooter churning stuff out in 1080p it makes absolutely no sense to shoot raw. But artists like Tom Lowe of Timescapes or myself love the challenges and the extra image quality is FAR more important to me – despite the slower workflow and insane space requirements.

Of course it is entirely up to you to decide which format is right for you and what kind of shooter you are. The camera gives you a choice! A lot of the fuss about both the ‘difficulty’ of shooting raw and whether it is needed should be entirely ignored. It isn’t difficult at all. You press a button, and your hard drive fills up! This situation will change as time progresses.

I need raw. I love raw. And so will a great many more users of this camera, especially professional photographers at the highest level who are used to a raw workflow in their stills day job and don’t want to compromise when it comes to motion. It can save your ass, speed up a shoot, save on the lighting complexity and costs, take the risk out of operating the camera and in the end save you possibly more money than it costs in hard drives!


Windows Rig for Raw
The most effective way to edit raw without throwing out your Mac or breaking the bank is to invest in a Windows PC. Just be aware that you will lose your hair after a fortnight of constant FAFFING about. Windows is not OS X. It never ‘just works’. You need to make sure drivers are updated and that the planets are aligned. If you buy a very fine Dell XPS second hand like I did only to discover it lacks built-in WiFi and the operating system is in German, all I can say is – be prepared for hell. Nothing is simple in Windows land. Those who like tinkering will understand and have probably long mastered this (as I did as a PC user 6 years ago) – but those from a predominantly Mac background will likely weep several times over unrelated-to-editing system tasks before their PC DaVinci Resolve editing rig finally works properly. Or even pairs with their Bluetooth keyboard. Or reads the BMCC SSD, because the camera only uses Mac-formatted drives and Windows… you’ve guessed it, needs a $50 driver for that! Check out MacDrive 9 Standard – works great.

The pain is probably worth it to most. To get the same performance on a Mac Pro as I do on my $999 Windows PC, I’d have to give a hard earned $5000 to Apple… AND have the hassle of swapping out the ATI graphics cards for NVidia with CUDA support. Truly crazy situation and something Apple really need to address. Their iMac GPUs are based on mobile models and their Mac Pros are rather overpriced. You really need a beefy desktop NVidia graphics card – the ones gamers use – to edit raw with Resolve on a Mac.


Resolve as an Editing Tool
DaVinci Resolve may conjure up thoughts of little understood Hollywood colourists and production hardware which wouldn’t look out of place on the bridge of USS Enterprise but it isn’t actually that complex. It even has a Final Cut Pro 7 style multi-track video timeline and editing functionality, so you can deliver the entire film from Resolve if you need to without further work in Premiere or Final Cut. I found myself very happy to use Resolve for editing raw as well as grading.

The only thing missing for me was multiple audio tracks, added plugins and features I use in Premiere, for slow motion, etc. You can simply render out the footage (with camera recorded audio) from Resolve’s timeline and fine tune the edit in the usual NLE package of your choice. With a suitable graphics card, the final rendering performance in Resolve is much faster than Premiere Pro CS5.5 delivering a DSLR timeline on the CPU – almost real-time 24fps in fact.


Image Quality
Strangely, I really don’t feel this camera has had the plaudits it deserves on the image quality front yet. Why not? It is utterly incredible.

Resolution in 1080p is up to the standard of the Canon C300 which is remarkable enough for a $3000 camera yet in 2.5K raw it exceeds this camera and almost every camera on the market aside from 4K models and the 2.8K Alexa. It isn’t just 2.5K, it is good 2.5K.

Also, for 1080p delivery ProRes is a better codec than on the Canon C300 and it also records in Avid DNxHD. Apple and Avid native editing right off the camera is a far superior solution to any so-called broadcast ready MPEG codec!

All this is huge stuff!! And $3000!




 


In terms of moire and aliasing it stands up well. On this fine brick pattern for instance, no real issues. In fact I haven’t had any real-world problems yet but that isn’t to say there aren’t any. There are some issues with micro-moire on very fine textures (very faint outlines of red or green pixels which don’t shimmer and can be removed in post) but you have to really pixel peep and it isn’t noticeable otherwise. If you remember the 5D Mark II and the way it used to give some false colour aliasing and false detail over very fine textures, this is similar but the output is giving far more detail to begin with and there’s not the same propensity to flare up in moire hell because the sensor isn’t downscaling like on a DSLR.

One of the biggest surprises I had was to have my trusty Panasonic GH2 throw up moire where the Blackmagic Cinema Camera had none. This happened whilst setting up the next test shot below. Now in real-world shooting outside of just tests and charts the GH2 hardly ever suffered from moire. I never had any real issues in all of the 2 years I was using the GH2!! It is pretty safe to assume this isn’t going to be a big problem on the Cinema Camera.



Smoother Gradation, More Film Like Image
The following shots aren’t a moire test, I plan to do one later this week on a chart with Slashcam.de.

These are to test gradation, banding, highlight fall off and colour.

Bear in mind these centre-crops are an 8bit JPEG on the web so it doesn’t really do the Blackmagic Cinema Camera proper justice but still the difference is clear.

All test shots are F2.8, ISO 800…




 
FS100 is banding hell in the highlights!


I have 2 remarks. First is that 2.5K does make a difference over 1080p – much nicer resolution and a finer grain of noise. Look at the silver lens barrel bottom right and the black grip on the Fuji X100. Secondly, with 12bit raw you really can have as gentle or as steep a roll off from the highs and lows as you want, all in post. Here I tried a very gentle slope, bringing the shadows up and recovering the highlight in the middle. On the FS100 shot believe it or not I did exactly the same and look what happened… It fell apart – badly. The highlight roll off is practically in 3 stages!! I wasn’t able to get it as flat or as organic looking. And I like the FS100!

The Blackmagic shot isn’t as punchy as the FS100 in this example nor is it meant to be. That punchy contrast is a shortcoming of the FS100. I graded for a low contrast, gentle roll off and the FS100 was not capable of giving me that. The Blackmagic stuff can be graded for a punchy high contrast look if you need it.

Keep in mind the above Blackmagic shot as we turn our attention to the GH2 and 5D Mark III.





Now the 5D Mark III does better than the FS100 in the highlights but the shadows are crushed and noisy. On the GH2, shadows and gradation can be weak-points and it shows pretty badly here. Trying to match it in post to the Blackmagic on this test was impossible. There’s some banding where smooth gradation should be and absolutely nothing there in the shadows despite my attempts to lift them. Exposure was identical on all shots but those blacks just were not recorded on the GH2. Look at the camera grip, it is jet black, the detail has gone. As with all the other cameras aside from the Blackmagic, highlights are also burnt on the GH2 and there’s a much harsher electronic look around the edge of objects next to the light source.

Even when converted to 8bit JPEG for the web, the Blackmagic is in a different league to the FS100 and the current cream of the crop DSLRs for handling of smooth gradation.





Now if you’re wondering what this has to do with a cinematic image, the key to this is how the eye sees. None of the distracting electronic artefacts you see in the strips of tones and shades above should be there. They’re not natural. Not cinematic. The Blackmagic Cinema Camera is more natural. When you want more contrast between bright and dark areas of the image you simply use a steeper curve in Resolve – and in doing that, the image responds beautifully. It doesn’t get a bad case of banding or weirdness and it doesn’t get burnt out.

The resulting full frame is smooth and cinematic, and looks more organic than the same shot on a DSLR.







Of course the bottom shot is from a DSLR.

When you chop all this stuff up and present it on a blog by the way, the differences are far less than when you see it on a job, or in a theatre, or in motion on a 2.5K display.

The verdict is clear – this camera is an extremely cinematic beast.

Next I have graded two shots the same in Resolve, to reveal optimal highlight and shadow detail just before the point where the noise in the shadows got too much or the highlights began to fall apart. Unfortunately on the 5D Mark III the highlights fall apart by default in-camera before grading, so easily do they burn out. The Blackmagic Cinema Camera here is all over the 5D Mark III producing a far more life-like image as the eye would see it, tons more latitude, crisp detailed blacks and none of the extremely sudden fall off in the highs and burnt out highlights.




 
The Blackmagic shot was graded in Resolve using the film dynamic range
and the cinema camera LUT applied


How much of the dynamic range is usable? To my eye – a lot. How good is it in low light? Actually very good. From my early impressions of low light, you have to boost a DSLR to ISO 6400 just to match the black level on the Blackmagic Cinema Camera at ISO 1600. In doing that you get far less dynamic range on the DSLR and any normal-light areas or specular highlights get burnt out. Not so on the Blackmagic.



 
Low light test at the equivalent of ISO 1600 on a DSLR, shot at 360 degrees shutter (1/25), F2.0


Sensor Size
The Blackmagic Cinema Camera has the same field of view as the Olympus OM-D E-M5 in video mode (with IS enabled you get a small further crop on that camera). The Blackmagic is 2.3x crop. Micro Four Thirds is 2x crop and the Panasonic GH2’s multi-aspect sensor is 1.86x crop in 16:9 allowing for a wider than 4/3 ratio sensor.

Canon APS-C is 1.6x, Super 35mm and the FS100 is 1.5x and full frame is of course 1.0x, no crop factor to consider.

I was able to match the field of view on the BMCC to my 5D Mark III, Sony FS100 and Panasonic GH2 by using the Canon EF 40mm F2.8 pancake on the Blackmagic, 90mm on the 5D Mark III, 58mm on the FS100 and 45mm on the GH2. The BMCC has noticeably less of a shallow depth of field at 40mm F2.8 than a full frame sensor enjoys at 90mm F2.8. To compensate you need to shoot from further back with a longer focal length and faster aperture, it also helps to bring your subject closer to the lens.
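
Those focal-length choices line up once you multiply by crop factor; a quick sketch, assuming angle of view simply scales with the crop factors listed above (and ignoring aspect-ratio differences):

```typescript
// Rough full-frame-equivalent focal lengths: focal length x crop factor.
const equivalent = (focalMm: number, cropFactor: number) => focalMm * cropFactor;

console.log(equivalent(40, 2.3));  // BMCC + 40mm pancake  ≈ 92mm
console.log(equivalent(58, 1.5));  // FS100 + 58mm         ≈ 87mm
console.log(equivalent(45, 1.86)); // GH2 + 45mm           ≈ 84mm
// ...all in the same ballpark as the 90mm used on the full frame 5D Mark III.
```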

Is this a deal breaker? No way.

This is precisely what I’ve been doing for the last 2-3 years on my Panasonic GH2. It is just that the shallow depth of field effect is more pervasive on the 5D Mark III. This can work against the image as well as for it. It isn’t always desirable to have. The Blackmagic Cinema Camera is certainly easier to focus. In low light where stopping down to F5.6 would kill the exposure on full frame, you can keep it at F2 and enjoy manageable focus.

I feel with any sensor smaller than 1″ (Nikon J1, Sony RX100) or smaller than Super 16mm you reach the point where the aesthetic drops off in a bad way. The Blackmagic Cinema Camera’s sensor is not at that point and can safely be considered ‘a large sensor camera’ in video terms.


Lenses
Here’s what I’ve been using so far on the Blackmagic Cinema Camera:
  • Tokina 11-16mm F2.8 – versatile wide angle on the Blackmagic camera, not that there are many other alternatives. Less distortion than the Sigma 8mm and faster aperture

  • Samyang 35mm F1.4 – a good standard lens, not too long. Sharp wide open and that fast aperture comes in very handy for both shallow DOF and low light

  • Canon 135mm F2.0L – not a cheap telephoto option but a very very good one

The new Canon EF STM 40mm F2.8 pancake works on the camera even though the focus ring is fly by wire. I’ve had no issues with it. Prefer the Samyang though! The pancake is far better on the 5D Mark III where you get a nice mild vignette wide open and a much more shallow depth of field.

The 11-16mm Tokina is equivalent to a 24-35mm wide angle on the 5D Mark III.



 
A wide angle isn’t a problem on the Blackmagic Cinema camera


What about the camera mount version – EF or MFT? The Canon EF mount is actually quite adaptable, you can use Contax Zeiss, Nikon F, M42, Leica R and Olympus OM. Your Canon EF and EF-S lenses won’t work on the Micro Four Thirds mount without an adapter which does aperture control electronically and there’s not really a decent one of these on the market yet in my opinion. What is enticing for me about the Micro Four Thirds mount Blackmagic camera to come in 2013 is that I can use OCT 19 anamorphic LOMO lenses on it, as well as PL and Leica M, plus Canon FD glass. To use my Voigtlander 25mm F0.95 and SLR Magic 12mm F1.6 on this camera would also be great for low light.

However, a Blackmagic Cinema Camera Mark II with an E-mount and a Super 35mm sensor would be even more desirable. The Metabones EF-to-E-mount adapter allows Canon glass to work pretty much perfectly on E-mount. That way you get the best of both worlds and no crop factor to consider beyond the normal cinema standard. You could use the same cinema lens on an Alexa and a BMCC Mark II and the field of view would be the same.


Conclusion
12bit raw might be the headline spec but it isn’t all the camera does well.

When people didn’t have 24p, frame rate was what made the fabled cinema look. When people didn’t have large sensors, shallow DOF was suddenly the cinema look. Now, inundated with large sensors and shallow DOF, some might consider a raw codec an essential part of the cinema look. All of this thinking is flawed. The cinema look is about carefully balancing every single aspect and minimising the weird stuff. Get rid of the compression, get rid of the banding and the stepped fall-off to highlights, increase the resolution, use a sharp lens, an intra-frame codec, 24p – the list is almost endless!

Thankfully the Blackmagic Cinema Camera does exactly that – minimising the weird stuff and making the most of the good stuff.

The Blackmagic Cinema Camera produces the most organic, least electronic-looking and most cinematic image ever seen outside of Arri, Red and Hollywood – and it costs just $3000. Extraordinary stuff.

Like the people at Arri, the design team at Blackmagic clearly knows what makes a cinematic image. None of the DSLRs have the same organic feel and lack of digital artefacts that the Blackmagic Cinema Camera is blessed with. Why can’t they give us that better codec? Why can’t they give us less moire, a finer grain of noise, more detail? Why can’t Canon, Sony and Nikon put high bitrates and intra-frame codecs in their cameras? The answer is that they have carved up the market into consumer/pro and stills/motion. Blackmagic are not carving up the market; rather, they are creating a niche for themselves. It just so happens that their niche outperforms the entire mass market put together.

It isn’t without pain though. The pain of waiting. The pain of investing in hardware to edit raw on. The decisions about rigging, batteries, SSDs. The pain of learning a raw workflow, of transcoding, rendering, of getting to grips with the admittedly very intuitive DaVinci Resolve for the first time. Actually, pain isn’t the right word. I’ve enjoyed all of this; it has been an adventure – but some may not.

If you don’t have a selection of lenses to suit the sensor size or lens mount, that could mean a considerable extra investment too – and a selection headache. For some, it won’t be worth the artistic gain in image quality.

This camera, and especially future versions of it, has the potential to challenge the big players in cinema cameras and DSLRs – but in my view the sensor supplier should be changed in light of the ‘dirty glass’ issues, and the CMOS updated to Super 35mm size. Aptina (supplier of the only good part of the Nikon J1 mirrorless camera) makes some interesting CMOS sensors and custom designs; it would be great to see Blackmagic partner with suppliers who can help them mass produce this camera without diluting the raw power it wields in terms of the image.

If I were to sum up the Blackmagic Cinema Camera, it would not be as a DSLR killer or a Canon C300 competitor. It is a completely new class of camera. It is a baby Arri Alexa. And there’s no higher artistic praise to bestow on a piece of camera hardware than that.


Pros
  • Cinematic overall output
  • For less than the price of a ready-to-shoot Scarlet it beats everything for resolution and dynamic range, including the Canon C300
  • Film like noise grain
  • Much more latitude in the highlights than a DSLR
  • Black detail can be pulled up more than on the FS100 and DSLRs
  • Very high build quality with no plastic used at all (rubber and metal)
  • DaVinci Resolve is a superb editing package and a colourist’s dream
  • Responsive in-camera playback of raw
  • Responsive touch-screen and user interface
  • Thunderbolt and HD-SDI, no wobbly HDMI. Robust SSD port and card door
  • Large screen negates the need for an external monitor or EVF in many situations
  • Straightforward, minimalist approach to the design of both software and hardware
  • Superb battery life with an external battery solution; the internal battery is useful as a back-up
  • Affordable media
  • Affordable raw editing with correct GPU on a PC
  • The camera has ‘soul’ unlike many mass produced products


Cons
  • Potentially large extra investment in lenses, hardware, etc. for some shooters
  • No built in ND filter
  • No Super 35mm sensor size
  • No HDMI port for lower end external monitor / EVF options
  • No global shutter mode; rolling shutter performance is not the best
  • Cinema DNG raw not as space efficient as GoPro CineForm compressed raw
  • No 2.5K recording option other than raw (2400 x 1350 80Mbit intra-frame H.264 would be a nice option for those who only do minimal grading; see the rough data-rate comparison after this list)
  • Screen not articulated (difficult to see from low angle when camera is above eye-level)
  • Narrow viewing angle of LCD panel compared to DSLRs (polarises quite easily)
  • Poor screen visibility in strong sun light
  • Electronic aperture control on EF lenses is fiddly – should be two buttons or a jog dial
  • Final packaging issues – debris inside the lens mount on some cameras shipped so far
  • Fluff and debris tends to cling to rubber on rear of camera and cannot easily be wiped clean
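
To put the space-efficiency point in perspective, here is a rough, back-of-the-envelope data-rate comparison using only the figures mentioned in the list above (uncompressed 12-bit frames at 2400 x 1350 and 24p versus a hypothetical 80 Mbit/s intra-frame codec). These are ballpark numbers only; real CinemaDNG recordings will differ somewhat, since the camera writes the full sensor area and the container adds overhead.

  // Ballpark storage comparison using the figures quoted in the cons list above.
  const width = 2400;
  const height = 1350;
  const bitsPerPixel = 12;   // 12-bit raw
  const fps = 24;

  const rawMbitPerSec = (width * height * bitsPerPixel * fps) / 1e6;  // ~933 Mbit/s
  const rawGbPerHour = (rawMbitPerSec / 8) * 3600 / 1000;             // ~420 GB per hour

  const h264MbitPerSec = 80;                                          // hypothetical intra-frame option
  const h264GbPerHour = (h264MbitPerSec / 8) * 3600 / 1000;           // ~36 GB per hour

  console.log({ rawMbitPerSec, rawGbPerHour, h264GbPerHour });

Uncompressed raw works out at more than ten times the storage of an 80 Mbit/s intra-frame codec for the same hour of footage, which is the gap a compressed raw format like GoPro CineForm is designed to close.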

Source: EOSHD

University of Bath Unveils Vector-Based Video Codec

The development of a vector-based video codec by the University of Bath points unerringly at the death of the pixel within the next five years, but the year-old founding group behind the project – Bath, Root6 Technology, Smoke & Mirrors and Ovation Data Services – knows that it cannot reach that target alone.

“We need to get many more companies involved to help us accelerate,” says Philip Willis, professor of Computing and director of the Centre for Digital Entertainment. “When this first phase is rolled out we will have two prongs – one is to get further funding for the additional research needed, but also right now to get the current stuff out there and to get people understanding the benefits of it.

“We need to get the core technology working. But there are application areas out there – for example on the web and on tablets and mobile phones - where we don’t have cover yet,” he adds. “Mainly we are working directly with post production and indeed technology companies like Root6, who support that kind of world. But we need to expand beyond that.

“The world is not standing still. There are different interests pulling in different ways, and unless we involve a good cross section of manufacturers and broadcasters we are not going to make any longer term impact,” he continues.

What precisely has the university patented so far? “So far it is the prior art before this project started – so that we could draw a line under that and work out what we are doing beyond that,” says Willis. “That core patent covers the whole business of representing images, contours, and methods – for generating those contours and then turning them back into video. It is so that you have got the complete codec as a piece of software.”


 


The technology that turned the wheel and brought vectors back into fashion is Vectorised Photographic Images (VPI), developed by Professor Willis and his long-time collaborator Dr John Patterson (formerly of Glasgow University, and now a visiting senior research fellow at the University of Bath). They first worked together (1992-5) on the ANIMAX project, and presented a joint paper on VPI at CVMP in 2009.

With vectorised images, the problem has always been how to fill in between the contours, but the Bath team has finally solved this with ‘double diffusion’. Another key factor is VSV (Vectorised Streaming Video), the video form of VPI. VSV is at a very early stage, and the main problem currently is formidable processing times.

For offline format conversion work this is not much of an issue, but realtime is a way off. However, VSV is readily ‘parallelisable’ and with enough cores and enough threads it could eventually be produced in realtime.

VPI and VSV are called ‘resolution-independent’ formats because images are modelled in continuous and not discrete terms. Users can sample as often as they wish and at as high a colour depth as required, all from the same image format.

“The method that we’ve got is independent of bit depth. We can go to whatever the industry needs. There is nothing that is hard-wired into that,” Willis says.
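
To make the resolution-independence idea concrete: because the image is stored as a continuous model rather than a pixel grid, rendering is just a matter of evaluating that model wherever samples are needed, at whatever bit depth the output requires. The sketch below is purely illustrative and is not the Bath VPI/VSV code – the toy image model stands in for the real contour-plus-diffusion representation – but it shows why one source can be rasterised at any resolution and colour depth:

  // Illustrative only: a continuous image model evaluated at arbitrary
  // resolution and bit depth. The model here is a stand-in, not Bath's VPI.
  type ContinuousImage = (u: number, v: number) => number; // u, v in [0, 1], returns 0..1

  // A toy "vector" image: a soft-edged disc defined analytically.
  const toyImage: ContinuousImage = (u, v) => {
    const d = Math.hypot(u - 0.5, v - 0.5);
    return Math.max(0, Math.min(1, (0.4 - d) * 10 + 0.5)); // smooth falloff at the edge
  };

  // Rasterise the same model at any width, height and bit depth.
  function rasterise(img: ContinuousImage, width: number, height: number, bitDepth: number): number[] {
    const maxValue = (1 << bitDepth) - 1;
    const out: number[] = new Array(width * height);
    for (let y = 0; y < height; y++) {
      for (let x = 0; x < width; x++) {
        // Sample at pixel centres; the model is continuous, so any grid works.
        const value = img((x + 0.5) / width, (y + 0.5) / height);
        out[y * width + x] = Math.round(value * maxValue);
      }
    }
    return out;
  }

  // The same source yields SD 8-bit or UHD 16-bit output; no "native" resolution
  // is baked into the format.
  const sd8bit = rasterise(toyImage, 720, 576, 8);
  const uhd16bit = rasterise(toyImage, 3840, 2160, 16);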

Asked about a future standardisation effort, he comments: “We do that when we have a critical mass of companies on board, because in this world most of the standards are either de facto ones driven by the companies themselves, or they go through the standards process because there is an industry group behind them.”

Willis has a full-time programmer who is creating the preliminary codec, and the project team has been working with video content supplied by Smoke & Mirrors so test results can be shown at the CVMP (Conference for Visual Media Production) in December (Vue Cinema).

Asked to predict the death date for pixels, Willis says: “That’s tempting fate. First of all, we need to do further work on some of the high level operations that you can do with these images. What we have at the moment is a codec. The industry needs to manipulate images, and we need to develop the technology that we can move on to them for doing that manipulation. And that’s when they can become seriously interested across the whole spectrum of things that the technology can address.”





Vectors: The Post Frontier
Root6 Technology is the bridge between the University of Bath academics and end users; its first tasks are to build a processing pipeline and to create a commercial application for the core vectorisation technology (codec). Was more sophistication required than when facilitating other codecs?

“Not necessarily. The idea is generally to try and wrap it such that it works in similar ways to other codecs, but we have been looking at improving the bit depth support of our existing pipeline up to 16 at least,” says Head of Software Development Nick Ridley.

“I am excited in terms of the fact that this is a completely different way of thinking about everything. There is a lot of work to do, but it is very much a future technology,” he adds.

“Our involvement in the project is pretty much ‘well there is no point having this thing unless you can actually demonstrate it,’ so we are looking forward to demonstrating the codec,” says Root6 MD Marcus Hume-Humphreys. “There are at least a couple of applications where this technology will really change the way people work. One of those, still an enormous nightmare, is in frame rate conversion.

“Having this technology available, as a mastering format from which to derive things without a process of standards conversion, is something that could potentially save Smoke & Mirrors and hundreds of other post houses an enormous amount of time,” he adds.

“With the prototype codec we think we are ready to produce some media for the golden eyes in the industry,” he continues. “We will have working demonstrations within the next 3-6 months. This is not a gradual migration. This will be kind of flick the switch. If we get it right it should see the way for the future for quite some time. But it is not something that is going to happen overnight.”

Smoke & Mirrors CTO Mark Wilding sees Bath’s codec as “a wicked pie”. It is spatial image resolution that he wants to exploit. “How much money have post houses spent trying to do image recognition with pixel-based frames? Possibly millions.

“It is not because we are not developing the right software. It is because we’ve got the wrong underlying technology. If we had a camera that recorded vectors onto a hard drive it would then give us all the image recognition stuff. It would all come out in the wash.

“It needs turning on its head. The money they are putting into trying to find patterns in pixels should be spent on developing vectors. The pixel-based products we need in post production have stagnated. We cannot do anything more to the pixel. This could be the real shake up.

“Imagine Autodesk Flame that worked in vectors, and all the amazing stuff we could start doing. It is just revolutionary. So bring it on,” he continues. “We want to see the death of the pixel.”

By George Jarrett, TVBEurope

Plugin-Free Video Chat via WebRTC Arrives in Chrome and Firefox

Millions of consumers will soon have access to the open real-time communications framework WebRTC, enabling them to make video calls in their browsers without the need for any additional plugin. Google added WebRTC to Chrome this week, and Mozilla included it in Firefox pre-beta builds.

WebRTC, the real-time communication framework that enables voice and video chat in the browser without the need for any plug-ins, is becoming more widely available to consumers. This week’s release of Chrome 23 comes with WebRTC on board, according to a post published Tuesday on the WebRTC blog. It reads, in part:

“It’s the biggest milestone yet… web developers can now offer Chrome users the ability to have live, high quality audio and video communication as part of their web experience.”
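
For web developers, the entry points are the getUserMedia capture API and the RTCPeerConnection API. The outline below uses the standardised, unprefixed names and leaves out the signalling channel, which WebRTC deliberately does not define; in the early Chrome and Firefox builds discussed here the calls were still vendor-prefixed (webkitGetUserMedia, mozGetUserMedia), and the STUN server URL and video element id are placeholders, so treat this as a sketch rather than copy-and-paste code:

  // Outline of a browser-to-browser video call with WebRTC.
  async function startCall(sendToPeer: (msg: object) => void) {
    // 1. Capture local audio and video.
    const localStream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });

    // 2. Create the peer connection and add our tracks to it.
    const pc = new RTCPeerConnection({ iceServers: [{ urls: "stun:stun.example.org" }] });
    localStream.getTracks().forEach(track => pc.addTrack(track, localStream));

    // 3. Trickle ICE candidates to the other side over your own signalling
    //    channel (WebSocket, XHR, etc.) – WebRTC does not define signalling.
    pc.onicecandidate = ev => {
      if (ev.candidate) sendToPeer({ candidate: ev.candidate });
    };

    // 4. Show the remote video when tracks arrive.
    pc.ontrack = ev => {
      const video = document.querySelector<HTMLVideoElement>("#remoteVideo");
      if (video) video.srcObject = ev.streams[0];
    };

    // 5. Create and send the offer; the answer comes back via signalling.
    const offer = await pc.createOffer();
    await pc.setLocalDescription(offer);
    sendToPeer({ sdp: pc.localDescription });

    return pc;
  }

The sendToPeer callback stands in for whatever signalling transport the application chooses – a WebSocket, long polling, or an existing messaging service.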

Google has been a big champion of WebRTC, open sourcing key components of the technology and more recently adding it to the beta version of its Chrome web browser. By adding WebRTC to the stable version of Chrome, which will be downloaded by millions of consumers, Google signaled that the technology is getting ready for prime time.

WebRTC also got another boost this week when Mozilla announced that it started to include the framework in the nightly and Aurora (pre-beta) builds of its Firefox web browser. And the technology got some real-world validation when the freshly-acquired video chat platform provider Tokbox released OpenTok on WebRTC, enabling developers to build WebRTC-based video chat applications that connect users on supported browsers with consumers using iOS devices. And web video chat provider Bistri added to the momentum by rolling out WebRTC-based video calling.

However, WebRTC still has some challenges to overcome before it becomes a universally adopted real-time standard. Among them is the selection of codecs. Google and Mozilla would love to see WebM become the default codec for browser-based video communication, but Microsoft is favoring a different approach that would leave the codec choice up to the individual developer, and Apple has been completely absent from the table.

By Janko Roettgers, GigaOM

The Case for a Common DRM Framework

It is possible to overstate the complexity of multi-screen video, but the absolute number and types of display devices are indeed increasing, which means that efforts to promote standards and greater simplicity address a live concern. A current initiative to enable Digital Rights Management (DRM) interoperability, playing out within the MPEG-DASH Industry Forum among other places, is a case in point.

First, however, a few words about complexity. As underscored in several recent stories, Apple’s iPad continues to dominate the second screen. Pay TV operators can meet much of the multi-screen demand simply by delivering to that one device, which stands atop a sort of multi-screen pecking order.

“In order to deploy Over-The-Top (OTT) services quickly, operators have strategically prioritized the CE devices that they wish to support, based on consumer popularity and market adoption,” writes Steve Tranter, VP at NDS (now part of Cisco) in a paper delivered during a TV Everywhere session at the SCTE Cable-Tec Expo in Orlando. “The most popular being the Apple iPad, followed by PCs, Android tablets and smart phones, game consoles and connected TVs.”

But even delivering to a single device would not eliminate complications. “Not only do you have iOS and Android fragmentation,” said Albert Lai, Innovation Architect for Media and Entertainment at Brightcove, during another session in Orlando, “but within iOS, there is fragmentation within the devices, and within Android, much more.” Thus the countervailing efforts to promote common platforms: HTML5 in the case of Brightcove; and a common DRM framework, in the case of NDS (Cisco) and others.

Building his case for a common downloadable DRM framework that is independent of but compatible with CE devices of all shapes and sizes, Tranter names three standards that could play a foundational role, namely:

  • Simulcrypt — the long-standing DVB protocol published by ETSI used to enable multiple key management systems;
  • MPEG-DASH — Dynamic Adaptive Streaming over HTTP (DASH), which became an ISO standard in late 2011;
  • UltraViolet — an authentication and cloud-based rights system deployed over the past few years by a consortium of studios, manufacturers and service providers.
Enhancements to Simulcrypt that Tranter believes would advance this cause include multiple encoder algorithms; in-band delivery that would move beyond proprietary manifest mechanisms; forensic watermark and key fingerprinting insertion; periodic key rotation on linear channels; and separate encryption keys for different bitrates. As for MPEG-DASH, he notes that while it provides a standard way for DRM systems to exchange encryption keys, it does not define how a system acquires decryption keys or distributes licenses. Finally, while Tranter leans toward local rather than centralized storage of licenses, which UltraViolet promotes, he sees value in the consortium’s common license format.

The end goal is giving Pay TV operators more control. “This is the key thing: You’re not relying on Apple or other devices to upgrade their security systems,” Tranter said during the session. “You can drive your own service portfolio.”

Reached for comment, Robin Wilson, VP Business Development at Nagra, seconded this effort to build bridges within the DRM ecosystem. “There is work well underway in the MPEG-DASH Industry Forum to come up with something analogous to Simulcrypt,” he said. The idea, proposed in one instance by Nagra and being discussed in one of the Forum’s sub-groups, is to refresh the ageing standard with updates and extensions so that the lowest-level key that works on any given stream can be shared between DRM systems.

Wilson said that UltraViolet is not trying to invent anything, but is looking at MPEG-DASH as the underlying standard, which could bring with it this ongoing work in interoperability. “But all is not totally rosy (with MPEG-DASH), because it may become too challenging to make all DRM or all file formats work together,” Wilson said.

Given enduring differences in implementation, it is likely that two branches—or what Wilson called two “half” or “partner” standards—will emerge from the overarching MPEG-DASH project. In terms of DRM, he said that even though the schemes falling within MPEG-DASH use AES encryption and 128-bit keys, they use different AES modes, which would make it hard to share keys and re-purpose streams.

“But at least the licensing and key servers will work together,” Wilson said.
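
The mode incompatibility Wilson describes is easy to demonstrate: even with an identical 128-bit content key, a stream encrypted in counter (CTR) mode cannot be consumed by a device that only implements cipher-block-chaining (CBC) mode, so sharing keys alone does not let the same encrypted stream be re-purposed. A small illustration using Node’s built-in crypto module follows; the key, IV and sample payload are made up for the example:

  // Same 128-bit key, same content, two AES modes: the ciphertexts differ,
  // so a device that only implements one mode cannot use the other's stream.
  // Key, IV and payload here are made up for illustration.
  import { createCipheriv, randomBytes } from "crypto";

  const key = randomBytes(16);        // one shared 128-bit content key
  const iv = randomBytes(16);
  const sample = Buffer.from("one video sample, notionally");

  function encrypt(mode: "aes-128-ctr" | "aes-128-cbc"): Buffer {
    const cipher = createCipheriv(mode, key, iv);
    return Buffer.concat([cipher.update(sample), cipher.final()]);
  }

  const ctr = encrypt("aes-128-ctr");
  const cbc = encrypt("aes-128-cbc");

  // Identical key and IV, different modes – the bytes on the wire do not match,
  // so key sharing alone is not enough to re-purpose an encrypted stream.
  console.log(ctr.equals(cbc)); // false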

So where does the 800-pound gorilla stand on these efforts? “Apple is helping around the sidelines on MPEG-DASH Industry Forum; they’re not driving it,” Wilson said. “They have their own vertically integrated ecosystem, so there’s no strong desire for them to go outside for a different DRM.”

“On the other hand, they are trying to help make HLS (HTTP Live Streaming) compatible with MPEG-DASH,” he said. “There is quite a lot of commonality between MPEG-DASH and how HLS works.”

By Jonathan Tombes, Videonet