Testing FIMS 1.0

The rapid pace of change in the media industry demands that leading media organizations implement efficient and agile digital media processing capabilities to control costs and capitalize quickly on new business opportunities. However, achieving the full promise of file-based media workflows and the efficiencies they bring requires seamless interaction among products from different vendors.

Therein lies the challenge: Getting disparate products to interoperate is like getting a group of individuals each speaking a different language to understand each other. The effort requires standards for more than just the words or the dialogue — i.e. the file formats. Seamless interaction also requires standardizing the way the dialogues are handled — i.e. the approaches to coordinating and connecting the components of media processing systems.

Most modern businesses leverage Service-Oriented Architectures (SOAs) to assemble agile business systems. Indeed, SOAs offer tremendous advantages for processing media for playout and distribution. A key element of SOA involves exposing system components as shared network-based services. Connections with services can then be created and reconfigured quickly to respond to changing requirements and demand.

The Framework for Interoperable Media Services (FIMS) was initiated as a joint project between the Advanced Media Workflow Association (AMWA) and the European Broadcasting Union (EBU) to create a common framework for applying SOA principles to processing media.

FIMS standardizes service interface definitions for implementing media-related operations in a SOA. As one example of how it can be used, a system that ingests and prepares content for playout and nonlinear delivery can leverage FIMS to interact flexibly with media capture, transfer and transform products from multiple vendors. Using FIMS, system components can easily be added, updated and removed in response to changing business requirements and demand.


The Problem
Workflows that move, process and store media using software-centric systems deployed on commodity IT infrastructure are now commonplace. Yet many of these software-based media processing systems suffer from problems endemic to the older hardware- and physical media-centric film and video processing systems they replaced.

Whenever unique media processing requirements must be addressed — and let’s face it, what broadcaster does not have a unique requirement of some sort — an expensive custom software integration project is needed to deliver bespoke, hardwired media processing silos.

Worse yet, many software-based systems use watch folders to hand off media between components of the system. While watch folders can allow multivendor interoperability and loose coupling, they create a whole new set of quality, reliability and management problems.


The Approach
FIMS focuses on the abstract service layer that connects applications and orchestration systems with services that perform operations on media such as capture, transform and transfer. FIMS addresses media-specific requirements associated with the resource-intensive nature of processing and storing media.



 
FIMS reference model


For example, FIMS accommodates long-running processes by incorporating an asynchronous calling pattern and resource prioritization through queuing. The FIMS 1.0 specification defines service interfaces for capturing, transforming and transferring media, with ongoing development efforts under way to define additional services.
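To make the asynchronous calling pattern concrete, the following minimal Python sketch shows a consumer submitting a long-running transform job and then either relying on a completion callback or polling for status. The endpoint paths and field names here are illustrative assumptions, not the normative FIMS 1.0 interface definitions.

```python
# Minimal sketch of an asynchronous job-submission pattern of the kind FIMS uses
# for long-running media operations. Endpoint paths and field names are
# hypothetical illustrations, not taken from the FIMS 1.0 WSDL/schema.
import time
import requests  # any HTTP client would do

TRANSFORM_SERVICE = "http://transform.example.com/fims/transform"  # hypothetical

def submit_transform_job(source_url, target_format):
    """Submit a job and return immediately with a job identifier."""
    job_request = {
        "bmObject": {"contentLocator": source_url},   # reference to media content
        "profile": {"format": target_format},
        "notifyAt": "http://consumer.example.com/callbacks/jobs",  # async callback
    }
    response = requests.post(TRANSFORM_SERVICE + "/jobs", json=job_request, timeout=10)
    response.raise_for_status()
    return response.json()["jobId"]  # the service queues the job and replies at once

def wait_for_job(job_id, poll_seconds=30):
    """Fallback polling loop for consumers that do not expose a callback endpoint."""
    while True:
        status = requests.get(f"{TRANSFORM_SERVICE}/jobs/{job_id}", timeout=10).json()
        if status["status"] in ("completed", "failed", "canceled"):
            return status
        time.sleep(poll_seconds)  # long-running transcodes: poll rather than block
```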

With FIMS, a workflow that receives media, transcodes it into the required formats and then transfers the result for linear playout and nonlinear distribution can easily be automated through standards-based service orchestration.

To those unfamiliar with SOA and Web services, the breadth of products and technologies marketed by big IT vendors can be intimidating and overshadow the underlying principles. The FIMS 1.0 specification avoids much of this complexity by defining a resource model as a central component of interaction between service providers and consumers.

The resource model approach places more emphasis on what is being communicated between the service consumer and provider than on how it is being communicated. This allows a neutral position on Web services technical debates such as those concerning SOAP versus Representational State Transfer (REST).

The main resources in the FIMS 1.0 model are Services, Jobs and Media Objects. Jobs are processed by Services to perform operations on Media Objects. In the context of the resource model, the Media Objects are formally called Business Media Objects. This qualification is made because these objects only contain the metadata relevant to the business operation being performed along with information about how to access actual media content.
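As a rough illustration of how these resources relate, the sketch below models them as plain data structures. The field names are assumptions chosen for readability; the normative definitions live in the FIMS XML schema.

```python
# Illustrative sketch of the three main FIMS 1.0 resources as Python data
# structures. Field names are assumptions, not the FIMS schema element names.
from dataclasses import dataclass, field
from typing import List

@dataclass
class BusinessMediaObject:
    """Metadata relevant to the business operation, plus a pointer to the essence."""
    bm_object_id: str
    content_locator: str          # where the actual media content can be accessed
    metadata: dict = field(default_factory=dict)

@dataclass
class Job:
    """A unit of work processed by a service against one or more media objects."""
    job_id: str
    operation: str                # e.g. "capture", "transform", "transfer"
    priority: int                 # used for queuing of resource-intensive work
    bm_objects: List[BusinessMediaObject] = field(default_factory=list)
    status: str = "queued"

@dataclass
class Service:
    """A network-exposed media service that accepts and processes jobs."""
    service_id: str
    service_type: str             # capture, transform or transfer in FIMS 1.0
    queue: List[Job] = field(default_factory=list)
```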

FIMS 1.0 also makes a clear distinction between Media Objects on the message bus and the media bus. The term bus is borrowed from computer architecture where a bus is a subsystem that transfers data between components of a system. The message bus is used to communicate information about media, including media processing instructions. The media bus is used to store, access and move master, mezzanine and finished media products. This model is analogous to the management of physical products where business systems track raw materials, components and inventory for associated real-world objects that are stored and transported using warehouses, ships, planes and trucks.


FIMS Test Harness Project
A practical approach to standardization requires more than object models and interface definitions to gain serious traction. Consequently, the FIMS Technical Board recognized the need for a tool to help FIMS implementers validate their implementations.

Signiant did as well, given the test-driven development methodology we use to develop software. Using such an approach, developers create and code tests to validate an implementation before coding it. This leads to higher quality products that can be evolved quickly.

Signiant initiated the FIMS Test Harness Project within the FIMS Technical Board and helped design and build a test harness for contribution to the FIMS effort. The ultimate goal of the test harness is to remove barriers to FIMS adoption and promote implementation of FIMS-compliant interfaces.



 
FIMS test harness components

 
Through discussion within the technical board, it was agreed that the test harness should consist of a set of service consumer and service provider simulators. Doing so facilitates testing of interaction between third-party products, which is a key benefit of FIMS.

The simulators allow FIMS implementers to independently test their service consumer or provider implementation. These independent tests give vendors confidence that their products will interact properly in real-world conditions.

Completely data-driven testing was another goal of the test harness project. As such, test cases and associated service behaviors are specified with test configuration data. Service consumer configurations specify a sequence of commands to invoke against a service provider. Service provider configurations specify how the service responds to specific commands with mock behavior. This approach helps ensure that no coding is required to use the test harness or to create new test scenarios.
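The sketch below suggests what such configuration data might look like in spirit: one block drives the consumer simulator through a command sequence, the other tells the provider simulator how to mock its replies. The actual harness is built on soapUI project files; the keys and operation names here are hypothetical.

```python
# Hypothetical, simplified view of data-driven test configuration. Only the
# consumer/provider split is meant to be representative, not the key names.

# Consumer side: an ordered sequence of commands to invoke against a provider.
consumer_config = {
    "service": "transform",
    "commands": [
        {"invoke": "createJob", "expect_status": "queued"},
        {"invoke": "queryJob", "expect_status": "running"},
        {"invoke": "cancelJob", "expect_status": "canceled"},
    ],
}

# Provider side: mock behavior keyed by incoming command, so standing up a
# simulated service for a new test scenario requires no coding.
provider_config = {
    "service": "transform",
    "mock_behavior": {
        "createJob": {"reply_status": "queued", "delay_seconds": 0},
        "queryJob": {"reply_status": "running"},
        "cancelJob": {"reply_status": "canceled"},
    },
}
```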

In addition, the test harness project sought to leverage an existing Web services test framework. Broad availability of standards-based testing tools is a significant benefit of a standards-based SOA approach. Through a series of proposals and discussions in the technical board, the decision was made to use the soapUI test framework.

SoapUI is an open-source Web service testing application for SOAs. The functionality of soapUI covers Web service inspection, invocation, simulation and mocking, functional testing, and compliance testing. As such, the soapUI framework covers most of the service consumer behavior, and new code was only required to implement mock service provider behavior.

Despite its name, the soapUI framework supports both SOAP and RESTful interactions between service providers and consumers. SOAP is a remote procedure call (RPC) standards-based approach to Web services that references a stack of related standards. The REST approach is conceptually simpler than SOAP and uses standard HTTP operations to interact with resources.

FIMS 1.0 formally defines SOAP bindings for services and, although the resource-based approach allows for RESTful interactions, work is ongoing in the FIMS Technical Board to further specify a REST interface. The test harness design allows a REST interface to be easily plugged in on top of existing mock service implementations.

A first version of the FIMS test harness has been contributed to the FIMS initiative for free use by the FIMS community. The FIMS test harness is an easy way to gain practical experience with FIMS, including working with FIMS calling patterns and the FIMS resource model. It provides valuable feedback to both vendors implementing FIMS and users wanting to start working with FIMS. The FIMS community is encouraged to utilize and help expand the test harness through the implementation of additional test cases and behaviors.


Toward Lower Costs, Greater Agility
The full promise of file-based workflows requires seamless interaction among products from multiple vendors. FIMS applies SOA principles to media to facilitate an important part of this interaction. The FIMS Test Harness Project provides a practical starting point for working with FIMS.

Adoption and implementation of FIMS brings the media industry a big step closer to file-based workflow’s promise of lowering costs and ultimately improving business agility.

By Ian Hamilton, Broadcast Engineering

Composition and Framing Tutorial




Source: LightsFilmSchool

The IP Revolution Reaches Playout

Television facilities are gradually becoming more IP/networking-centric in many areas. IP networking infrastructure has been built up throughout facilities primarily to handle the exchange of files from one storage device or file-based system to another. Files have been successfully navigating the IP domain within television and playout facilities for some time.

IP networking makes perfect sense for file-based processes, but what about real-time program streams? There are many areas where real-time or linear program streams are in use. Today, we commonly find IP networking used for real-time program streams on the edges of the facility, typically in the incoming feeds area or at the point of outbound transmission. At these points, the stream is encoded and compressed for transport. Take that farther, and here is the killer question: Could IP networking be used throughout a facility?


 
In many areas today, real-time or linear program streams are used, resembling a model like this one.


Headends Take an IP Plunge
Anyone unsure about the suitability of IP networking infrastructure to carry real-time program streams need only look to the radical transformation that has occurred in cable, DTH and IPTV headends in the past 10 years. In the beginning, these headends would accept a combination of compressed and uncompressed signals, and employed a combination of SDI and ASI infrastructures.

Now, these headends have transitioned to using IP networking infrastructures for several reasons. Namely, the signals they transport are now compressed when they arrive and remain so either all the way to the home or, for cable systems that still support analog tiers, are decoded close to the home where the signal is modulated to RF. But, why did TV delivery headends transition to IP?



 
Transition to IP infrastructure in headends.


The transition to IP was facilitated by the functional integration of devices, which led to the development and proliferation of compressed-domain splicer systems. Today, traditional MPEG decoders and encoders are being replaced by integrated systems that feature IP inputs and outputs, and combine ad and graphics insertion and transcoding from one compression format to another, or from one bit rate to another, inside a single box.

With these integrated transcoder systems, a signal can be demodulated at the entrance of the facility but stay IP until it hits the integrated processing chain. Here, it is internally transcoded, while allowing processing and insertions to take place, thus allowing routing to remain in the IP realm. This integration has allowed scaling to more channels. It also eases redundancy within a flexible, routable environment that is compatible with the telecom networking gear already in use there. Having switched the video infrastructure to IP, headends can share the same infrastructure for TV, telephony and data for Internet services.


Could IP Replace SDI?
In cable headends, the case was clear. But, what about a TV station or multichannel origination facility? Some in the industry, and many from outside the industry, think Ethernet/IP networking should be replacing SDI everywhere in a TV plant. Many think that the transition from SDI to Ethernet will happen gradually.

In considering this transition, we follow something of a common-sense rule that states, “If the real-time video signal is already encoded/compressed, then it is practical to carry it over an IP infrastructure. But, if you are inside a facility, and you have to encode a signal just to switch and distribute it within the facility, then it may be less practical or economical to use an IP infrastructure.”

Based on this rule, there are specific areas within a television production and origination facility where Ethernet/IP networking for real-time programs makes sense, and other areas where SDI/HD SDI makes more sense.



 
Use of compressed and uncompressed linear signal streams in a typical television broadcast/production facility.


Production
There are several reasons why an IP/Ethernet infrastructure may be less practical today in a live production environment. Many sources in a live production environment are not natively compressed (cameras, graphics, etc.), and there is a desire to keep them uncompressed to maximize quality and, more importantly, to avoid encode/decode delays.

One could consider leaving these sources uncompressed and routing them using Ethernet/IP equipment instead of SDI. But, the high bit rates associated with uncompressed video, and the large number of sources in a modern live production, make this impractical today.

The bit rates for uncompressed video are quite large, ranging from 270Mb/s for SD to 1.5Gb/s for HD and up to 12Gb/s for 4K. Now, if we were only talking about a few uncompressed sources, then using IP/Ethernet equipment would be workable. But, it is common to have 1000 x 1000 matrices in a large production facility. And, some larger facilities are now at 2000 x 2000 matrices. The Ethernet networking gear currently available is just not economically viable at those rates and fabric sizes, but it will become more feasible as the price per 10GigE port decreases.
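A quick back-of-envelope calculation, using the rates quoted above, shows why: the aggregate bandwidth of a large uncompressed matrix dwarfs what commodity Ethernet fabrics comfortably carry today. The port sizing below is purely illustrative.

```python
# Back-of-envelope arithmetic using the bit rates quoted above, to show why a
# large uncompressed fabric strains today's Ethernet gear. Figures are illustrative.
RATES_BPS = {"SD": 270e6, "HD": 1.5e9, "4K": 12e9}

def fabric_capacity_needed(sources, rate_bps):
    """Total input bandwidth an N x N matrix must switch, and 10GigE ports needed."""
    total_bps = sources * rate_bps
    ports_10gige = total_bps / 10e9
    return total_bps, ports_10gige

for label, rate in RATES_BPS.items():
    total, ports = fabric_capacity_needed(1000, rate)
    print(f"1000 x 1000 {label}: {total / 1e12:.2f} Tb/s in, ~{ports:.0f} 10GigE ports")

# HD alone: 1000 sources x 1.5 Gb/s = 1.5 Tb/s of ingest, i.e. roughly 150
# fully loaded 10GigE ports before any headroom or redundancy.
```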



 
Typical bit rates for uncompressed video can be large, ranging from 270Mb/s to 12Gb/s.


IP’s Next Use in a Facility
Based on our rule that if signals are compressed, then it is practical to use IP infrastructure, we think an IP networking infrastructure begins to make a lot of sense in the playout area. This is the area where automation systems, playout servers, branding, subtitle insertion and delivery encoding combine to deliver real-time program streams to increasingly diverse distribution platforms. Consider a typical multichannel playout system as it exists today in an SDI-centric world and a more IP-centric approach.



 
A multichannel playout system in an SDI-centric world.


Why Migrate Playout to IP?
Three main factors motivate migration to an IP infrastructure in playout. The first factor is integration. In the past, a playout chain consisted of many discrete devices, each interconnected by SDI and supported by SDI routing to allow reassignment of resources and redundancy protection.

Today, broadcasters, particularly multichannel broadcasters, use integrated playout systems, often called "channels in a box", that combine most of the functionality of a channel chain in a single device, typically software on a standard computer server.

These integrated systems accept one or two inputs for live programming but perform all other master control functions inside the box. The output of the integrated channel box typically is a delivery-ready real-time stream on SDI. A few vendors of these integrated channel boxes now provide an option for an encoded (compressed) output over IP.

The second factor enabling IP in playout is that multiple “deliverable” bit rates are increasingly required, as facilities need to feed secondary outputs for OTT formats along with primary outputs, such as main distribution systems using, for example, ATSC or DVB-T for terrestrial transmission or DVB-S for satellite transmission.

Where a standalone encoder is now used to compress the channel’s output into a single delivery stream, in the future, encoders will be replaced by transcoders that support IP inputs and outputs and provide multiple delivery formats. As in cable headends, these IP-in/IP-out transcoders enable the transition to IP infrastructure.

From a topology perspective, there is often a separation between playout and uplink or delivery functions. The availability of IP connectivity between those points creates a natural place to start migrating the playout-to-uplink interconnect to IP.

Within this context, it is now possible and sensible to link the integrated playout device to the transmission transcoding devices using an IP infrastructure. The output of the channel chain is encoded within the playout chain and delivered to the transmission transcoding device over IP. But, rather than have the playout device provide a final delivery grade encode, it provides a higher bit rate mezzanine encode.


Mezzanine Encoding
A mezzanine encode is a low-compression encode resulting in an intermediate bit rate. The ideal mezzanine rate is high enough to maintain maximum quality through multigeneration transcoding. At the same time, the rate should be low enough for efficient, cost-effective transport around the facility and for easy transcoding to the multiple output formats needed for playout distribution.

Preferably, it would be a common format already used for recording the original video and good enough for editing. One example would be Sony XDCAM HD. It is common, supports good-quality 4:2:2, 8-bit compression, and is easy and economical to encode. XDCAM HD is also compatible with MXF, allowing carriage of multiple audio tracks and SMPTE 436M ancillary data. In the context of the playout application, the mezzanine encoding technique is sensible because it keeps encoding in the integrated playout device simple and provides a maximum base quality for the transcoder to work with.
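Some simple arithmetic illustrates why a mezzanine rate matters for the playout-to-transcoder interconnect. The 50Mb/s figure below is an assumption in the range of an XDCAM HD 4:2:2-class encode; actual rates depend on the chosen profile.

```python
# Rough arithmetic contrasting mezzanine and uncompressed rates on a shared
# 10GigE link. The 50 Mb/s mezzanine rate is an assumption (XDCAM HD 4:2:2 class).
LINK_BPS = 10e9               # one 10GigE port
UNCOMPRESSED_HD_BPS = 1.5e9   # HD-SDI payload, per the table above
MEZZANINE_BPS = 50e6          # assumed mezzanine rate

print("Uncompressed HD channels per 10GigE:", int(LINK_BPS // UNCOMPRESSED_HD_BPS))  # ~6
print("Mezzanine HD channels per 10GigE:  ", int(LINK_BPS // MEZZANINE_BPS))         # ~200

# A light mezzanine encode keeps the playout device's encoder simple while
# letting a single commodity link carry a large multichannel lineup to the
# downstream delivery transcoders.
```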


Benefits and Challenges
The benefits of an IP networking structure are abundant. First, all signals become flexible and fully routable: every source signal is wrapped up and compatible with every destination. We are able to create a truly redundant architecture, which allows protection against points of failure. We are able to make use of generic IP switches, which are already present in many facilities and easy to acquire. Since the switches support IP inputs and outputs, they are easy to integrate with other IP systems, such as subtitle insertion devices.

Monitoring can be accomplished for many points with standard IP monitoring tools. Creating an IP infrastructure may eventually allow us to virtualize the playout device as a software application in a virtual machine, allowing several instances to run simultaneously and further enhance scalability.

There also will be challenges moving to an IP-based playout infrastructure, similar to those faced with the TV delivery transition to an IP infrastructure. If we think of the structure physically, it will become much more complex. Before, it was simple: one wire carried one signal to one port. Now, with potentially several signals per wire, routing becomes less obvious. Where exactly is each signal? Can I just take this wire and plug it in elsewhere? No. There is no simple way to just patch around a point of failure because a wire no longer carries a single signal. When something does fail, it fails big time.



 
Challenges exist when moving to an IP-based infrastructure like this one.


Luckily, IP networks have the ability to provide excellent redundancy to prevent a potential massive failure. IP network topologies can ease the creation of truly redundant architectures, which prevent many catastrophes. By simply combining two channel chains along with two switches, every routable path becomes redundant. A failure in one chain or one switch will not prevent the signal passage. Actually, we could lose both a switch and a channel chain and still be OK. This redundancy allows the broken node to be serviced without disrupting normal operations.

Managing signal routing also becomes more complex. With SDI routing, it is common to find simple router control panels that allow operators with basic training to quickly make changes to signal routing. In an IP routing environment, routing is typically a system administration task. Tools will have to be developed to make setting routes simpler and more operator-friendly.


Conclusion
In the future, as IP/networking technology evolves, Ethernet port speeds increase and port costs decrease, IP migration will naturally move to other areas. As technology becomes increasingly IP-centric, following this path will leave facilities prepared for whatever follows. In the meantime, now is the time to become more IP-centric and knowledgeable.

By Sara Kudrle and Michel Proulx, Broadcast Engineering

The State of Streaming Media Protocols 2013

Protocols are a geeky thing, almost up there with metadata or IP address schemes. Without protocols, there would be no concept of a webpage, email delivery, or even VoIP (voice over IP). In fact, there would be no streaming of any flavor.

With that in mind, let's take a quick look at the latest developments in a few key protocols, some of which are just coming into widespread use for streaming.

On Time or On Target? The UDP Question
You may have heard the truism that "time is money." We often use that concept when we talk about hardware- or software-based transcoding: If a job needs to be done quickly, even faster than real time, go for the hardware; if time is less critical, use software.

The same concept, slightly morphed, can be applied to the world of streaming protocols: If low latency is key, go with UDP (User Datagram Protocol), but if delivery guarantee is more critical, go with TCP (Transmission Control Protocol). The latter provides control over guaranteed packet delivery -- the control in the transmission control protocol name -- while the former does not.

Every other protocol that we discuss in this article will hinge on TCP, but recent advances on the UDP front bear mention.

I'm not going to retrace step by step the key points that Dom Robinson covers in his recent article, "Reliable UDP (RUDP): The Next Big Streaming Protocol?", but I do want to point out several highlights.

First, in and of itself UDP is unreliable, but it's fast and efficient. The protocol itself has no mechanism to guarantee delivery or request missing packets, as does TCP. A properly constructed application, though, can act as the first line of defense in detecting loss or packet corruption with subsequent request for dropped packets.

"It can take TCP upward of 3 seconds to renegotiate for the sequence to restart from the missing point," wrote Robinson, "discarding all the subsequent data, which must be requeued to be sent again. Just one lost packet can cause an entire ‘window' of TCP data to be re-sent."

Second, UDP can work hand-in-hand with error-correction techniques. Many legacy networks, including those based on Asynchronous Transfer Mode (ATM), used Forward Error Correction (FEC) to anticipate intermittent outages. FEC provides "packet flooding" at a certain percentage above 100% of packets -- be it 15%, 20%, 30% -- to allow the client application to reconstruct missing or corrupt packets without the need to request that packets be retransmitted.
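The toy example below shows the FEC principle with the simplest possible code: one XOR parity packet per group of five data packets (20% overhead) lets the receiver rebuild any single lost packet without asking for a retransmission. Production systems use far stronger codes.

```python
# Toy illustration of the FEC idea: send one extra parity packet per group so
# the receiver can rebuild any single lost packet without a retransmission
# request. Real systems use stronger codes (e.g. Reed-Solomon).
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(group):
    """Parity packet for a group of equal-length data packets."""
    parity = group[0]
    for packet in group[1:]:
        parity = xor_bytes(parity, packet)
    return parity

def recover_missing(received, parity):
    """Rebuild the single missing packet from the survivors plus parity."""
    rebuilt = parity
    for packet in received:
        rebuilt = xor_bytes(rebuilt, packet)
    return rebuilt

group = [b"pkt1...", b"pkt2...", b"pkt3...", b"pkt4...", b"pkt5..."]
parity = make_parity(group)          # 1 parity per 5 data packets = 20% overhead
lost = group.pop(2)                  # simulate one lost packet
assert recover_missing(group, parity) == lost
```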

Third, the concept of reliable UDP has been around for quite some time and can be accomplished with a number of open source tools, but there are also a number of commercial licensees that offer tools based on RUDP.

Your mileage may vary if you want to tinker under the hood of a freeware application such as UDP Data Transfer, or you may just want to reach out to a vendor in the RUDP space. Regardless of your choice, there are enough RUDP options out there to warrant research, especially if very low latencies and FEC are part of your workflow.

HTTP Über Alles
The fanfare around Dynamic Adaptive Streaming over HTTP, or DASH for short, continues to build at a rate we'd only ever seen back in the early streaming days of Real versus Microsoft. DASH was ratified back in late 2011, but its adoption has moved at a rapid pace.

Let's take a look at the HTTP flavors out there in current circulation: HDS, HLS, MPEG DASH, and Smooth Streaming. We'll look first at DASH followed, in alphabetical order, by the Adobe, Apple, and Microsoft proprietary offerings.

DASH It All, or Just 264?
Due to DASH's ability to deliver any of several types of files -- from the fragmented MP4 version of the ISO Base Media File Format (ISOBMFF) to the Apple-modified MPEG-2 Transport Stream (M2TS) -- the DASH specification reads like an encyclopedia of encoding, encryption, and delivery technologies.

To offset the potential problem that plagued the MPEG-4 Systems specification -- a wide-ranging standard, based on Apple QuickTime, that was too complex to easily implement -- the industry has begun looking at options and commonalities across all the major HTTP delivery solutions. It has, as of the time of this writing, begun drafting a subset specification that centers on H.264 served as fragmented MP4. This use of ISOBMFF and H.264 is being dubbed DASH 264.

What impact the DASH 264 specification has on the potential of bringing Apple into the DASH fold -- it was a contributor to the DASH specification, but is the only M2TS-based HTTP solution -- remains to be seen. But several industry players we spoke to stated the strong need to get solid DASH implementations into the field in early 2013.

Adobe's Flash vs. DASH Conundrum
Although Adobe initially wouldn't publicly come out in support of DASH, despite co-sponsoring an ISOBMFF white paper in late 2011, the company put its weight behind DASH in late February 2012.

"I am excited to announce that Adobe's video solutions will adopt the emerging video standard, MPEG-DASH across our video streaming, playback, protection and monetization technologies," wrote Kevin Towes in an Adobe blog post. "Adobe will support MPEG-DASH ISOBFF on demand and live profiles which are recommended in DASH-264 recommendation."

Towes anticipated the question many would ask: What about Flash and the Adobe HTTP Dynamic Streaming (HDS) protocol? He noted that Adobe will continue to push forward with HDS, even as Adobe supports DASH.

"Adobe will continue developing its HDS format used to deliver high quality, protected video experiences across multiple devices," wrote Towes, noting that the DASH profile for ISOBMFF "is similar to Adobe's HDS format and supports many of the performance objectives of the HDS format."

The continued development of HDS makes sense, as it offers Adobe the chance to try out new functionality within the confines of the Flash Player, but as we move into 2013, we wonder whether this is a sustainable model.

Adobe MAX 2013, to be held in May, may shed some light on HDS -- and perhaps even RTMP -- but meanwhile Adobe continues to show DASH functionality in the Flash Player.

An Apple (Protocol) a Day
Much has been made of the idea that HTTP Live Streaming (HLS) is a standard in the marketplace, but as one panelist at a recent DASH event quipped, "The IETF drafts of the Pantos spec are more a suggestion than a standard."

The Pantos spec, as it is known in the industry, is a series of working drafts for HLS submitted by two Apple employees as an informational draft for the Internet Engineering Task Force. As of the time of this article, the Pantos spec is at informational version 10.

Much has changed between the early versions and the most recent v10 draft, but one constant remains: HLS is based on the MPEG-2 Transport Stream (M2TS), which has been in use for almost two decades and is deployed widely for varied broadcast and physical media delivery solutions.

In that time frame, however, little has changed in basic M2TS transport stream capabilities. For instance, M2TS still lacks an integrated solution for Digital Rights Management (DRM). As such, no HLS version can use "plain vanilla" M2TS, and even the modified M2TS used by Apple lacks the timed-text and closed-captioning features found in more recent fragmented elementary stream streaming formats.

Yet Apple has been making strides in addressing the shortcomings of both M2TS and the early versions of HLS: Recent drafts of the HLS informational spec allow for the use of elementary streams, which are segmented on demand rather than beforehand. This use of elementary streams means that one Achilles' heel of HLS -- the need to store thousands, tens of thousands, or hundreds of thousands of small segments of long-form streaming content -- is now eliminated.

Google, with its Android mobile operating system platform, has adopted HLS for Android OS 4. Some enterprising companies have even gone back and created HLS playback for earlier versions of Android OS-based devices.

Smooth Streaming Ahead?
No discussion of HTTP protocol-based streaming delivery would be complete without a mention of Microsoft Smooth Streaming. After all, the Protected Interoperable File Format (PIFF) is the basis for the Common File Format (CFF) that is being used for UltraViolet, and the Common Encryption Scheme (CES) is based partly on Microsoft's 2008 idea that common encryption and encoding could be implemented around the fragmented MP4 standard.

As we enter 2013, rationalization of PIFF-CFF-CES and the upcoming Common Streaming Format (CSF) will continue to pare down the number of options, which is a good thing if we are to get back to the business of creating content and delivering it to anyone who wants to view it.

Yet Microsoft isn't resting on its laurels, as the company announced in late 2012 that it would be supporting Smooth Streaming via the Open Source Media Framework (OSMF) that is part of Adobe's Strobe initiative for Flash. Yes, you heard that right: Not only does Flash Player support DASH in beta (thanks to Adobe), but it now supports Smooth Streaming (thanks to Microsoft).

Is RTSP Dead?
One of the most venerable streaming protocols, the Real-Time Streaming Protocol, has been implemented natively into every type of device: set-top boxes, smartphones, tablets, and PCs. Yet these implementations are often fraught with buggy code, limited support, and a number of oddities.

In testing performed on a number of Android OS devices, we have been surprised to find that RTSP-based video playback -- served from a standards-based server -- could not be played with built-in applications and services, despite the requirement for the base Android OS to be able to play this content. Content that would play consistently on numerous RTSP implementations would stop dead in its tracks on other devices.

This was true of any standards-based streaming protocol in the late 1990s, but these days, consumers just expect their content to stream with limited buffering and at varying data rates. RTSP offers neither of these as certainty, but is quite inexpensive to implement, so we suspect that it will be with us for at least a few more years before retiring to greener pastures to make way for DASH and more recent streaming protocols.

Where Does RTMP Fit?
At a recent informational meeting I attended with a major software company, one slide caught my attention, more for what it omitted than for what it contained.

This particular slide listed the typical agnosticism -- codec, protocol, player -- of their soon-to-be-released update, and it had the regular litany of compatibilities. Yet what caught my attention was that the protocol list was, by far, the sparsest.

Only two protocols were noted -- HTTP and RTMP -- so I asked why RTMP was listed when other non-HTTP protocols were not. To summarize their response, they said RTSP and other non-HTTP protocols weren't being requested at all, but RTMP was still a valid industry solution.

Part of the reason lies in the fact that RTMP is "true" streaming with very low latencies and session "statefulness" that can't yet be found in HTTP-based delivery. In addition, RTMP is firmly entrenched on the vast majority of devices -- with the exception of the iOS devices -- thanks to the inclusion of the Flash Player on handsets, tablets, mobile devices, and PCs.

Yet, for all that entrenchment, as we've noted previously, Adobe continues to lean toward the HTTP delivery model in all the important ways including monetization functionality. We're not ruling out RTMP, but we do understand that the scalability and interoperability of HTTP solutions such as MPEG DASH and HLS offer compelling reasons to surf the fine streaming waves coming out of Apache servers everywhere.

Conclusion
So what does 2013 hold? We think the year is DASH's to lose.

Once DASH is officially supported in Flash, we see the possibility that DASH will be firmly enough entrenched to begin "hockey stick" growth for online video delivery. If DASH 264 can be implemented as quickly as it appears it will be ratified, and if there is some form of rationalization between HLS and DASH, including the ability to include Apple's DRM scheme in the Common Encryption Scheme, we might just note 2013 not only as the beginning of true online video delivery growth but also as the point at which cable and satellite providers begin to pay attention to delivery to all devices -- including set-top boxes -- for a true TV Everywhere experience.

By Tim Siglin, StreamingMedia

Mobility and OTT will Drive Early H.265 Adoption

Ratification of the H.265 (High Efficiency Video Coding) standard by the ITU (International Telecommunication Union) in late January clears the way for its adoption as the anointed successor to H.264/MPEG-4.

Henceforth to be known as H.265, the standard raises two questions: How quickly will it ripple through the market, and when will we arrive at an all-H.265 world?

The answer is that H.265 will have a quite different adoption profile from H.264, being slower to be taken up by makers of traditional managed set-top boxes, but much quicker for consumer devices.

When H.264 was at a similar stage a decade ago, after its ratification in June 2003, there were in any case hardly any consumer devices for consuming video content other than TVs and DVD players. This is the first big difference today: the existence of smartphones and tablets, with a frenetic pace of product innovation and release that has vendors such as Apple eating their own lunch every six months or so. They will be the early adopters of H.265, which will be standard in tablets and smartphones by the end of 2014, if not a bit sooner.

By contrast, we will then be seeing only the first adoption of H.265 in set-top boxes, based on chips announced by Broadcom at last month’s Consumer Electronics Show (CES) in Las Vegas.

The eagerness of CE makers to adopt H.265 is driven by two markets, mobility and fixed-line OTT, although with a different flavor in each case. For mobility, bandwidth reduction is the primary driver as proliferating video threatens to bring cellular infrastructures to their knees. This will remain the case even though Cisco has just revised downward its forecasts for mobile data volumes through 2017 from the giddy numbers that had been projected.

So, although mobile data can and increasingly will be offloaded via WiFi onto broadband networks, even there the ability of H.265 to halve the bandwidth compared with H.264 will yield big cost savings. Over time, deployment of H.265 will also allow video quality to improve over mobile networks, but the emphasis in the short term at least will be on bandwidth reduction for video at existing resolutions.

In the case of fixed-line OTT, it is the other way around, with quality in the driving seat. There, H.265 is seen as a way of delivering HD services over limited bandwidth, and I can see little reason why operators will not seize the opportunity. There is also an interesting IPTV angle, where the motive could be to increase the range rather than the quality, extending the existing service to consumers previously too far from the nearest exchange to obtain adequate QoS.

By halving the bandwidth required for a given quality, it may be possible almost to double the distance. Even for those operators that currently deliver IPTV exclusively over Fiber to the Home, enabling multichannel HD services, H.265 may open up the possibility of reaching more customers via VDSL2 over copper, at distances up to 1km from the nearest fiber end point.

The other main driver for H.265 is ultra HD, or 4K, and here things get interesting. I have to admit to being among those who saw H.265 as inadequate for 4K transmission, given that 4K will generate 8X as many bits per second as most current 720p or 1080i HD services. But, that was before I looked more closely at the specification.

Although H.265 is touted as being about twice the efficiency of H.264, it will do far better than that for 4K. The reason is simply that at those high resolutions there will be even more scope for intra-frame compression, because any area of the picture that is all a similar color can be represented in virtually the same number of bits irrespective of the pixel density.

For example, in the case of a sporting event where there is a lot of grass in the picture, that region can effectively be encoded with the same number of bits in ultra HD as in standard definition, which means that the bit-rate reduction is correspondingly greater for the former. H.265 has been designed with this in mind through its support for blocks of pixels as large as 64 x 64, compared with the 8 x 8 blocks typically used in current H.264 codecs.
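The arithmetic behind that design choice is straightforward: covering a uniform region of an ultra HD frame takes far fewer 64 x 64 blocks than 8 x 8 blocks, so the per-block signalling overhead drops sharply. The figures below are purely illustrative.

```python
# Simple arithmetic behind the larger-block argument: a uniform ultra HD region
# needs far fewer 64 x 64 blocks than 8 x 8 blocks to cover.
WIDTH, HEIGHT = 3840, 2160   # ultra HD (4K) frame

def blocks(block_size):
    # ceiling division: partial blocks at the frame edges still need coding
    cols = -(-WIDTH // block_size)
    rows = -(-HEIGHT // block_size)
    return cols * rows

print("8 x 8 blocks per frame:  ", blocks(8))    # 129,600
print("64 x 64 blocks per frame:", blocks(64))   # 2,040 -- roughly 64x fewer
```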

H.265 will also be able to exploit parallel processing, which will be employed in future codecs, by dividing the picture in the first instance into tiles that can be encoded independently of each other, giving further potential for efficient compression of ultra HD. For this reason I think that current estimates that H.265 will enable delivery over the Internet at bit rates between 20 Mbps and 30 Mbps, as against 45 Mbps for H.264, are too conservative. It looks now that H.265 could achieve a three or even fourfold improvement over H.264 for ultra HD, which means that 15 Mbps should certainly be achievable.

Even so, we are not going to be seeing widespread 4K services anytime soon, remembering also that it only really scores for TVs bigger than 50 inches and even then only given the right viewing distance. Although there is still debate over the relationship between screen size, distance and the resolution beyond which there is no discernible improvement in quality, it is clear that for several years ultra HD will be a niche market driven by the same cutting edge operators that currently offer 1080p60 HD.

It also looks unlikely there will be the same wholesale stampede to H.265 among set-top box makers that there was to H.264. But, in the case of tablets and smartphones, the stampede will be all the more notable.

By Philip Hunter, Broadcast Engineering Blog

Docomo Demos HEVC (H.265) New Video Coding Standard




Source: DigInfo TV

Visual SyncAR Brings TV Content into your Living Room

Visual SyncAR, under development by NTT, uses digital watermarking technology to display companion content on a second screen, in sync with the content being viewed on the TV.



Source: DigInfo TV

Synthetic Double-Helix Faithfully Stores Shakespeare's Sonnets

A team of scientists has produced a truly concise anthology of verse by encoding all 154 of Shakespeare’s sonnets in DNA. The researchers say that their technique could easily be scaled up to store all of the data in the world.

Along with the sonnets, the team encoded a 26-second audio clip from Martin Luther King’s famous “I have a dream" speech, a copy of James Watson and Francis Crick’s classic paper on the structure of DNA, a photo of the researchers' institute and a file that describes how the data were converted. The researchers reported their results on Nature’s website.

The project, led by Nick Goldman of the European Bioinformatics Institute (EBI) at Hinxton, UK, marks another step towards using nucleic acids as a practical way of storing information — one that is more compact and durable than current media such as hard disks or magnetic tape.

“I think it’s a really important milestone,” says George Church, a molecular geneticist at Harvard Medical School in Boston, Massachusetts, who encoded a draft of his latest book in DNA last year. “We have a real field now.”

DNA packs information into much less space than other media. For example, CERN, the European particle-physics lab near Geneva, currently stores around 90 petabytes of data on some 100 tape drives. Goldman’s method could fit all of those data into 41 grams of DNA.
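Worked out from those figures, the implied density is striking:

```python
# Density implied by the figures quoted above (90 petabytes in about 41 grams).
petabytes, grams = 90, 41
print(f"{grams / petabytes:.2f} g of DNA per petabyte")   # ~0.46 g/PB
print(f"{petabytes / grams:.1f} PB per gram of DNA")      # ~2.2 PB/g
```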

This information should last for millennia under cold, dry and dark conditions, says Goldman, as is evident from the recovery of readable DNA from long-extinct animals. “The experiment was done 60,000 years ago when a mammoth died and lay there in the ice,” he says. “And those weren’t even carefully prepared samples.”

And whereas current media such as cassette tapes or compact discs become obsolete as soon as their respective players are replaced by new technology, scientists will always want to read and study DNA, Goldman says. Sequencers might change, but you can “stick the DNA in a cave in Norway for a thousand years and we’ll still be able to read it”. This creates enormous savings for archivists, who will not have to keep buying new equipment to rewrite their archives in the latest formats.

Data Capture
Goldman’s team encoded 5.2 million bits of information into DNA, roughly the same amount as Church’s team did. But Church’s team used a simple code, where the DNA bases adenine or cytosine represented zeroes, and guanine or thymine represented ones. This sometimes led to long stretches of the same letter, which is hard for sequencing machines to read and led to errors.
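A minimal sketch of that simple mapping, as described, shows where the trouble comes from: with one fixed base per bit value, a run of identical bits becomes a run of identical bases, which sequencers struggle to read. The code below is illustrative only.

```python
# Sketch of the simple binary-to-base mapping described above (A or C for 0,
# G or T for 1). With a fixed choice per bit, a run of identical bits becomes
# a run of identical bases -- the sequencing problem noted in the text.
def encode_bits(bits: str) -> str:
    mapping = {"0": "A", "1": "G"}   # one fixed choice from each pair
    return "".join(mapping[b] for b in bits)

def decode_bases(bases: str) -> str:
    to_bit = {"A": "0", "C": "0", "G": "1", "T": "1"}
    return "".join(to_bit[b] for b in bases)

data = "0110000011"
strand = encode_bits(data)           # 'AGGAAAAAGG' -- note the 5-base run of A
assert decode_bases(strand) == data
```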

Goldman’s group developed a more complex cipher in which every byte — a string of eight ones or zeroes — is represented by a word of five letters that are each A, C, G or T. To limit errors further, the team broke the DNA code into overlapping strings, each 117 letters long, with indexing information to show where each belongs in the overall code. Because the strings partially overlap, any error on one string can be cross-checked against three other strings.

Agilent Technologies in Santa Clara, California, synthesized the strings and shipped them back to the researchers, who were able to reconstruct all of the files with 100% accuracy.

Wider use of DNA storage is largely hampered by the high cost of writing and reading DNA. The EBI team estimates that it costs around $12,400 to encode every megabyte of data, and $220 to read it back. However, these costs are falling exponentially. The technique could soon be feasible for archives that need to be maintained long term but will rarely be accessed, such as CERN’s data.

If costs fall by 100-fold in ten years, the technique could be cost-effective if you want to store data for at least 50 years. And Church says that these estimates may be too pessimistic, as “the cost of reading and writing DNA has changed by a million-fold in the past nine years, which is unheard of even in electronics”.

Goldman adds that DNA storage should be apocalypse-proof. After a hypothetical global disaster, future generations might eventually find the stores and be able to read them. “They’d quickly notice that this isn’t DNA like anything they’ve seen,” says Goldman. “There are no repeats, and everything is the same length. It’s obviously not from a bacterium or a human. Maybe it’s worth investigating.”

By Ed Yong, Nature