Media Services: Services Architecture Enables Functional Flexibility

Unfortunately, the term “media services” contains two overloaded terms — “media” and “services.” In this case, when we talk about media services, we are talking about small (some would say atomic) network-based applications, or “services,” that perform simple, focused tasks.

These tasks operate on either essence (the audio and video content itself) or the metadata used by professional broadcasters, post-production facilities and the film industry.

An example of a “media service” might be a service sitting out on a network that is available to transcode content from one popular video format to another. One can imagine a host of services, including tape ingest, QC and file movement. Each of these services is available out on the network and can be used to perform a discrete unit of work. Higher-level functions are performed by grouping a number of atomic services together in a logical way. But, at their core, media services are small, discrete pieces of software that can be combined in different ways to perform work.

This is a significant departure from traditional media infrastructures, where an ingest station consists of tape machines, routers, monitors and other hardware — all hard-wired together to perform a specific function. In fact, entire broadcast chains are built this way. They are highly optimized and efficient, but they can be very difficult to change. And, if one thing is certain these days, it is that change is a permanent part of our business.

A media services architecture allows discrete blocks of functionality to be combined to build complex workflows. As workflows change, blocks can be recombined into modified workflows. If new functionality is added, new services can be deployed. Additionally, discrete services may be used in multiple workflows. So, a transcoder may be deployed in a post-production scenario for one job and then redeployed in a conversion for web applications next.
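The idea of recombining discrete services can be sketched in code. This is a minimal, hypothetical illustration — the service names, formats and paths are assumptions, not a real API — showing how the same transcode service might serve both a post-production workflow and a web-delivery workflow.

```python
# Hypothetical sketch: discrete media services modeled as small functions
# that can be recombined into different workflows. All names, formats and
# paths are illustrative, not a real API.

def ingest(tape_id):
    """Simulate tape ingest: produce a path to the raw file."""
    return f"raw/{tape_id}.mxf"

def transcode(path, target_format):
    """Simulate transcoding: swap the file extension for the target format."""
    return path.rsplit(".", 1)[0] + f".{target_format}"

def store(path, server="central"):
    """Simulate file movement to a storage server."""
    return f"{server}:{path}"

# One workflow: post-production delivery using a mezzanine format
def post_workflow(tape_id):
    return store(transcode(ingest(tape_id), "prores"))

# The same transcode service, redeployed in a web-delivery workflow
def web_workflow(tape_id):
    return store(transcode(ingest(tape_id), "mp4"), server="webcache")
```

Nothing about `transcode` is tied to either workflow; it is simply a block of functionality that both pipelines consume.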

Building Workflows
When using media services, it is not enough that the services are available out on the network; something must consume those services in order to perform valuable work for the organization. There are several approaches to using services, but, for this article, we are going to focus on two of them — orchestration and event-driven architecture.

Orchestration systems sit on top of media services and use media services to move work through a defined pipeline from start to finish. For example, an orchestration system might have a workflow that ingests a tape, transcodes the content and then saves the file on a large central server.

The orchestration system tracks the progress of the workflow, calling on various services to work on the job as it moves through the pipeline. The orchestration system is responsible for not only dealing with normal flows, but it is also responsible for dealing with error conditions such as a failed transcode. Orchestration can start out simple, but it can become complicated as engineers consider all of the various states and error conditions possible in the workflow.
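As a sketch of the orchestration pattern described above — under the assumption of a simple ordered pipeline with a retry policy, since the article does not specify one — a central system might track each job's progress and handle a failed step like this:

```python
# Hypothetical sketch of an orchestrator that moves a job through a
# defined pipeline, tracking state and handling failed steps.
# The step names and retry policy are illustrative assumptions.

class Orchestrator:
    def __init__(self, steps):
        self.steps = steps        # ordered list of (name, callable)
        self.history = []         # state transitions, for monitoring

    def run(self, job, retries=1):
        for name, step in self.steps:
            for attempt in range(retries + 1):
                try:
                    job = step(job)
                    self.history.append((name, "ok"))
                    break
                except Exception:
                    self.history.append((name, "failed"))
            else:
                # the step exhausted its retries: abort the workflow
                self.history.append((name, "aborted"))
                return None
        return job

# A simple ingest -> transcode -> store pipeline
pipeline = Orchestrator([
    ("ingest", lambda j: j + ["ingested"]),
    ("transcode", lambda j: j + ["transcoded"]),
    ("store", lambda j: j + ["stored"]),
])
result = pipeline.run([])
```

The `history` list is where the complexity the article warns about accumulates: a production orchestrator must account for every possible state and error condition at every step.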

Event-driven Architecture
Event-driven architecture is another way to use services to perform work. At a high level, an event is something that happens that is of significance to the business. Processing engines can be set up to listen for that event, and when the event happens, they can perform actions based on it. Other processing engines can be listening downstream, and when one event engine finishes, others can be triggered. In an event-driven architecture, there is no central system guiding the flow of work through a pipeline. The movement of work through the facility is caused by a sequence of events and processes.

An operator finishing an ingest activity might create an “Ingest Complete” event. Event processing engines subscribe to event channels, such as the “Ingest Complete” event channel. This particular event-processing engine might have two actions: the first is to notify the QC operator that the file is available for quality control checking; and the second is to publish an “Ingest Complete” notification for other systems that might be interested in the event, such as traffic and automation systems. Both of these systems might update the status of the media based on the “Ingest Complete” event.

Note that, in this example, it would be extremely easy to add another event-processing engine to the event channel. This engine might be responsible for creating a number of different formats from the original ingested file format. Adding this process does not require modifying a workflow in a central system. All one has to do is to subscribe the transcoding engine to that particular event channel.
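The publish/subscribe pattern behind this example can be sketched as follows. This is a minimal illustration, not a real event framework; the channel name, engine behaviors and payload are assumptions.

```python
# Hypothetical sketch of an event channel: adding a new processing
# engine is just another subscription, with no central workflow to edit.
# The channel name and engine behaviors are illustrative.

class EventBus:
    def __init__(self):
        self.channels = {}

    def subscribe(self, channel, handler):
        self.channels.setdefault(channel, []).append(handler)

    def publish(self, channel, payload):
        # deliver the event to every engine subscribed to this channel
        for handler in self.channels.get(channel, []):
            handler(payload)

bus = EventBus()
log = []

# Existing engines on the "ingest-complete" channel
bus.subscribe("ingest-complete", lambda f: log.append(f"notify QC: {f}"))
bus.subscribe("ingest-complete", lambda f: log.append(f"update traffic: {f}"))

# Later: add a transcoding engine without touching any central workflow
bus.subscribe("ingest-complete", lambda f: log.append(f"transcode: {f}"))

bus.publish("ingest-complete", "tape42.mxf")
```

The key property is that the publisher never changes: new behavior enters the system purely by subscribing another engine to the channel.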

It is important to realize that orchestration and event-driven architecture are complementary, and they are frequently deployed together. For example, in the tape ingest scenario described above, the ingest function might be driven by an orchestration system that precisely controls workflow and error-handling conditions, while event-driven architecture would be used to notify other service-oriented architecture (SOA) processes once the ingest is complete.

Common Approach
Common service interface definitions are critical. One can imagine a whole universe of services: a content repository service, a media identification service, a publish content to ISP service, and so on. And, one can imagine that several different vendors would make such services available. If each vendor defined the interface to their service independently, the amount of software integration required to build these systems would be huge. On the other hand, if the industry would agree on the service interface definition for an ingest service, for example, then it would be possible to integrate various ingest services into a workflow with minimal additional development.
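The value of a shared interface definition can be sketched in code. This is a hypothetical contract, not the actual FIMS interface; the vendor classes and method signature are illustrative assumptions. The point is that a workflow written against the common interface works with any conforming implementation.

```python
# Hypothetical sketch of a common service interface: if every vendor's
# transcoder implements the same contract, workflows can swap vendors
# with minimal integration work. This interface is illustrative, not FIMS.

from abc import ABC, abstractmethod

class TranscodeService(ABC):
    @abstractmethod
    def transcode(self, source: str, target_format: str) -> str:
        """Return the path of the transcoded file."""

class VendorATranscoder(TranscodeService):
    def transcode(self, source, target_format):
        return source.rsplit(".", 1)[0] + "." + target_format

class VendorBTranscoder(TranscodeService):
    def transcode(self, source, target_format):
        # a different vendor, same contract, different output location
        return f"out/{source.rsplit('.', 1)[0]}.{target_format}"

# A workflow written against the interface accepts either vendor
def deliver(svc: TranscodeService, source):
    return svc.transcode(source, "mp4")
```

Without the shared `TranscodeService` contract, each vendor's service would require its own integration code — exactly the cost the article argues a common definition avoids.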

Common service interface definitions are critical, but it is also critical that we have a common overall framework within which services can be deployed. How do services communicate with orchestration systems and with each other in event-driven architectures? How do newly commissioned services make their presence known on a network? Again, having a harmonized approach to an overall media service architecture will lower costs and shorten implementation time.

One last critical element in the discussion of media services is governance. Governance brings logic and structure to media services. Areas typically covered by governance include the service life cycle (how services are developed, deployed, deprecated and eventually decommissioned), prioritizing the deployment of new services, and ensuring the quality of deployed services.

There is a task force in the industry called the Framework for Interoperable Media Services (FIMS). FIMS is a collaboration between the Advanced Media Workflow Association and the European Broadcasting Union. FIMS is the first industry effort focused on developing services for the professional media industry. The FIMS group consists of a Business Board that develops business priorities for service development, and a Technical Board that oversees the development and deployment of FIMS services.

You can learn more about FIMS on the FIMS website, and technical information is available on the FIMS wiki. This activity has already yielded an overall framework for media services and several specific media service definitions. Work is ongoing, all the work is public, and anyone can participate.

By Brad Gilmer, Broadcast Engineering