This article discusses the current state of online video, delves into the DASH standard, explores the challenges of building a DASH player, and, finally, walks through the basics of implementing the open source Dash.js player.
An interesting white paper by David Austerberry.
An interesting guide about x265, an open source HEVC implementation.
Tuesday, November 12, 2013
Quiptel, a five-year-old start-up, claims it has an online media platform that improves the user experience of streaming audio and video while making more efficient use of available network bandwidth. It is a bold claim, which acting chief executive Richard Baker put to the test at a low-key launch in a London hotel.
Invited media and analysts were shown multiple high-definition streams over a modest broadband connection. It appeared to work well in the demonstration, with rapid media start-up a notable feature, said to be three to six times faster than conventional online video approaches, although there had been problems earlier in the day when the hotel apparently lost its network access.
The patented technology appears to be based on using multiple logical network routes and a network overlay that intelligently manages traffic to optimise use of available access network bandwidth.
The result, Quiptel claims, is that more of the available network capacity is used to deliver sound and pictures, while operators can serve more customers with equivalent infrastructure.
Quiptel says its approach means an operator can deliver up to 30% more streams than HTTP Live Streaming for the same capacity. Some of the claims are documented in a detailed technical white paper that benchmarks QMP QFlow against Apple HLS. It says that HLS incurs more bandwidth overhead above the audio and video bitrate.
However, given the increasing capacity of connections and falling connectivity costs, the assumed savings may be less significant than potential improvements to quality of service, particularly over constrained connections or in congested network conditions.
The concept of adaptive bitrate streaming has been around since the turn of the century. Many online video services currently use it: files are broken into chunks, and the player switches stream quality dynamically to maintain continuity in changing network conditions.
What Quiptel appears to be doing is adding multipath connections to optimise delivery over diverse network routes.
It is an approach that is familiar to informitv from its groundbreaking work with Livestation, a pioneer of live peer to peer streaming.
In principle it can provide more robust delivery in changing network conditions. In theory that is inherent in internet protocols, but so-called overlay networks can add more intelligent routing that can optimise distribution dynamically.
One of the challenges is that this requires more intelligence to be built into the player application. Quiptel has clients for multiple operating systems, including personal computers and iOS or Android devices, and they can also be embedded in smart televisions, set-top boxes or media players. Quiptel showed an end-to-end system using an Android set-top box.
Quiptel is aiming to offer QMP, the Quiptel Media Platform, to service providers on a white label basis or as a licensed technology.
The company is based in Hong Kong. The founder is Peter Do, who previously worked with voice over internet protocol systems. Just as VOIP services, of which Skype, now owned by Microsoft, is the best known, have disrupted traditional telephony by enabling audio and video over broadband networks, so Quiptel hopes to enable a premium media experience over either managed or unmanaged networks.
“It provides a greater than 50% capex and opex saving for service providers over traditional IPTV systems, enabling a quicker time to market while expressly focusing on quality of service video delivery,” he said.
QMP includes various components, known as QFlow, QNav and QRouter, based on patented core technologies and intelligent, network-optimising traffic management specifically designed for delivering high quality video across managed and unmanaged mobile and broadband networks to multiple devices.
Quiptel faces a challenge in deploying its approach with service providers that have already made technology choices, but it could provide an advantage to those looking to roll out new services.
Monday, November 04, 2013
An interesting article by Nicolas Weil.
Wednesday, October 02, 2013
Labels: MPEG DASH
The EBU has published the first release of its QC Criteria (EBU Tech 3363) developed by its Strategic Programme on Quality Control. EBU Tech 3363 is a large collection of QC checks that can be applied to file-based audiovisual content.
Examples include the detection of visual test patterns, loudness level compliance checks, colour gamut verification, looking for image sequences that may trigger epileptic seizures, etc. This collection of QC tests can be seen as a 'shopping list' which media professionals can use to, for example, create their own delivery specifications or establish test protocols for archive transfer projects.
Each QC Criterion in the list features a definition, references, tolerances, and an example, to help users reproduce the same tests with potentially different equipment (e.g. think of a broadcaster who receives material from a wide range of post production houses). Obviously such information can get quite technically detailed.
That is why the EBU group decided to present the overview of QC Criteria in a tabular format, similar to the well-known Periodic Table of chemical elements. Several characteristics support this metaphor, including the concept of specifying the tests as 'atomically' as possible, the fact that some tests possess (much) more complex properties than others and the idea of categorizing them into groups with similar characteristics.
For the EBU QC Criteria these are: audio, video, format/bitstream and metadata/other.
Monday, September 30, 2013
Labels: Quality Control
We are on the verge of an important inflection point for the Web. In the next few years commercial web video delivery utilizing new, international standards (DASH Media Ecosystem) will become commonplace. These standards will enable cross-platform, interoperable media applications and will transform the media entertainment industry, delight consumers and expand the nature of the Web.
Although all of the standards outlined below are necessary, the most significant change was the introduction of interoperable digital rights management technologies which enable the distribution of digital media on the open web while respecting the rights of content producers.
Download the white paper
Via Video Breakthroughs
This white paper has been commissioned by the Digital Production Partnership (DPP) to provide an assessment of the applicability of cloud technology and services to broadcast processes and key media business areas.
Thursday, September 19, 2013
Labels: Cloud Media Services
This report provides an overview of image and audio technology standards and requirements for UHDTV production in the professional broadcast domain. This report represents a SMPTE study primarily focused on real time broadcasting and distribution and is therefore not an exhaustive analysis of UHDTV1 and UHDTV2.
Click here for the report
Thursday, September 19, 2013
Version 2.0 of the DASH-AVC/264 guidelines, with support for 1080p video and multichannel audio, is now publicly available on the DASH Industry Forum (IF) website.
The new guidelines include several promised extensions, including one on HD video that moves the recommended baseline from 720p to 1080p.
720p had been chosen, according to the initial guidelines released in May, as a "tradeoff between content availability, support in existing devices and compression efficiency." At that time, the baseline video support used the Progressive High Profile Level 3.1 decoder and supported up to 1280x720p at 30 fps.
"The choice for HD extensions up to 1920x1080p and 30 fps is H.264 (AVC) Progressive 12 High Profile Level 4.0 decoder," the new guidelines state, adding support for 4.0 decoders that was lacking in the previous set of guidelines.
In addition, the guidelines also provide a way to handle standard definition (SD) content.
"It is recognized that certain clients may only be capable to operate with H.264/AVC Main Profile," the guidelines state. "Therefore content authors may provide and signal a specific subset of DASH-AVC/264 by providing a dedicated interoperability identifier referring to a standard definition presentation. This interoperability point is defined as DASH-AVC/264 SD."
The new guidelines also cover several multichannel audio options.
"The baseline 1.0 version of DASH-AVC/264 only required support for HE-AACv2 stereo," says Will Law, secretary of DASH IF and Chairman of its Promotions Working Group. "Version 2.0 introduces multichannel Dolby, DTS and also Fraunhofer profiles."
Law also says that there will be a number of DASH-AVC demonstrations at IBC at Amsterdam's RAI Convention Centre on September 12, 2013. "These demonstrations will show the latest advancements in the DASH workflow, from encoding, through delivery and playback, including 4K video, HEVC and multichannel audio," says Law. "You'll also see HbbTV and multi-screen applications as well as solutions for DASH use in the broadcast world."
These demonstrations will occur at various booths, including Akamai (the company where Law works as a Principal Architect for Media), Ericsson, Haivision, Microsoft, Nagra, and a host of others.
The official version will be launched soon at the DASH IF site but until then the Digital Primates demo can be found on their site. The demo requires Chrome or Internet Explorer 11 (IE); PlayReady DRM playback is currently only available with IE for this demo.
As the DASH IF points out, DASH-AVC/264 "does not intend to specify a full end-to-end DRM system" but it does provide a framework for multiple DRMs to protect DASH content. The guidelines allow the addition of "instructions or Protection System Specific, proprietary information in predetermined locations to DASH content" that has previously been encrypted with what's generally known as the Common Encryption Scheme (ISO/IEC 23001-7).
By Tim Siglin, StreamingMedia
Telestream has announced the public availability of an open source H.265 (HEVC) encoder. The new project aims to create the world’s most efficient, highest quality H.265 codec.
The initiative is being introduced under both open source and commercial license models and is being managed by project co-founder MulticoreWare Inc, Telestream's development partner.
“Telestream and MulticoreWare have had great success in the acceleration and commercial deployment of x264 and believe that a similar approach with the collaborative development of the next generation of high-efficiency codecs will benefit the industry,” commented Shawn Carnahan, CTO at Telestream. “The x264 project proved the effectiveness of developing a codec of this complexity. Leveraging the x264 technology in this new project will ensure that the new codec is as robust, efficient and high quality as its predecessor.”
Jason Garrett-Glaser, lead developer of the x264 project added: "Previous collaboration between Telestream and MulticoreWare led to successful work on the GPU acceleration of x264, a task deemed by many to be incredibly difficult, if not impossible. With these accomplishments in mind, I am excited to support Telestream in the founding of the x265 project, which follows in the x264 tradition of high performance, quality, and flexibility under an open source license and business model."
Access is free under GNU GPL licensing, and commercial licenses are available for companies wishing to use the resulting implementation in their products. More information can be found at x265.org, where companies and individuals can contribute to the project.
SCRATCH Play supports a wide range of media formats: from cinematic RAW files (RED, Arri, Sony, Canon, Phantom, etc) to DSLR RAW files (Canon 5D, Nikon D600, etc) to editorial formats (MXF, WAV, etc) to pro VFX/still formats (DPX, EXR, etc), as well as web-based media (QuickTime, Windows Media, MP4, H.264, etc) and still image formats (TIFF, JPG, PNG, etc).
SCRATCH Play features powerful color-correction tools that let you set looks on-set, and generate LUTs, CDLs or JPEG snapshots. This is the same color technology found in SCRATCH - ASSIMILATE’s world-class professional color-grading application.
SCRATCH Play is not just any media player. Sure, it plays virtually anything, but it also supports features only found in professional applications including camera metadata display, clip framing, rotating and resizing.
Download the FREE version:
For Mac OS X
Wednesday, September 04, 2013
Labels: Media Players
A confluence of technologies, evolving business models and changing consumer lifestyles is propelling the rise of online video and fundamentally transforming TV, advertising and content delivery methods.
Here are the online video ecosystem segments and companies that are giving rise to this transformation.
Tuesday, September 03, 2013
Labels: OTT TV
This document details work carried out by BBC R&D for the EBU Project Group on Future Storage Systems (FSS).
Media storage is still expensive and very specialist, with a few suppliers providing high performance network storage solutions to the industry. Specifying, selecting and configuring storage is very complex, with technical decisions having far reaching cost and performance implications.
The viability of true file-based production will not be determined by storage performance alone. How applications, networks and protocols behave is fundamental to getting the best performance out of network storage.
This paper details the experience gained from testing two different approaches to high performance network storage and examines the key issues that determine performance on a generic Ethernet network.
All the graphs shown were produced from measured performance data using the BBC R&D Media Storage Meter open source test tool.
Wednesday, August 28, 2013
Broadcasters have many challenges as the media business evolves, driven by new consumer devices and the increase in mobile viewing. National broadcasters are facing more competition from global operators. New entrants like YouTube and Netflix have changed the VOD landscape. Broadcasters that once aired one channel now air a multiplex of linear channels, as well as providing catch-up and mobile services. Summing up, broadcasters must deliver to more platforms, linear and on-demand, in a more competitive business environment.
To meet these challenges, a business must become more agile. Many other sectors have faced similar challenges, and part of the solution for many has been to turn to new software applications, particularly Business Process Management (BPM) and the Service-Oriented Architecture (SOA). Although each can be used stand-alone, BPM and SOA are frequently used in concert as a platform to improve the performance of a business.
Operations that use videotape were constrained by the need for manual handling, but as content migrates from videotape to digital files, the way is open to use IT-based methodologies, including BPM and SOA, to aid broadcast operations.
What is SOA?
SOA is a design methodology for software systems. SOA is not a product, but an architecture to deploy loosely coupled software systems to implement the processes that deliver a business workflow. SOA provides a more viable architecture to build large and complex systems because it is a better fit to the way human activity itself is managed — by delegation. SOA has its roots in object-oriented software and component-based programming.
In the context of the media and entertainment sector, SOA can be used to implement a “media factory,” processing content from the production phase through to multi-platform delivery.
Legacy Broadcast Systems
Traditional broadcast systems comprise processing silos coupled by real-time SDI connections, file transfer and an assortment of control protocols. Such systems are optimized for a specific application and may provide a good price/performance ratio with high efficiency.
The tight coupling of legacy systems makes it difficult to upgrade or replace one or more components. These applications are typically coupled via proprietary APIs, as shown in Figure 1. If the software is upgraded to a new version, the API can change, necessitating changes to other applications that are using the API — work that is usually custom.
This also makes it difficult to extend the system with new functionality to meet the ever-changing demands of multi-platform delivery and evolving codec standards. Storage architectures are changing too, with object-based and cloud storage becoming alternatives to on-premise NAS and SAN arrays.
When vendors upgrade products, the new versions often do not support legacy operating systems, leading to the need to replace underlying computer hardware platforms.
Traditional systems are just not agile enough to easily support the new demands of the media business. Often new multi-platform systems are tacked on to existing linear playout systems in an ad-hoc manner to support an immediate demand. The system grows in a way that eventually becomes difficult to maintain and operate.
Monitoring — No Overall Visibility
Traditional systems also suffer from a lack of visibility of the internal processes. Individual processes may display the status on a local user interface, but it is difficult to obtain an overall view (dashboard) of the operation of the business.
As broadcasters strive for more efficiency, it is vital to have an overall view of technical operations as an aid to managing existing systems and guiding future investment. Many broadcasters already have end-to-end alarm monitoring, but resource usage may only be monitored for billing purposes, and not to gain intimate knowledge of hardware and software utilization.
SOA in Media
SOA is not new; it has been in use for a decade or more in other sectors including defense, pharmaceuticals, banking and insurance. It developed from the principles of object-oriented software design and distributed processing.
If SOA is common in other sectors, why not just buy a system from a middleware provider? The problem lies with the special nature of media operations. The media sector has lagged other sectors in the adoption of such systems for a number of reasons. These include the sheer size of media objects and the duration of some processes. A query for an online airline reservation may take a minute at most; a transcode of a movie can take several hours. Conventional SOA implementations are not well suited to handling such long-running processes.
What is a Service?
A service is a mechanism to provide access to a capability. A transcoding application could expose its capability to transcode files as a transform service. Examples of services in the broadcast domain include ingest, transform, playout, file moves and file archiving. The service is defined at the business level rather than the detailed technical level.
It could be said that many broadcasters already operate service-oriented systems; they just don't extend the methodology to the architecture of technical systems.
Services share a formal contract. Service contracts are already commonplace in broadcasting and across the M&E sector, with companies calling on each other for capabilities such as playout, subtitling and effects. The service-level agreement for playout will include quality aspects such as permitted downtime (99.999 percent). Service contracts operate at the business level, and ultimately may result in monetary exchange.
The business management logic may call for a file to be transcoded from in-house mezzanine to YouTube delivery format, but not define the specifics of a particular make and model of transcoder or the detail of the file formats. This abstracts the business logic from the underlying technical platforms. A generic service interface for file transform can be defined, and then each transcoder is wrapped by a service adaptor that handles the complexity of the transcode process. To the business logic, the transcode is simply a job. The abstraction of the capability is a key principle of the SOA.
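To make the abstraction concrete, here is a minimal, hypothetical Java sketch of a generic transform service interface with a vendor adaptor behind it. The type and method names (TransformService, TransformJob, submitJob and so on) are illustrative only and are not drawn from FIMS or any vendor API.

```java
// Hypothetical sketch of a generic, business-level transform service and a vendor adaptor.
// All names are illustrative; a real deployment would use FIMS or vendor-defined interfaces.
public interface TransformService {

    /** Submit a transcode job; returns an identifier the orchestration layer can poll. */
    String submitJob(TransformJob job);

    /** Query the state of a previously submitted job. */
    JobStatus getStatus(String jobId);
}

/** A business-level job description: what to transform and to which delivery profile. */
class TransformJob {
    final String sourceUri;      // e.g. location of the in-house mezzanine file
    final String targetProfile;  // e.g. "youtube-delivery"; no codec details leak into the business logic

    TransformJob(String sourceUri, String targetProfile) {
        this.sourceUri = sourceUri;
        this.targetProfile = targetProfile;
    }
}

enum JobStatus { QUEUED, RUNNING, COMPLETED, FAILED }

/** Service adaptor that hides one particular transcoder product behind the generic interface. */
class VendorTranscoderAdaptor implements TransformService {

    @Override
    public String submitJob(TransformJob job) {
        // Here the adaptor would translate the business-level profile into the
        // vendor's proprietary API calls and start the transcode.
        return "job-0001"; // placeholder job identifier
    }

    @Override
    public JobStatus getStatus(String jobId) {
        // Here the adaptor would poll the vendor API; a fixed value keeps the sketch simple.
        return JobStatus.QUEUED;
    }
}
```

A cloud transcode service could implement the same interface, which is what allows the orchestration layer to treat on-premise and cloud capacity interchangeably.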
In a legacy system, the ingest job is delegated to an operator who configures an encoder, and then starts and stops the encoding at the appropriate times. The operator is functioning autonomously during the processes of the job. These concepts of delegation and autonomy are key to the SOA design philosophy. The encoding may well be automated as a computer process, but the principles remain the same.
Because the service is abstracted, it opens the way for broadcasters to leverage cloud services more easily. As an example, at times of peak transcode demand, a cloud transcode service could be used to supplement in-house resources. With a standard service interface for transcoding, the implementation can be an on-premise or cloud-based service. The operation of the services is orchestrated by a layer of middleware, software that manages business processes according to the needs of the business.
A transform service can be used for different business processes. For example, a transcoder could be used to transform files at ingest to the house codec or used to create multiple versions of content for multi-platform delivery. The transform services can be redeployed to different departments as the needs of the file traffic change from hour to hour.
Planning for a SOA
Migrating from traditional tightly coupled systems to use SOA principles is a big step for a media business. The efficient operation of SOA requires detailed analysis of business needs and definition of services. It also requires rigorous planning of the IT infrastructure, computers and networks for efficient operation of the services. Without the involvement of senior management down to IT services, the benefits of SOA are unlikely to be fully realized.
For broadcasters used to running departmental silos, many with real-time elements, the move to SOA will be a radical change to the way the business operates. However, the advantages of the SOA and allied systems like BPM are proving attractive propositions for the broadcaster — or service provider — running complex file-based operations for multi-platform delivery.
The problems facing a media company looking to embrace SOA and BPM include change management and the sheer challenge of staying on-air through such huge changes in the technical infrastructure supporting broadcast operations.
Many media companies have embraced the architecture, with early adopters undertaking considerable original development of components such as service adapters, the vital link between a service like transcoding and the workflow orchestration middleware.
The use of consultants or internal software services to build a media SOA will achieve the goal, but does it make sense for all media businesses to go their own way?
It was this issue on which the Advanced Media Workflow Association (AMWA) and the European Broadcasting Union (EBU) independently agreed when, in 2010, they decided to pool resources and set up the joint Framework for Interoperable Media Services (FIMS) Project, which would develop standards for a framework to implement a media-friendly SOA.
The road will be long, and many obstacles remain to be resolved, but the success of this project will benefit both vendors and media companies in the long run.
The FIMS solution aims to provide a flexible and cost-effective solution that is reliable and future-proof. It should allow best-of-breed content processing products to be integrated with media business systems.
The FIMS team released V1.0 in 2012 as an EBU specification, Tech 3356. Three service interfaces have been specified: transform, transfer and capture.
The FIMS Project has expanded on the conventional SOA with additional features to meet the needs of media operations. Specifically FIMS adds asynchronous operation, resource management, a media bus and security.
Asynchronous operation allows for long-running services. A transcode may take hours; conventional SOA implementations typically assume processes that complete in seconds or minutes.
Although services are loosely coupled to the orchestration, jobs can still be run with time constraints. This may be simply to start a job at a certain time, but services can also be real time, like the capture and playout of streams. In these cases, the job requests for the service will also include start and stop times for the process. For playout, this concept is no different from a playlist or schedule.
SOA typically is based on an Enterprise Service Bus (ESB) that carries XML messages between service providers and consumers. A media bus provides a parallel bus to the ESB to carry the large media essence files. Many file-based operations will already have media IP networks that can be adapted to provide the platform for the media bus, as shown in Figure 2.
Software methodologies like SOA and BPM help media businesses manage file-based operations more efficiently and better serve the needs of multi-platform delivery. They provide a holistic approach to running business operations, with better visibility and simpler ways to leverage cloud services.
They have proved successful in other sectors and are ready to meet the unique needs of the media sector.
By David Austerberry, Broadcast Engineering
In the OS landscape, Android is by far the most widely used for tablets and mobile devices. With over 900 million activated users and approximately 70% market share in the space, the platform is not only the most popular, it is also the most fragmented in terms of OEMs and OS versions.
Android History and Origins
Android is a Java-based operating system used for mobile phones and tablet computers. It was initially developed by Android Inc., with financial backing from Google, which acquired the company in 2005. Google announced its mobile plans for Android in 2007, and the first iteration of Android hit the shelves in 2008. Android is the world's most widely distributed and installed mobile OS. However, in terms of application usage and video consumption, iOS devices lead the way; a consistent user experience and standardized video playback are two of the reasons for this.
Performance of Video on Android
When running video on Android devices, the experience varies from OEM to OS version to media source. Because of this lack of standardization with video, we wanted to give an overall look at the top mobile devices running Android to determine how video is delivered and how it performs across a few key sites and platforms.
We tested the following top devices running the most used versions of Android (2.3, 4.0, 4.1 or 4.2):
- Google Nexus 7
- Google Nexus 4
- Samsung Galaxy S4
- HTC One
- Samsung Galaxy S II
- HTC EVO 4G
On the devices running older versions of Android, the experience was inconsistent across sites and the video performance wasn't as strong. Some of the top video sites offered the choice of displaying video either in a dedicated player or in the web browser, and some had very poor viewing capabilities for video.
A sample look at the different variations of video transfer and display on the Android devices is below:
Open Source on Android
Because Android is open source, Google releases the code under the Apache License. For this reason, every OEM modifies the open source code for its devices.
OEMs create their own code and specifications for Android for each device. This makes any standardization very difficult: when testing different versions of Android on different target devices, there are a lot of inconsistencies.
Google regularly releases updates for Android, which further complicates things. End users often do not upgrade, either because they don't know how, or because their device does not support the new release. This scattered adoption of updates further undermines any efforts at standardization.
Two of the largest and most widely used Android OEMs both released their latest open source code earlier this year. For the Samsung Galaxy code, please click here. For the HTC One, click here.
Versions of Android
In 2008 Android v1.0 was released to consumers. Starting in 2009, Android started using dessert and confection code names which were released in alphabetical order: Cupcake, Donut, Eclair, Froyo, Gingerbread, Honeycomb, Ice Cream Sandwich, and the latest, Jelly Bean.
A historical look at the Android devices is below:
In terms of market share, Gingerbread remains the most popular.
Top Android Devices per OS version
The top devices running Android in terms of both sales and popularity come from various OEMs, with the majority from Samsung, HTC, LG and Asus. A few of the top devices from the most widely used Android OS versions are as follows:
DRM Content on Android
Android offers a DRM framework for all devices running Android 3.0 and higher. Along with that framework, Google offers consistent DRM for all devices through its Widevine DRM (free on all compatible Android devices), which is built on top of the framework. On all devices running 3.0 and higher, the Widevine plug-in is integrated with the Android DRM framework to protect content and credentials. However, the level of content protection depends on the OEM device capabilities. The plug-in provides licensing, safe distribution and protected playback of media content.
The image below shows how the framework and Widevine work together.
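As a rough illustration (not the Widevine plug-in API itself), the sketch below uses the Android DRM framework's DrmManagerClient to list which DRM engines the OEM has integrated on a given device; whether Widevine appears in that list depends on the device, as noted above.

```java
// Sketch: query the Android DRM framework (API level 11+) for the DRM plug-ins on this device.
// Whether "Widevine" shows up depends on what the OEM integrated; this is a capability check only.
import android.app.Activity;
import android.drm.DrmManagerClient;
import android.os.Bundle;
import android.util.Log;

public class DrmCapabilityActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        DrmManagerClient drmClient = new DrmManagerClient(this);

        // Lists the DRM engines (e.g. Widevine) registered with the framework on this device.
        for (String engine : drmClient.getAvailableDrmEngines()) {
            Log.i("DrmCheck", "DRM engine available: " + engine);
        }
    }
}
```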
Closed Captions on Android
As developers know, closed captioning is not a “feature” of video that can simply be switched on. There are a number of formats, standards and approaches, and it is especially challenging for multiscreen publishers. On Android devices, closed captioning varies from app to app. However, any device using Jelly Bean 4.1 or higher can use the platform media player, which supports internal and external subtitles. Click here for more information.
For any device using the Gingerbread version or lower, which has no support for rendering subtitles, you can either add subtitle support yourself or integrate a third-party solution.
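As a minimal sketch of the Jelly Bean (4.1+) path mentioned above, the code below attaches an external SubRip file to a MediaPlayer with addTimedTextSource and listens for cues; the file path is a placeholder, and rendering the text on screen is left to the app.

```java
// Sketch: attach an external .srt subtitle track to a MediaPlayer (API level 16+).
// The subtitle path is a placeholder; drawing the cue text on screen is up to the app.
import android.media.MediaPlayer;
import android.media.MediaPlayer.TrackInfo;
import android.media.TimedText;
import android.util.Log;
import java.io.IOException;

public final class SubtitleSupport {

    public static void attachSubtitles(MediaPlayer player) throws IOException {
        // Add the external SubRip file as a timed-text track.
        player.addTimedTextSource("/sdcard/Movies/sample.srt",
                MediaPlayer.MEDIA_MIMETYPE_TEXT_SUBRIP);

        // Find the timed-text track that was just added and select it.
        TrackInfo[] tracks = player.getTrackInfo();
        for (int i = 0; i < tracks.length; i++) {
            if (tracks[i].getTrackType() == TrackInfo.MEDIA_TRACK_TYPE_TIMEDTEXT) {
                player.selectTrack(i);
                break;
            }
        }

        // Receive each cue as it becomes active.
        player.setOnTimedTextListener(new MediaPlayer.OnTimedTextListener() {
            @Override
            public void onTimedText(MediaPlayer mp, TimedText text) {
                Log.d("Subtitles", text != null ? text.getText() : "");
            }
        });
    }
}
```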
Most larger broadcasters pushing content to OTT devices now serve closed captioning on Android (Hulu Plus, HBO GO, and Max Go to name a few).
Does Android support HLS?
Android has limited support for HLS (Apple's HTTP Live Streaming protocol), and device support is not the same from one version or one device to the next. Android devices before 4.x (Gingerbread or Honeycomb) do not reliably support HLS. Android tried to support HLS with Android 3.0, but excessive buffering often caused streams to crash. Devices running Android 4.x and above will support HLS, but there are still inconsistencies and problems.
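For a baseline sketch, assuming a hypothetical .m3u8 URL, the stock VideoView on Android 4.x and higher can be handed an HLS manifest directly and the platform player handles the variant switching; given the inconsistencies noted above, older devices will still need progressive MP4 or a third-party player.

```java
// Sketch: play an HLS stream with the platform player via VideoView (Android 4.x+).
// The manifest URL is a placeholder; the platform handles bitrate switching internally.
import android.app.Activity;
import android.net.Uri;
import android.os.Bundle;
import android.widget.MediaController;
import android.widget.VideoView;

public class HlsPlaybackActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        VideoView videoView = new VideoView(this);
        setContentView(videoView);

        videoView.setVideoURI(Uri.parse("http://example.com/stream/master.m3u8")); // placeholder URL
        videoView.setMediaController(new MediaController(this));
        videoView.start();
    }
}
```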
Best Practices for Video on Android
For deploying video on Android, there are several suggested specifications to follow. Below is a list of file formats supported by Android devices. Developers can also use the media codecs provided by any Android-powered device, or additional media codecs developed by third-party companies. If you want to play videos on Android, find a multi-format video player or convert videos to Android-compatible formats using an encoding service.
Video Specifications for Android
Below are the recommended encoding parameters for Android video from the Android developer homepage. Any video encoded with these parameters should be playable on Android phones.
Video Encoding Recommendations
The table below lists examples of video encoding profiles and parameters that the Android media framework supports for playback. In addition to these encoding parameter recommendations, a device's available video recording profiles can be used as a proxy for its media playback capabilities. These profiles can be inspected using the CamcorderProfile class, which has been available since API level 8.
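As a short sketch of that inspection, the snippet below reads the device's high-quality recording profile via CamcorderProfile and logs its resolution, frame rate and bitrates as a rough proxy for what the device can play back.

```java
// Sketch: inspect the device's high-quality recording profile (CamcorderProfile, API level 8+)
// as a rough proxy for its media playback capabilities.
import android.media.CamcorderProfile;
import android.util.Log;

public final class ProfileInspector {

    public static void logHighQualityProfile() {
        CamcorderProfile profile = CamcorderProfile.get(CamcorderProfile.QUALITY_HIGH);

        Log.i("ProfileInspector", "Video: " + profile.videoFrameWidth + "x" + profile.videoFrameHeight
                + " @ " + profile.videoFrameRate + " fps, " + profile.videoBitRate + " bps, codec id "
                + profile.videoCodec);
        Log.i("ProfileInspector", "Audio: " + profile.audioChannels + " channel(s), "
                + profile.audioSampleRate + " Hz, " + profile.audioBitRate + " bps");
    }
}
```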
For video content that is streamed over HTTP or RTSP, there are additional requirements:
- For 3GPP and MPEG-4 containers, the moov atom must precede any mdat atoms, but must follow the ftyp atom (a rough check of this ordering is sketched after this list).
- For 3GPP, MPEG-4, and WebM containers, audio and video samples corresponding to the same time offset may be no more than 500 KB apart. To minimize this audio/video drift, consider interleaving audio and video in smaller chunk sizes.
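The box-order requirement can be checked programmatically. Below is a rough, hypothetical utility (not part of the Android SDK) that walks the top-level boxes of an MP4/3GPP file and reports whether moov precedes mdat; for brevity it assumes 32-bit box sizes and ignores the rare 64-bit extended-size form.

```java
// Sketch: walk the top-level MP4/3GPP boxes and check that 'moov' comes before 'mdat'.
// Assumes 32-bit box sizes; extended (64-bit) sizes and zero-length final boxes are not handled.
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.FileInputStream;
import java.io.IOException;

public final class MoovOrderCheck {

    /** Returns true if the top-level 'moov' box appears before the first 'mdat' box. */
    public static boolean moovPrecedesMdat(String path) throws IOException {
        try (DataInputStream in = new DataInputStream(new FileInputStream(path))) {
            while (true) {
                long size = in.readInt() & 0xFFFFFFFFL;     // 32-bit box size, big endian
                byte[] type = new byte[4];
                in.readFully(type);                         // four-character box type
                String boxType = new String(type, "US-ASCII");

                if ("moov".equals(boxType)) return true;    // index first: fine for progressive playback
                if ("mdat".equals(boxType)) return false;   // media data first: remultiplex the file

                long remaining = size - 8;                  // skip the rest of this box
                while (remaining > 0) {
                    long skipped = in.skip(remaining);
                    if (skipped <= 0) return false;         // truncated or unreadable file
                    remaining -= skipped;
                }
            }
        } catch (EOFException endOfFile) {
            return false;                                   // end of file without finding either box
        }
    }
}
```

Files that fail this check can usually be made streamable by remultiplexing them so the index is written at the front of the file.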
Cameras fitted with a new revolutionary sensor will soon be able to take clear and sharp photos in dim conditions, thanks to a new image sensor invented at Nanyang Technological University (NTU).
The new sensor, made from graphene, is believed to be the first able to detect broad-spectrum light, from the visible to the mid-infrared, with high photoresponse, or sensitivity. This means it is suitable for use in all types of cameras, including infrared cameras, traffic speed cameras, satellite imaging and more.
Not only is the graphene sensor 1,000 times more sensitive to light than current low-cost imaging sensors found in today's compact cameras, it also uses 10 times less energy as it operates at lower voltages. When mass produced, graphene sensors are estimated to cost at least five times less.
Graphene is a million times thinner than the thickest human hair (only one atom thick) and is made of pure carbon atoms arranged in a honeycomb structure. It is known for its high electrical conductivity, among other properties such as durability and flexibility.
The inventor of the graphene sensor, Assistant Professor Wang Qijie, from NTU's School of Electrical & Electronic Engineering, said it is believed to be the first time that a broad-spectrum, highly photosensitive sensor has been developed using pure graphene.
His breakthrough, made by fabricating a graphene sheet into novel nano structures, was published in Nature Communications, a highly-rated research journal.
“We have shown that it is now possible to create cheap, sensitive and flexible photo sensors from graphene alone. We expect our innovation will have great impact not only on the consumer imaging industry, but also in satellite imaging and communication industries, as well as the mid-infrared applications,” said Asst Prof Wang, who also holds a joint appointment in NTU’s School of Physical and Mathematical Sciences.
“While designing this sensor, we have kept current manufacturing practices in mind. This means the industry can in principle continue producing camera sensors using the CMOS (complementary metal-oxide-semiconductor) process, which is the prevailing technology used by the majority of factories in the electronics industry. Therefore manufacturers can easily replace the current base material of photo sensors with our new nano-structured graphene material.”
If adopted by industry, Asst Prof Wang expects the cost of manufacturing imaging sensors to fall, eventually leading to cheaper cameras with longer battery life.
How the Graphene Nanostructure Works
Asst Prof Wang came up with an innovative idea to create nanostructures on graphene which will “trap” light-generated electron particles for a much longer time, resulting in a much stronger electric signal. Such electric signals can then be processed into an image, such as a photograph captured by a digital camera.
The trapping of electrons is the key to achieving high photoresponse in graphene, which makes it far more effective than normal CMOS or CCD (Charge-Coupled Device) image sensors, said Asst Prof Wang. Essentially, the stronger the electric signals generated, the clearer and sharper the photos.
“The performance of our graphene sensor can be further improved, such as the response speed, through nanostructure engineering of graphene, and preliminary results already verified the feasibility of our concept,” Asst Prof Wang added.
This research, costing about $200,000, is funded by the Nanyang Assistant Professorship start-up grant and supported partially by the Ministry of Education Tier 2 and 3 research grants.
Development of this sensor took Asst Prof Wang a total of 2 years to complete. His team consisted of two research fellows, Dr Zhang Yongzhe and Dr Li Xiaohui, and four doctoral students Liu Tao, Meng Bo, Liang Guozhen and Hu Xiaonan, from EEE, NTU. Two undergraduate students were also involved in this ground-breaking work.
Asst Prof Wang has filed a patent through NTU’s Nanyang Innovation and Enterprise Office for his invention. The next step is to work with industry collaborators to develop the graphene sensor into a commercial product.
Source: Nanyang Technological University
Tuesday, July 23, 2013
KaleidoCamera is developed by Alkhazur Manakov of Saarland University in Saarbrücken, Germany, and his colleagues. It attaches directly to the front of a normal digital SLR camera, and the camera's detachable lens is then fixed to the front of the KaleidoCamera.
After light passes through the lens, it enters the KaleidoCamera, which splits it into nine image beams according to the angle at which the light arrives. Each beam is filtered, before mirrors direct them onto the camera's sensor in a grid of separate images, which can be recombined however the photographer wishes.
This set-up allows users to have far more control over what type of light reaches the camera's sensor. Each filter could allow a single colour through, for example, then colours can be selected and recombined at will after the shot is taken, using software. Similarly, swapping in filters that mimic different aperture settings allows users to compose other-worldly images with high dynamic range in a single shot.
And because light beams are split up by the angle at which they arrive, each one contains information about how far objects in a scene are from the camera. With a slight tweak to its set-up, the prototype KaleidoCamera can capture this information, allowing photographers to refocus images after the photo has been taken.
Roarke Horstmeyer at the California Institute of Technology in Pasadena says the device could make digital SLR photos useful for a range of visual tasks that are normally difficult for computers, like distinguishing fresh fruit from rotten, or picking out objects from a similarly coloured background. "These sorts of tasks are essentially impossible when applying computer vision to conventional photos," says Horstmeyer.
The ability to focus images after taking them is already commercially available in the Lytro – a camera designed solely for that purpose. But while Lytro is a stand-alone device which costs roughly the same as an entry-level digital SLR, KaleidoCamera's inventors plan to turn their prototype into an add-on for any SLR camera.
Manakov will present the paper at the SIGGRAPH conference in Anaheim, California, this month. He says the team is working on miniaturising it, and that much of the prototype's current bulk simply makes it easier for the researchers to tweak it for new experiments.
"A considerable engineering effort will be required to downsize the add-on and increase image quality and effective resolution," says Yosuke Bando, a visiting scientist at the MIT Media Lab. "But it has potential to lead to exchangeable SLR lenses and cellphone add-ons."
In fact, there are already developments to bring post-snap refocusing to smartphone cameras, with California-based start-up Pelican aiming to release something next year.
"Being able to convert a standard digital SLR into a camera that captures multiple optical modes – and back again – could be a real game-changer," says Andrew Lumsdaine of Indiana University in Bloomington.
By Hal Hodson, New Scientist
Tuesday, July 23, 2013
Labels: Light Field Cameras