COGNITUS AI media and production suite technologies
COGNITUS technology relies on AI algorithms leveraging machine learning and deep learning in a number of key components, including enhancement of audience/user-generated footage to broadcast quality, semantic enrichment for efficient discovery of content by producers, and Quality of Experience (QoE) enhancement in video delivery. The video super-resolution module uses an end-to-end trainable convolutional neural network (CNN) with an hourglass structure, designed to up-scale mobile-uploaded video content to higher resolutions up to Ultra-High Definition (UHD). Because calls for crowd contributions could flood a broadcaster with UGC videos, COGNITUS also includes machine-learning-based tools that help production teams quickly find videos of particular interest to their programme making, using smart search supported by enriched metadata. Other AI algorithms in COGNITUS include QoE prediction and enhancement of video delivery using a deep CNN that takes into account compression distortion as well as transmission delays. The HEVC video encoder (the Turing codec) used in video delivery is also optimised with CNN-based rate-control algorithms that predict the distortion and the number of bits needed to encode a frame.
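The trained hourglass CNN itself is beyond a short sketch, but the up-scaling interface it implements can be illustrated with plain bilinear interpolation. This is a hedged stand-in: the super-resolution network replaces this fixed filter with learned filtering, and the function name and 2D-list frame format are illustrative assumptions, not the actual module's API.

```python
def upscale_bilinear(frame, factor=2):
    """Up-scale a 2D luma frame (list of rows) by `factor` using bilinear
    interpolation. A trained super-resolution CNN would replace this
    fixed interpolation with learned filtering."""
    h, w = len(frame), len(frame[0])
    H, W = h * factor, w * factor
    out = [[0.0] * W for _ in range(H)]
    for y in range(H):
        fy = min(y / factor, h - 1)          # source row coordinate
        y0 = int(fy)
        y1 = min(y0 + 1, h - 1)
        wy = fy - y0
        for x in range(W):
            fx = min(x / factor, w - 1)      # source column coordinate
            x0 = int(fx)
            x1 = min(x0 + 1, w - 1)
            wx = fx - x0
            top = frame[y0][x0] * (1 - wx) + frame[y0][x1] * wx
            bot = frame[y1][x0] * (1 - wx) + frame[y1][x1] * wx
            out[y][x] = top * (1 - wy) + bot * wy
    return out
```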
COGNITUS technical innovations have been recognised at the European level through the European Commission's Innovation Radar. See below for an overview of the key innovations:
Visual QoE Metrics
The volume of UGC and social-media video is enormous, but only a small portion is of high or broadcast quality. This is where the Visual Quality of Experience (QoE) metrics step in to assess quality automatically.
The Visual QoE metrics application measures the quality of media in terms of its visual aesthetics and attention. These metrics are closer to human perception of media than mean squared error (MSE) or peak signal-to-noise ratio (PSNR). Aesthetic and attention metrics are thus a fundamental tool for professionals to automatically filter out content of unacceptable quality.
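For reference, the classical PSNR baseline that the aesthetics/attention metrics aim to improve upon is easy to compute. A minimal sketch over flat pixel sequences (the aesthetic and attention metrics themselves are learned models and are not shown here):

```python
import math

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio between two equal-length pixel
    sequences. PSNR is built on MSE, so it correlates only loosely
    with human perception of quality."""
    mse = sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)
    if mse == 0:
        return math.inf  # identical signals
    return 10.0 * math.log10(max_val ** 2 / mse)
```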
UHD Content Adaptation
The COGNITUS content adaptation component enhances non-UHD content (including UGC and archived material) to create UHD-compliant content that can be used in broadcasting.
This component targets the characteristics of the standard UHD format and, after assessing the content quality, creates videos with high spatial resolution, high frame rate, high dynamic range and the BT.2020 colour space.
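The adaptation decision can be sketched as a checklist against a UHD target. This is a hypothetical illustration: the field names, the 3840x2160/50 fps thresholds as the target profile, and the step names are assumptions, not the actual component's interface.

```python
def adaptation_plan(clip):
    """Return the list of enhancement steps a clip needs to reach an
    illustrative UHD target: 3840x2160, >= 50 fps, HDR, BT.2020."""
    plan = []
    if clip["width"] < 3840 or clip["height"] < 2160:
        plan.append("super-resolve to 3840x2160")
    if clip["fps"] < 50:
        plan.append("frame-rate up-conversion to 50 fps")
    if not clip.get("hdr", False):
        plan.append("SDR-to-HDR expansion")
    if clip.get("colour_space") != "BT.2020":
        plan.append("convert colour space to BT.2020")
    return plan
```

An HD phone clip would need all four steps, while a clip already matching the target needs none.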
Immersive Sound
The immersive sound innovation aims to create a realistic and immersive reproduction of the original soundscape of an event captured via the devices of multiple spectators.
The immersive sound module uses audio features to detect and synchronise overlapping recordings of the same event, and performs audio de-noising and enhancement based on the overlapping content.
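The synchronisation step can be sketched as finding the lag that maximises the cross-correlation of two recordings. A minimal illustration only: the COGNITUS module operates on extracted audio features/fingerprints rather than raw samples, and `best_offset` is an assumed name.

```python
def best_offset(a, b, max_lag):
    """Estimate the lag (in samples) that best aligns recording `b`
    with recording `a`, by maximising their cross-correlation over
    lags in [-max_lag, max_lag]."""
    def corr(lag):
        pairs = [(a[i], b[i - lag]) for i in range(len(a))
                 if 0 <= i - lag < len(b)]
        return sum(x * y for x, y in pairs)
    return max(range(-max_lag, max_lag + 1), key=corr)
```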
Social Media Metadata
COGNITUS can utilise social media platforms to enrich metadata of UGC.
COGNITUS offers a metadata enrichment technique that queries social media and reads metadata from the results. The data is used for semantic enrichment, indexing and search, as well as media synchronisation, and therefore facilitates swift content discovery.
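The enrichment step can be sketched as merging metadata harvested from social-media query results into a clip's own record so that indexing and search see richer data. The field names (`tags`, `hashtags`, `timestamp`) and function name are illustrative assumptions.

```python
def enrich_metadata(clip_meta, social_results):
    """Merge hashtags (and a timestamp, if missing) from social-media
    query results into a copy of the clip's metadata record."""
    enriched = dict(clip_meta)  # do not mutate the caller's record
    tags = set(enriched.get("tags", []))
    for post in social_results:
        tags.update(post.get("hashtags", []))
        if "timestamp" in post and "timestamp" not in enriched:
            enriched["timestamp"] = post["timestamp"]
    enriched["tags"] = sorted(tags)
    return enriched
```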
User Reward Mechanism
COGNITUS offers a user reward mechanism to incentivise contributions.
COGNITUS provides incentives that motivate users of online communities to make high-quality contributions. This helps build a vibrant, sustainable community, which is critical for online communities that depend on user content.
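A points-based scheme is one simple way such a mechanism could work. This is purely hypothetical: the weights, formula, and the idea of tying points to an automatic quality score are illustrative assumptions, not the documented COGNITUS mechanism.

```python
def reward_points(quality, used_in_broadcast, base=10):
    """Hypothetical reward: scale a base award by an automatic quality
    score in [0, 1], with a bonus when producers use the clip."""
    points = base * quality
    if used_in_broadcast:
        points += 2 * base  # bonus for on-air use
    return round(points, 2)
```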
COGNITUS Video Codec
COGNITUS offers a high-efficiency video codec designed to provide a high quality of experience to users under a variety of conditions.
It supports efficient compression of Ultra-High Definition (UHD), High Dynamic Range (HDR) and User Generated Content (UGC), with encoding complexity low enough for distribution and streaming.
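At its simplest, the rate-control idea mentioned above amounts to sharing a bit budget among frames in proportion to their predicted coding complexity. A minimal sketch under that assumption: in COGNITUS the per-frame complexity/bit predictions come from a CNN, whereas here they are just inputs.

```python
def allocate_bits(budget, complexities):
    """Split a total bit budget across frames in proportion to each
    frame's predicted coding complexity."""
    total = sum(complexities)
    return [budget * c / total for c in complexities]
```

For example, a frame predicted to be twice as hard to encode receives twice the bits.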