Architecture and Workflows

Functional architecture

The COGNITUS platform is built on a service-oriented architecture consisting of several video and audio enhancement components, semantic analysis and quality assessment tools, and flow management and storage solutions. The heterogeneous nature of these components (in terms of implementation technologies, inter-communication, timing, etc.) and the need for high efficiency led to a modular, decoupled architecture that scales easily to meet the high demands of uploading and enhancing the contributed User Generated Content (UGC). The components of the COGNITUS platform can be logically grouped into five functional categories: (i) Core components, (ii) Data management, (iii) Processing, (iv) Content streaming, and (v) User applications.

COGNITUS user applications

A basic objective of COGNITUS is to enable the voluntary contribution of User Generated Videos (UGV) captured by users attending public events (e.g., festivals, sports games, etc.). The gathered content is enhanced and transposed to UHD video, which is then made available to professionals. Using the COGNITUS platform, professionals can find and combine the high-quality versions of the available UGV into a single plot related to an event, and then make it available for broadcasting. COGNITUS users upload their videos to the platform using the mobile application that has been implemented. For each UGV uploaded to the platform, a processing, editing, and delivery workflow is triggered to create new professional content, enhance the video, and deliver it back to end-users through a variety of devices, such as set-top boxes (STB), tablets, or mobile phones. The COGNITUS applications are significant for the project, since they constitute the basic interface between the users and the platform.


COGNITUS core components

The architecture of the COGNITUS platform follows a service-oriented model, with services (COGNITUS components) as its fundamental elements. This approach requires core functionality that provides the necessary coordination, monitoring, conformance, and Quality of Service (QoS) assurance for the system. Additionally, the heterogeneous nature (in terms of runtime environment and implementation language) of the components that make up the platform calls for a decoupled architectural model and service-oriented management. Communication between the components should occur seamlessly and efficiently via a common communication channel. The coordination part of the COGNITUS architecture addresses these concerns by introducing a set of fundamental components that form the command and control centre of the platform.

Data management

The data management sector of the architecture involves components responsible for hosting and managing the entire corpus of the COGNITUS content, such as multimedia items and additional metadata. Data management consists of two COGNITUS components: The Social media metadata repository, which hosts metadata originating from social platforms, and the COGNITUS repository, which hosts the multimedia items.

Content streaming

The content streaming part of the COGNITUS architecture refers to the components which are responsible for streaming high-quality UHD content that will be available to the platform. The content streaming adapts to the device capabilities and network bandwidth of the user.

Processing components

This part of the COGNITUS architecture includes the components responsible for processing content. Processing components are categorised according to their functionality into Content enhancement, Quality control, Content delivery, and Semantic analysis.


Communication & Orchestration

The modular architecture of COGNITUS distributes functionality among a number of services, each with particular responsibilities. This results in an organisational context that is more agile and allows faster development and efficient performance. Following the service orchestration paradigm of Service-oriented Architecture (SoA), a single entity (here named the ‘coordinator’) is responsible for invoking and combining the services, much like a conductor managing an orchestra.

Therefore, the COGNITUS coordinator is introduced as a core component responsible for coordinating the interoperation of the other services. Specifically, the coordinator asynchronously triggers, in an event-based manner, the execution of the COGNITUS components, one by one or in parallel, so that their individual tasks collectively accomplish a system-wide task. Communication between the coordinator and the service components is achieved by utilising a message broker for task dispatching. The event-based nature of this communication enables fast distribution of tasks while keeping the coordinator highly efficient.

The message broker is responsible for creating and maintaining queues that support message delivery by implementing a queuing protocol, such as the Advanced Message Queuing Protocol (AMQP). Accordingly, a COGNITUS component can communicate asynchronously with any other component by sending and receiving messages through a queue, simply by implementing this messaging protocol. Message exchange thus becomes technology-agnostic, as the producer of a message does not need any knowledge of the consumer's implementation details.

In addition to this decoupling, the use of queues supports the scalability, reliability, and efficiency of the platform. New instances of processing components can be deployed on demand and start accomplishing tasks simply by registering with the respective queue. If an instance of a service worker fails while completing a given task, the broker returns the unacknowledged task to the queue so that it can be consumed by another service worker.
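The acknowledgement-and-requeue semantics described above can be sketched in a few lines of Python. This is a toy, in-process simulation of an AMQP-style work queue (not the COGNITUS implementation, which uses a real broker): a delivered task is tracked until the worker acknowledges it, and a failed worker's task goes back on the queue for the next worker.

```python
import queue

class WorkQueue:
    """Toy AMQP-style work queue with explicit acknowledgements (illustrative only)."""

    def __init__(self):
        self._pending = queue.Queue()   # tasks waiting for a worker
        self._unacked = {}              # delivered but not yet acknowledged tasks

    def publish(self, task_id, payload):
        self._pending.put((task_id, payload))

    def deliver(self):
        """Hand the next task to a worker; it stays tracked until acked."""
        task_id, payload = self._pending.get()
        self._unacked[task_id] = payload
        return task_id, payload

    def ack(self, task_id):
        """The worker completed the task successfully."""
        del self._unacked[task_id]

    def nack(self, task_id):
        """The worker failed: return the task to the queue for another worker."""
        self._pending.put((task_id, self._unacked.pop(task_id)))

broker = WorkQueue()
broker.publish("t1", "enhance video 42")

# Worker A picks up the task but crashes before acknowledging it.
task_id, _ = broker.deliver()
broker.nack(task_id)

# Worker B receives the same task and completes it.
task_id, payload = broker.deliver()
broker.ack(task_id)
print(payload)  # -> enhance video 42
```

In a real broker such as RabbitMQ, this requeueing happens automatically when a consumer's connection drops before the message is acknowledged.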

For COGNITUS, RabbitMQ, a mature, open-source solution offering client libraries for almost any programming language, serves as the message broker. RabbitMQ implements the AMQP protocol, providing a common platform for the message-driven communication of the services. It also provides the work queues required for task dispatching in the context of COGNITUS.


COGNITUS is being created as an exploitation-ready product that can be easily deployed in a production environment. To this end, several architectural aspects have been considered:

Component isolation and self-containment

Since many COGNITUS components need to work simultaneously, performing critical processing to meet the goals of the platform, it is important to ensure that all of these components work reliably without being disrupted by one another. At the functional-architecture level, this has been addressed through the COGNITUS coordinator and the message broker (RabbitMQ), which distribute and dispatch work to the relevant component instances, each running in isolation from the others. However, isolation must also be addressed at the deployment level, to ensure that a runtime malfunction in one component does not affect other components and that resources continue to be shared and managed efficiently.

In addition, COGNITUS is a platform consisting of multiple heterogeneous components, each undertaking a small piece of work towards fulfilling the goals of the project. Deploying such a platform is challenging, as each component requires a specific environment and specific libraries to run. To eliminate such constraints, the design of the deployment architecture has focused on service isolation and self-containment. Under this approach, any component can be deployed with the environment and configuration that best fit its needs, without software dependency conflicts with other services.

Containerisation, an approach that has recently gained wide adoption, addresses exactly these issues. A container packages an entire runtime environment: an application, its dependencies (libraries and executables), and its configuration files. Containers differ from virtual machines in how they share system resources. Specifically, each container shares the operating system kernel with the other containers; the shared parts of the operating system are read-only, while each container has its own running space. Containers therefore require far fewer resources than virtual machines, making them more lightweight and portable. Containers can be hosted on a physical machine, a virtual machine, or a cloud host.

Alternatives for software deployment with focus on isolation

Currently, many solutions automate the handling of containers, turning the management and deployment of software into a trivial task. In the context of COGNITUS, Docker, one of the most prominent of these solutions, has been employed. Docker is an open-source project providing tools that support developers in creating software and sharing their development environment, as well as system administrators in deploying software.

Following this approach, every COGNITUS component specifies its working environment, including the OS and the libraries/dependencies required at runtime, in a specification file called a Dockerfile. The instructions/commands needed to run the application are specified in this file too.
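As an illustration, a Dockerfile for a hypothetical Python-based processing component might look as follows; the base image, component name, and entry point are assumptions for the sketch, not taken from the COGNITUS codebase:

```dockerfile
# Hypothetical Dockerfile for an illustrative processing component.
FROM python:3.9-slim

# Install the component's library dependencies inside the image,
# isolating them from every other COGNITUS component.
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the component's source code and declare how it is started.
COPY . .
CMD ["python", "worker.py"]
```

Building this file (`docker build -t component-image .`) yields a self-contained image that runs identically on any Docker host, which is precisely the isolation and self-containment property described above.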


Scalability

As mentioned above, enabling the COGNITUS platform to scale has been partially addressed by the proposed modular functional architecture. More specifically, the system is designed to be stateless, allowing component instances to be added or removed at runtime in a plug-and-play manner. To achieve this at the physical level, an automation tool enabling the horizontal scaling of the containers forming the COGNITUS platform was also incorporated. In particular, COGNITUS utilises Docker's Swarm mode, which offers command-line tools for clustering and scheduling Docker containers. This means that the platform can easily be scaled up or down according to the current load.
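The Swarm-mode workflow just described boils down to a handful of commands. The service and image names below are hypothetical; only the Docker CLI itself is real:

```shell
# Turn this host into a Swarm manager.
docker swarm init

# Deploy a stateless processing component as a service with two instances.
docker service create --name enhancer --replicas 2 cognitus/enhancer:latest

# Scale horizontally according to the current load.
docker service scale enhancer=5   # scale up
docker service scale enhancer=2   # scale back down
```

Because the components are stateless, Swarm can add or remove instances at any time without any coordination beyond the workers registering with their message queue.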

High availability

Availability is a characteristic of service-oriented systems that describes the period of time during which a service is available. The term high availability expresses the importance of high operational performance over a given period of time. When building robust production systems, minimising service interruptions is often critical. Nevertheless, no matter how reliable the software is, problems can occur (e.g., hardware malfunctions) that can bring applications down. To create a truly reliable and highly available distributed system, two important features should be taken into account: load balancing and redundancy.

Load balancing refers to the distribution of the network or application traffic across a number of similar servers. It is widely used for optimizing resources, maximizing throughput, reducing latency, and ensuring fault-tolerant configurations. In COGNITUS, load balancers are used between all the end-user applications and the internal components/servers offering public end-points. The balancers, acting as reverse proxies, are responsible for suitably routing the incoming requests to the deployed instances of those internal application servers that will fulfil these requests.
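A minimal reverse-proxy configuration illustrates this routing pattern. The example below uses nginx purely for illustration (the document does not state which balancer COGNITUS employs), and the upstream host names are hypothetical:

```nginx
# Hypothetical load-balancer configuration; server names are illustrative.
upstream app_servers {
    least_conn;                      # route each request to the least-busy instance
    server app1.internal:8080;
    server app2.internal:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://app_servers;   # forward incoming requests to the pool
    }
}
```

Adding a newly deployed instance to the `upstream` block (or discovering it automatically) is all that is needed to spread the load across it.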

Redundancy refers to running multiple instances (replicas) of a service at any given time. To support redundancy, a replication controller was incorporated into the architecture to start up or kill instances of COGNITUS components according to the current load. Accordingly, instances are automatically replaced if they fail, are deleted, or are terminated. The replication controller is a feature of Docker Swarm mode, the container management solution utilised.
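In Swarm mode, the desired number of replicas and the replacement-on-failure behaviour are declared in the stack file. The fragment below is a sketch with a hypothetical service and image name:

```yaml
# Hypothetical Docker stack file fragment (deployed with `docker stack deploy`).
version: "3.8"
services:
  enhancer:
    image: cognitus/enhancer:latest
    deploy:
      replicas: 3                 # keep three instances running at all times
      restart_policy:
        condition: on-failure     # replace an instance automatically if it fails
```

Swarm continuously reconciles the running state against this declaration, which is exactly the replication-controller behaviour described above.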

Continuous Integration (CI) & automated builds

The Docker images produced during the development of the COGNITUS components need to be updated each time new commits to the respective source code are made. Thus, the developers need to re-build the images each time they add new features to their components. Additionally, each time a new feature is committed, the component should be checked against build or runtime errors that might be introduced.

To this end, COGNITUS follows the Continuous Integration (CI) development practice: the build and testing of code are automated every time a team member commits changes to version control. Specifically, COGNITUS utilises the CI tools offered by GitLab. Using a specification file (.gitlab-ci.yml), developers define the automated steps (stages) executed to build the project, run the unit and integration tests, and finally deploy the component.
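A .gitlab-ci.yml for such a pipeline might be structured as follows. The stage names, test command, and deployment step are assumptions for the sketch (not the project's actual pipeline); `$CI_REGISTRY_IMAGE` and `$CI_COMMIT_SHA` are GitLab's predefined CI variables:

```yaml
# Illustrative .gitlab-ci.yml; job contents are assumptions.
stages:
  - build
  - test
  - deploy

build-image:
  stage: build
  script:
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA

run-tests:
  stage: test
  script:
    - docker run --rm $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA pytest

deploy-service:
  stage: deploy
  script:
    - docker service update --image $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA enhancer
  only:
    - master
```

Each commit thus automatically produces a fresh image, checks it for build or runtime errors, and rolls it out, which is the workflow described above.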


COGNITUS is focused on content. Users upload videos while at an event, the content is enhanced by the COGNITUS components (UHD, high frame rate, HDR, etc.), and it is finally delivered back to the users' devices via adaptive streams. Accordingly, four main workflows cover the most significant procedures in COGNITUS: (i) UGC enhancement, (ii) Event creation, (iii) Discovery of related content and synchronisation, and (iv) Content delivery.

The UGC enhancement workflow includes the components activated to analyse and assess the quality of the original content and to enhance it through several video and audio quality improvement steps. In parallel, a number of components analyse the content to produce relevant semantic metadata, based on visual aspects of the content or on metadata found on social media. The result of this processing is a high-quality, all-intra HEVC-encoded video at very high rates using the Turing codec, which professional producers use to create plots/clips based on the collected UGC.

UGC enhancement workflow

The event creation workflow includes a number of tasks responsible for semantically initializing an event. This process involves fetching relevant metadata from social platforms to be potentially correlated with new UGC arriving to the platform.

The discovery of event-related content and synchronisation workflow provides tasks that help professional producers create new plots/clips.

Finally, the content delivery workflow involves the tasks that produce the various versions of the content (high/low resolution, high/low frame rate, HDR/SDR, etc.), encode them (using the Turing codec), and create the DASH streams that enable adaptive streaming of the content to the users' devices.
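To make the DASH packaging step concrete, the command below sketches how a multi-representation DASH stream can be produced with ffmpeg. Note that this is only an illustration: the file names, bitrates, and resolutions are assumptions, and it uses ffmpeg's default encoders rather than the Turing codec employed by COGNITUS:

```shell
# Illustrative ffmpeg invocation producing a DASH manifest with two
# video representations (a UHD and an HD rendition) plus audio.
ffmpeg -i master.mp4 \
  -map 0:v -map 0:a -map 0:v -map 0:a \
  -b:v:0 8M -s:v:0 3840x2160 \
  -b:v:1 2M -s:v:1 1280x720 \
  -c:a aac \
  -f dash manifest.mpd
```

The resulting manifest lets the player on each device switch between representations according to its capabilities and the available bandwidth, which is the adaptive behaviour described above.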

Event creation, Discovery of related content & synchronisation and content delivery workflows