COGNITUS signal processing research event at IEEE MMSP 2017
COGNITUS is collaborating with the InVID project, funded by the European Commission under the Horizon 2020 programme, and the CrowdMic project, funded by the US National Science Foundation, to organise two special sessions at the IEEE Multimedia Signal Processing (MMSP) workshop, which also includes keynote talks on related topics. The event will take place in Luton (London, UK) on 16 – 18 October 2017. More information on the workshop is available on the official website.
COGNITUS-related activities at MMSP 2017 consist of a series of events, including two special sessions:
- “Enabling convergence of new forms of media”
- “Recent trends in multi-microphone signal processing”
and MMSP keynote talks related to COGNITUS topics:
- “Hybrid Log-Gamma and High Dynamic Range Television”, Katy Noland (BBC Research and Development Department, London, UK)
- “Crowdsourcing recordings: Challenges and new research problems”, Paris Smaragdis (University of Illinois at Urbana-Champaign, USA)
Authors are encouraged to submit papers to the special sessions directly via the MMSP 2017 website (more information on the special sessions, including descriptions of the covered topics, is available below). During the submission process, contributing authors should select the option for one of the special sessions. The call for papers is open until 8 June 2017.
Authors are also requested, where applicable, to include test results based on the two open-access data sets released by the COGNITUS project.
1. Enabling convergence of new forms of media:
With the wide availability of consumer products that provide high-quality visual experiences at home, as well as content capture on mobile devices, many vibrant new applications and services can be unleashed. Such products enable new forms of media, such as User Generated Content (UGC), to reach not only social media services but also broadcast programmes, which are typically characterised by high content quality.
This challenging media landscape is underpinned by numerous video processing technologies that support efficient media production and delivery. The aim of this special session is to present some of the most recent findings from the research community and to tackle the exciting new challenges of enabling this next generation of multimedia services. Topics covered include, but are not limited to:
- encoding and delivery of UGC and legacy content (fast and efficient video compression of UGC or legacy material, adaptive streaming at low-to-high bitrates, etc.);
- processing and enhancement of UGC and legacy content (upsampling, super-resolution, image denoising, dynamic range adaptation, etc.);
- assessment and analysis of UGC and legacy content.
Session organisers:
- Saverio Blasi (BBC Research and Development Department, UK)
- Farzad Toutounchi (Queen Mary University of London, UK)
- Vasileios Mezaris (Centre for Research and Technology Hellas – Information Technologies Institute, Greece)
- Marta Mrak (BBC Research and Development Department, UK)
2. Recent trends in multi-microphone signal processing:
Multi-microphone signal processing has received significant attention in the audio signal processing community for decades, for tasks such as speech enhancement, sound source separation, localisation, and audio scene analysis. Applications such as robust speech recognition and voice interfaces, hearing aids, voice communication, and spatial audio typically rely heavily on multi-microphone techniques. The recent success of voice interfaces in the Internet of Things (IoT) context, as well as the wide use of portable devices (smartphones, tablets) for recording events, calls for approaching this field from new perspectives, as well as revisiting conventional methods.
This special session focuses on UGC, as well as on new developments in well-established research areas of multi-microphone signal processing. We invite submissions in the area of UGC, encouraging innovative research on the automatic organisation of UGC, with a focus on methodologies to automatically discover, match, group and synchronise overlapping audio (or video) recordings of the same event, based primarily on the audio content but possibly also exploiting relevant metadata such as time stamps and GPS data.
Additionally, we invite complete or visionary papers in the area of collaborative processing of UGC audio recordings, in which, given a collection of overlapping user-generated audio streams, the articles propose strategies for combining the available content with the goal of improving the overall acoustic representation of the captured event.
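As a simple illustration of the audio-based synchronisation problem described above, the sketch below estimates the time offset between two overlapping recordings of the same event by locating the peak of their cross-correlation. This is a minimal, hypothetical example (function names and toy signals are our own, not part of any COGNITUS dataset or submission), not a definitive method:

```python
import numpy as np

def estimate_offset(ref, other, sr):
    """Estimate the time offset (seconds) of `other` relative to `ref`
    by locating the peak of their full cross-correlation."""
    corr = np.correlate(other, ref, mode="full")
    lag = np.argmax(corr) - (len(ref) - 1)  # lag in samples
    return lag / sr

# Toy example: the same 1-second noise "event" captured by two devices,
# with device B starting to record the event 0.5 s after device A.
sr = 8000
rng = np.random.default_rng(0)
event = rng.standard_normal(sr)                                # 1 s of event audio
ref = np.concatenate([event, np.zeros(sr)])                    # device A: event at t = 0
other = np.concatenate([np.zeros(sr // 2), event,
                        np.zeros(sr // 2)])                    # device B: event at t = 0.5
print(estimate_offset(ref, other, sr))  # prints 0.5
```

Real recordings would of course require more robust matching (e.g. correlating spectral features rather than raw waveforms), which is precisely the kind of contribution this session invites.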
Session organisers:
- Nikolaos Stefanakis (Foundation for Research and Technology Crete, Greece)
- Athanasios Mouchtaris (Foundation for Research and Technology Crete, Greece, and University of Crete, Greece)
- Paris Smaragdis (University of Illinois at Urbana-Champaign, USA)
For more information on this research event, contact Marta Mrak.