Multimodal (Audio, Facial and Gesture) based Emotion Recognition Challenge
People express emotions through different modalities. Integrating verbal and non-verbal communication channels creates a system in which the message is easier to understand. Expanding the focus to several expression forms can facilitate research on emotion recognition as well as on human-machine interaction. In this competition, the authors present a Polish emotional database composed of three modalities: facial expressions, body movement and gestures, and speech. The corpus contains recordings registered in studio conditions, acted out by 16 professional actors (8 male and 8 female). The data is labeled with the six basic emotion categories, according to Ekman's model.
The participants will have to analyze all three modalities and perform emotion recognition based on them. Participants must submit their code and all dependencies via CodaLab, and the organizers will run the submissions. The evaluation will be based on the average correct emotion recognition rate using each modality separately as well as all three modalities together. In case of equal performance, processing time will be used to break ties in the ranking. The training data will be provided first, followed by the validation dataset. The test data will be released last, without labels, and will be used for the evaluation of participants.
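The modality-averaged scoring described above can be sketched as follows. This is a minimal illustration: the modality names, the equal weighting, and the function interface are assumptions, not the organizers' official evaluation script.

```python
def challenge_score(y_true, preds_by_modality, y_pred_combined):
    """Average correct-recognition rate over each modality's predictions
    and the fused (all-modalities) prediction.

    Assumed interface: `preds_by_modality` maps a modality name (e.g.
    'audio', 'face', 'gesture') to its predicted labels; equal weighting
    of the per-modality and combined accuracies is an assumption.
    """
    def accuracy(pred):
        return sum(p == t for p, t in zip(pred, y_true)) / len(y_true)

    scores = [accuracy(p) for p in preds_by_modality.values()]
    scores.append(accuracy(y_pred_combined))
    return sum(scores) / len(scores)
```

Under this reading of the rules, two submissions with equal scores would then be ranked by processing time.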
List of organizers
- Dorota Kaminska and Tomasz Sapiński - Lodz University of Technology, Poland
- Kamal Nasrollahi - Aalborg University, Denmark
- Hasan Demirel - Eastern Mediterranean University, Turkey
- Cagri Ozcinar - Trinity College Dublin, Ireland
- Gholamreza Anbarjafari - iCV Lab, University of Tartu, Estonia
SIMAH (SocIaL Media And Harassment): First workshop on categorizing different types of online harassment languages in social media
The proposed competition focuses on online harassment on Twitter in English. It has two related tasks: the first is binary classification of online harassment tweets versus non-harassment tweets; the second is multi-class classification of online harassment tweets into three categories: "indirect harassment", "sexual harassment" and "physical harassment".
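Either task can be approached as a starting point with a simple bag-of-words classifier. The sketch below implements a minimal multinomial Naive Bayes in pure Python; the toy labels in the usage example are illustrative and do not reflect the official data format.

```python
import math
import re
from collections import Counter, defaultdict

def tokenize(text):
    """Lowercase word tokens; a deliberately simple tokenizer."""
    return re.findall(r"[a-z']+", text.lower())

class TinyNB:
    """Minimal multinomial Naive Bayes with add-one smoothing.
    A baseline sketch only, not the organizers' reference system."""

    def fit(self, texts, labels):
        self.class_counts = Counter(labels)
        self.word_counts = defaultdict(Counter)
        self.vocab = set()
        for text, y in zip(texts, labels):
            toks = tokenize(text)
            self.word_counts[y].update(toks)
            self.vocab.update(toks)
        return self

    def predict(self, text):
        toks = tokenize(text)
        total = sum(self.class_counts.values())
        best, best_lp = None, -math.inf
        for y, cy in self.class_counts.items():
            lp = math.log(cy / total)  # class prior
            denom = sum(self.word_counts[y].values()) + len(self.vocab)
            for t in toks:
                lp += math.log((self.word_counts[y][t] + 1) / denom)
            if lp > best_lp:
                best, best_lp = y, lp
        return best
```

The same classifier could serve both stages: first trained on harassment vs. not_harassment labels, then trained only on harassment tweets with the three fine-grained category labels.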
List of organizers
- Sima Sharifirad - Department of computer science, Dalhousie University, Halifax, Canada.
- Stan Matwin - Department of computer science, Dalhousie University, Halifax, Canada.
Correcting Transiting Exoplanet Light Curves for Stellar Spots
The field of exoplanet discovery and characterisation has been growing rapidly in the last decade. However, several big challenges remain, many of which could be addressed using machine learning and data mining methodology. For instance, the most successful method for detecting exoplanets, transit photometry – measuring the faint decrease in incoming stellar light as an exoplanet passes between the Earth and a target star – is very sensitive to the presence of stellar spots and faculae. The current approach is to identify the effects of spots visually and correct for them manually, or to discard the data. As a first step towards automating this process, we propose a regular competition on data generated by ArielSim, the simulator of the European Space Agency's upcoming Ariel mission, whose objective is to characterise the atmospheres of 1000 exoplanets. The data consist of pairs of light curves corrupted by stellar spots and the corresponding clean ones, along with auxiliary observation information. The goal is to correct the light curve for the presence of stellar spots (signal denoising). This is an as-yet unsolved problem in the community. Solving it will improve our understanding of the characteristics of currently confirmed exoplanets, potentially allow false positive/false negative detections to be recognised, and improve our ability to analyse new observations – primarily but not limited to those expected from Ariel – without the need to equip new telescopes with additional instruments and the extra costs this implies.
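For intuition, one very naive baseline for suppressing the short-lived bumps that spot crossings imprint on a transit light curve is a running median, which rejects narrow outliers while preserving the broader transit shape. This sketch is purely illustrative; a model learned from the ArielSim (corrupted, clean) pairs would be expected to do far better.

```python
import statistics

def running_median_filter(flux, window=5):
    """Naive spot-correction baseline: replace each flux sample by the
    median over a sliding window. Narrow spot-crossing bumps are rejected;
    the broader transit dip survives. Window size is an assumption."""
    half = window // 2
    n = len(flux)
    out = []
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        out.append(statistics.median(flux[lo:hi]))
    return out
```

A supervised approach would instead fit a denoiser mapping the corrupted curves to their clean counterparts, using the auxiliary observation information as extra features.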
List of organizers
- Nikolaos Nikolaou - UCL, England
- Ingo P. Waldmann - UCL, England
- Subhajit Sarkar - Cardiff University, Wales
- Angelos Tsiaras - UCL, England
- Billy Edwards - UCL, England
- Mario Morvan - UCL, England
- Kai Hou Yip - UCL, England
- Giovanna Tinetti - UCL, England
As part of the AutoDL challenges, the AutoCV2 challenge aims at finding fully automated solutions for classification tasks in computer vision. Compared to the recent AutoCV challenge, the AutoCV2 challenge targets not only image classification tasks, but also video classification tasks. Participants need to make code submissions containing machine learning code that is trained and tested on the CodaLab platform, without any human intervention, under time and memory limitations. All problems are multi-label classification problems coming from various domains. Raw data is provided, but formatted in a uniform manner, to encourage participants to submit generic algorithms.
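A code submission in this setting typically exposes a single entry point that the platform drives under its time budget. The sketch below follows the general shape of the AutoDL starting-kit convention (a `Model` class with budget-aware `train`/`test` methods), but the exact signatures and the `metadata` dictionary here are illustrative assumptions, not the official API.

```python
import time

class Model:
    """Illustrative code-submission entry point; not the official AutoDL API."""

    def __init__(self, metadata):
        # `metadata` is assumed to describe the task, e.g. the label count.
        self.metadata = metadata
        self.done_training = False

    def train(self, dataset, remaining_time_budget=None):
        """Train incrementally, stopping well before the budget is exhausted
        so the platform can interleave training and evaluation."""
        start = time.time()
        for example in dataset:
            if (remaining_time_budget is not None
                    and time.time() - start > 0.5 * remaining_time_budget):
                return  # yield control; the platform may call train() again
            pass  # placeholder for one incremental training step
        self.done_training = True

    def test(self, dataset, remaining_time_budget=None):
        # Multi-label output: one score per label for each example.
        return [[0.0] * self.metadata["num_labels"] for _ in dataset]
```

The incremental-training loop reflects the any-time evaluation style of these challenges: a submission that trains in small steps can be scored repeatedly within the time limit.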
List of organizers
- Sergio Escalera - U. of Barcelona / Computer Vision Center Barcelona, Spain
- Isabelle Guyon - ChaLearn, USA - Inria / Université Paris-Saclay, France
- Zhengying Liu - Inria / Université Paris-Saclay, France
- Wei-Wei Tu - 4Paradigm, China
The full list of organizers is:
AutoDL preparation team
- Université Paris-Saclay:
- University of Barcelona:
- ChaLearn directors involved in the project:
- ChaLearn collaborators:
- Volunteers and interns: