CONSTRAINT 2022

May 27, 2022
Second Workshop on Combating Online Hostile Posts in Regional Languages during Emergency Situation

Collocated with ACL 2022

-- Special Theme --
Multimodal Low-Resource Language Processing to Combat COVID-19 Related Online Hostile Content


NEWS

  • Dr. Andreas Vlachos will be our third keynote speaker!
  • Accepted papers have been announced. See the list of accepted papers below.
  • Dr. Smaranda Muresan and Dr. Isabelle Augenstein will be our keynote speakers!
  • Paper submission deadline extended. New deadline: March 8, 2022.
  • Follow us on Twitter to stay in touch.

ABOUT THE WORKSHOP

The increasing accessibility of the Internet has dramatically changed the way we consume information. The ease of social media usage not only encourages individuals to express their opinions freely (freedom of speech) but also provides content polluters with ecosystems in which to spread hostile posts (hate speech, fake news, cyberbullying, propaganda, etc.). Such hostile activities are expected to increase manifold during emergencies such as presidential elections and the spread of the COVID-19 pandemic. Most of these hostile posts are written in regional languages and can therefore easily evade online surveillance engines, which are mostly trained on posts written in resource-rich languages such as English and Chinese. As a result, regions such as Asia, Africa and South America, where low-resource regional languages are used for day-to-day communication, suffer from a lack of tools, benchmark datasets and learning techniques. Countries such as Italy and Spain, whose languages (pseudo-low-resource) are not as well equipped with sophisticated computational resources as English, may face the same issues.
Following the success of the first edition of CONSTRAINT (collocated with AAAI-21), the second edition will encourage researchers from interdisciplinary domains working on multilingual social media analytics to think beyond the conventional way of combating online hostile posts. The workshop will broadly focus on three major points:

  1. Regional language: The offensive posts under inspection may be written in low-resource regional languages (e.g., Tamil, Urdu, Bengali, Polish, Czech, Lithuanian).
  2. Emergency situation: Proposed solutions should be able to tackle misinformation during emergency situations, where the lack of sufficient historical data means learning models must incorporate additional intelligence to handle emerging and novel posts.
  3. Early detection: Since the effect of misinformation during emergency situations is highly detrimental to society (e.g., health-related misadvice during a pandemic may cost human lives), we encourage solutions that can detect such hostile posts as early as possible after they appear on social media.

Special Theme: We particularly encourage researchers to submit papers (opinion, position, resource, tool, etc.) that focus on multimodal low-resource language processing to combat COVID-19-related online hostile content. As the COVID-19 pandemic sweeps across the world, it has been accompanied by a tsunami of misinformation. At a time when reliable information is vital for public health and safety, fake news about COVID-19 has been spreading even faster than the facts. Most of these hostile posts are written in regional languages and can therefore easily evade online surveillance engines. The special theme is thus timely and demands coordinated efforts from interdisciplinary areas to investigate the cause and effect of the online infodemic.

CALL FOR PAPERS


REGULAR PAPER SUBMISSION

  • Topics of Interest: We invite the submission of high-quality manuscripts reporting relevant research on collecting, managing, mining, and understanding hostile data from social media platforms. Topics of interest include, but are not limited to:
    • Fake news detection in regional languages
    • Hate speech detection in regional languages
    • Evolution of fake news and hate speech
    • Analyzing user behavior for hostile post propagation
    • Real-world tool development for combating hostile posts
    • Psychological study of the spreaders of hostile posts
    • Hate speech normalization
    • Information extraction, ontology design and knowledge graph for combating hostile posts
    • Early detection for hostile posts
    • Designing lightweight tools with limited data for hostile post detection
    • Code-mixed and code-switched hostile post analysis
    • Open benchmark and dashboard related to regional hostile posts
    • Specific case studies and surveys related to hostile posts
    • Claim detection and verification related to misinformation
    • Fact-check worthiness of misinformation
    • Cross-region language analysis for hostile posts
    • Computational social science analysis for hostile posts
    • Network analysis for fake news spreading and evolution
    • Multimodal processing of hostile content

  • Submission Instructions:
    • Regular papers: Long papers may consist of up to eight (8) pages of content, plus unlimited pages of references. Paper submissions must use the official ACL style templates, which are available as an Overleaf template and also downloadable directly (LaTeX and Word). Accepted papers will be published in the ACL Workshop Proceedings.
    • All papers must be submitted via our EasyChair submission page. Regular papers will go through a double-blind peer-review process. Only manuscripts in PDF or Microsoft Word format will be accepted.
    • Shared-task papers: Submissions will go through a single-blind peer-review process. All submissions to the shared task must contain author and affiliation details in the paper. Only manuscripts in PDF or Microsoft Word format will be accepted.

  • Important Dates:
    • December 20, 2021: First Call for Workshop Papers
    • Feb. 6, 2022: Second Call for Workshop Papers
    • Feb. 28, 2022: Workshop Paper Due Date
    • March 8, 2022: Extended Workshop Paper Due Date
    • March 26, 2022: Notification of Acceptance
    • April 10, 2022: Camera-ready papers due
    • May 27, 2022: Workshop Date

SHARED TASK

  • Task: Hero, Villain and Victim: Dissecting harmful memes for Semantic role labelling of entities
      Given a meme and an entity, determine the role of the entity in the meme: hero vs. villain vs. victim vs. other. The meme is to be analyzed from the perspective of its author.
    • Role labeling for memes: This task emphasizes detecting which entities are glorified, vilified or victimized within a meme. Taking the meme author's perspective as the frame of reference, the objective is to classify, for a given meme-entity pair, whether the entity is referenced as hero, villain, victim or other within that meme.
    For contest-related details and dataset access, visit: https://constraint-lcs2.github.io/

  • Important Dates:
    • January 6, 2022: Release of the training set
    • March 8, 2022: Release of the test set
    • March 12, 2022: Deadline for submitting final results
    • March 25, 2022: System paper submission deadline
    • April 5, 2022: Notification of acceptance
    • April 10, 2022: Camera-ready papers due

Accepted Papers

  • M-BAD: A Multilabel Dataset for Detecting Aggressive Texts and Their Targets (Omar Sharif, Eftekhar Hossain and Mohammed Moshiul Hoque)
  • How does fake news use a thumbnail? CLIP-based Multimodal Detection on the Unrepresentative News Image (Hyewon Choi, Yejun Yoon, Seunghyun Yoon and Kunwoo Park)
  • Detecting False Claims in Low-Resource Regions: A Case Study of Caribbean Islands (Jason Lucas, Limeng Cui, Thai Le and Dongwon Lee)
  • DD-TIG at Constraint@ACL2022: Multimodal Understanding and Reasoning for Role Labeling of Entities in Hateful Memes (Ziming Zhou, Han Zhao, Jingjing Dong, Jun Gao and Xiaolong Liu)
  • Are you a hero or a villain? A semantic role labelling approach for detecting harmful memes (Shaik Fharook, Syed Sufyan Ahmed, Gurram Rithika, Sumith Sai Budde, Sunil Saumya and Shankar Biradar)
  • Logically at the Constraint 2022: Multimodal role labelling (Ludovic Kun, Jayesh Bankoti and David Kiskovski)
  • Combining Language Models and Linguistic Information to Label Entities in Memes (Pranaydeep Singh, Aaron Maladry and Els Lefever)
  • Detecting the Role of an Entity in Harmful Memes: Techniques and their Limitations (Rabindra Nath Nandi, Firoj Alam and Preslav Nakov)
  • Fine-tuning and Sampling Strategies for Multimodal Role Labeling of Entities under Class Imbalance (Syrielle Montariol, Étienne Simon, Arij Riabi and Djamé Seddah)
  • Findings of the CONSTRAINT 2022 Shared Task on Detecting the Hero, the Villain, and the Victim in Memes (Shivam Sharma, Tharun Suresh, Atharva Kulkarni, Himanshi Mathur, Preslav Nakov, Md. Shad Akhtar and Tanmoy Chakraborty)
  • Document Retrieval and Claim Verification to Mitigate COVID-19 Misinformation (Megha Sundriyal, Ganeshan Malhotra, Md Shad Akhtar, Shubhashis Sengupta, Andrew Fano and Tanmoy Chakraborty)

Schedule

Time | Session | Title
09:00-09:10 | Opening | -
09:10-10:10 | Keynote 1: Isabelle Augenstein | Automatically Detecting Scientific Misinformation
10:10-10:30 | Regular paper 1 | M-BAD: A Multilabel Dataset for Detecting Aggressive Texts and Their Targets
10:30-11:00 | Coffee break | -
11:00-12:00 | Keynote 2: Andreas Vlachos | Fact-Checking Using Structured and Unstructured Information
12:00-12:20 | Regular paper 2 | How does fake news use a thumbnail? CLIP-based Multimodal Detection on the Unrepresentative News Image
12:20-12:40 | Regular paper 3 | Detecting False Claims in Low-Resource Regions: A Case Study of Caribbean Islands
12:40-13:00 | Regular paper 4 | Document Retrieval and Claim Verification to Mitigate COVID-19 Misinformation
13:00-14:00 | Lunch break | -
14:00-15:00 | Keynote 3: Smaranda Muresan | The Role of Text Generation in Fighting Hostile Posts
15:00-15:30 | Coffee break | -
15:30-15:50 | Shared task overview | Findings of the CONSTRAINT 2022 Shared Task on Detecting the Hero, the Villain, and the Victim in Memes
15:50-16:00 | Shared task paper 1 | DD-TIG at Constraint@ACL2022: Multimodal Understanding and Reasoning for Role Labeling of Entities in Hateful Memes
16:00-16:10 | Shared task paper 2 | Are you a hero or a villain? A semantic role labelling approach for detecting harmful memes
16:10-16:20 | Shared task paper 3 | Logically at the Constraint 2022: Multimodal role labelling
16:20-16:30 | Shared task paper 4 | Combining Language Models and Linguistic Information to Label Entities in Memes
16:30-16:40 | Shared task paper 5 | Detecting the Role of an Entity in Harmful Memes: Techniques and their Limitations
16:40-16:50 | Shared task paper 6 | Fine-tuning and Sampling Strategies for Multimodal Role Labeling of Entities under Class Imbalance
16:50-17:15 | Closing | -

CONTACT US