CONSTRAINT 2022

Second Workshop on Combating Online Hostile Posts in Regional Languages during Emergency Situation

Collocated with ACL 2022

-- Special Theme --
Multimodal Low-Resource Language Processing to Combat COVID-19 Related Online Hostile Content


NEWS

  • Stay tuned for news. Follow us on Twitter to stay in touch.

ABOUT THE WORKSHOP

The increasing accessibility of the Internet has dramatically changed the way we consume information. The ease of social media usage not only encourages individuals to freely express their opinions (freedom of speech) but also provides content polluters with ecosystems in which to spread hostile posts (hate speech, fake news, cyberbullying, propaganda, etc.). Such hostile activities are expected to increase manifold during emergencies such as presidential elections and the COVID-19 pandemic. Most such hostile posts are written in regional languages and can therefore easily evade online surveillance engines, which are mostly trained on posts written in resource-rich languages such as English and Chinese. As a result, regions such as Asia, Africa, and South America, where low-resource regional languages are used for day-to-day communication, suffer from a lack of tools, benchmark datasets, and learning techniques. Other countries, such as Italy and Spain, whose languages (pseudo-low-resource) are not as well equipped with sophisticated computational resources as English, may face the same issues.
Following the success of the first edition of CONSTRAINT (collocated with AAAI-21), the second edition will encourage researchers from interdisciplinary domains working on multilingual social media analytics to think beyond the conventional way of combating online hostile posts. The workshop will broadly focus on three major points:

  1. Regional language: The offensive posts under inspection may be written in low-resource regional languages (e.g., Tamil, Urdu, Bengali, Polish, Czech, Lithuanian, etc.).
  2. Emergency situation: The proposed solutions should be able to tackle misinformation during emergency situations where, owing to the lack of historical data, learning models must incorporate additional intelligence to handle emerging and novel posts.
  3. Early detection: Since the effect of misinformation during emergency situations is highly detrimental to society (e.g., health-related misadvice during a pandemic may cost human lives), we encourage solutions that detect such hostile posts as early as possible after they appear on social media.

Special Theme: We particularly encourage researchers to submit papers (opinion, position, resource, tool, etc.) focusing on multimodal low-resource language processing to combat COVID-19-related online hostile content. As the COVID-19 pandemic has swept across the world, it has been accompanied by a tsunami of misinformation. At a time when reliable information is vital for public health and safety, fake news about COVID-19 has been spreading even faster than the facts. Most of these hostile posts are written in regional languages and can therefore easily evade online surveillance engines. The special theme is thus timely and demands coordinated efforts across interdisciplinary areas to investigate the causes and effects of the online infodemic.

CALL FOR PAPERS


REGULAR PAPER SUBMISSION

  • Topics of Interest: We invite the submission of high-quality manuscripts reporting relevant research in the area of collecting, managing, mining, and understanding hostile data from social media platforms. Topics of interest include, but are not limited to:
    • Fake news detection in regional languages
    • Hate speech detection in regional languages
    • Evolution of fake news and hate speech
    • Analyzing user behavior for hostile post propagation
    • Real-world tool development for combating hostile posts
    • Psychological study of the spreaders of hostile posts
    • Hate speech normalization
    • Information extraction, ontology design and knowledge graph for combating hostile posts
    • Early detection for hostile posts
    • Design of lightweight tools that require less data for hostile post detection
    • Code-mixed and code-switched hostile post analysis
    • Open benchmark and dashboard related to regional hostile posts
    • Specific case studies and surveys related to hostile posts
    • Claim detection and verification related to misinformation
    • Fact-check worthiness of misinformation
    • Cross-region language analysis for hostile posts
    • Computational social science analysis for hostile posts
    • Network analysis for fake news spreading and evolution
    • Multimodal processing of hostile content

  • Submission Instructions:
    • Regular papers: Long papers may consist of up to eight (8) pages of content, plus unlimited pages of references. Paper submissions must use the official ACL style templates, which are available as an Overleaf template and also downloadable directly (LaTeX and Word). Accepted papers will be published in the ACL Workshop Proceedings.
    • All papers must be submitted via our EasyChair submission page. Regular papers will go through a double-blind peer-review process. Only manuscripts in PDF or Microsoft Word format will be accepted.

  • Important Dates:
    • December 20, 2021: First Call for Workshop Papers
    • Feb. 6, 2022: Second Call for Workshop Papers
    • Feb. 28, 2022: Workshop Paper Due Date
    • March 26, 2022: Notification of Acceptance
    • April 10, 2022: Camera-ready papers due
    • May 26-28, 2022: Workshop Dates

SHARED TASK

  • Task: Hero, Villain and Victim: Dissecting harmful memes for Semantic role labelling of entities
      Given a meme and an entity, determine the role of the entity in the meme: hero vs. villain vs. victim vs. other. The meme is to be analyzed from the perspective of the author of the meme.
    • Role labeling for memes: This task focuses on detecting which entities are glorified, vilified, or victimized within a meme. Taking the meme author's perspective as the frame of reference, the objective is to classify, for a given meme-entity pair, whether the entity is referenced as a hero, villain, victim, or other within that meme.
    For contest-related details and dataset access, visit: https://constraint-lcs2.github.io/
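    To make the task interface concrete, here is a minimal sketch of a system for the meme-entity role classification above. This is a hypothetical illustration, not the official baseline: the function name, the keyword cues, and the text-only input are all assumptions; competitive systems would combine the meme image and its overlaid text in a multimodal model.

    ```python
    # Hypothetical sketch of the shared-task interface: given a meme's
    # OCR'd text and a candidate entity, predict one of the four roles.
    # The keyword heuristic below is purely illustrative.

    ROLES = ("hero", "villain", "victim", "other")

    def label_role(meme_text: str, entity: str) -> str:
        """Assign a role to `entity` from the meme author's perspective."""
        text = meme_text.lower()
        if entity.lower() not in text:
            # An entity not mentioned in the text defaults to "other".
            return "other"
        # Naive lexical cues for each role (illustrative only).
        if any(w in text for w in ("saved", "protects", "defends")):
            return "hero"
        if any(w in text for w in ("blames", "destroyed", "lied")):
            return "villain"
        if any(w in text for w in ("suffers", "attacked", "victimized")):
            return "victim"
        return "other"
    ```

    A submission would run such a predictor over every (meme, entity) pair in the test set and report the predicted label for each pair.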

  • Important Dates:
    • January 6, 2022: Release of the training set
    • March 8, 2022: Release of the test set
    • March 12, 2022: Deadline for submitting final results
    • March 25, 2022: System paper submission deadline
    • April 5, 2022: Notification of acceptance
    • April 10, 2022: Camera-ready papers due

CONTACT US