The increasing accessibility of the Internet has dramatically changed the way we consume information. The ease of social media usage not only encourages individuals to freely express their opinions (freedom of speech) but also provides content polluters with ecosystems to spread hostile posts (hate speech, fake news, cyberbullying, propaganda, etc.). Such hostile activities are expected to increase manifold during emergencies such as presidential elections and the COVID-19 pandemic. Most of these hostile posts are written in regional languages, and therefore can easily evade online surveillance engines, which are mostly trained on posts written in resource-rich languages such as English and Chinese. As a result, regions such as Asia, Africa, and South America, where low-resource regional languages are used for day-to-day communication, suffer from a lack of tools, benchmark datasets, and learning techniques. Other countries such as Italy and Spain, whose languages (pseudo-low-resource) are not as well equipped with sophisticated computational resources as English, may face the same issues.
Following the success of the first edition of CONSTRAINT (co-located with AAAI-21), the second edition will encourage researchers from interdisciplinary domains working on multilingual social media analytics to think beyond the conventional ways of combating online hostile posts. The workshop will broadly focus on three major points:
- Regional language: The offensive posts under inspection may be written in low-resource regional languages (e.g., Tamil, Urdu, Bengali, Polish, Czech, Lithuanian, etc.).
- Emergency situation: The proposed solutions should be able to tackle misinformation during emergency situations where, due to the scarcity of historical data, learning models must incorporate additional intelligence to handle emerging and novel posts.
- Early detection: Since the effect of misinformation during emergency situations is highly detrimental to society (e.g., health-related misadvice during a pandemic may cost human lives), we encourage solutions that can detect such hostile posts as early as possible after they appear on social media.
We particularly encourage researchers to submit papers (opinion, position, resource, tool, etc.) that focus on multimodal low-resource language processing to combat COVID-19-related online hostile content. As the COVID-19 pandemic sweeps across the world, it has been accompanied by a tsunami of misinformation. At a time when reliable information is vital for public health and safety, fake news about COVID-19 has been spreading even faster than the facts. Most of these hostile posts are written in regional languages, and therefore can easily evade online surveillance engines. The special theme is thus timely and demands coordinated efforts from interdisciplinary areas to investigate the causes and effects of the online infodemic.