In today’s digitally connected world, online polarization is rapidly rising—splitting communities, amplifying misinformation, and worsening conflicts. Social media and online discourse are filled with increasingly hostile language targeting political views, religions, ethnic groups, and even entire nations.
While NLP has made impressive strides in sentiment analysis and hate speech detection, polarization remains understudied, especially across languages, cultures, and real-world events.
This task sets out to develop and evaluate NLP systems that detect and characterize polarization in online discourse across languages, cultures, and real-world events.
By building robust, explainable models of online polarization, we hope to support efforts to understand and counter it.
We believe NLP should not only understand what people say, but also why they say it and how it impacts society. This task contributes to that goal by creating shared multilingual resources and benchmarks for studying polarization.
Every team that participates helps push the boundaries of multilingual, socially aware AI. Together, we can build systems that do more than classify: they can help explain, caution, and connect.
Join us in making NLP not just more powerful, but also more responsible.