Frequently Asked Questions

Participation Questions

Do I have to participate in all three subtasks?

No. You may choose to participate in one or more subtasks.

Do I have to work on all languages for a given subtask?

No. You can choose one or more languages.

Am I allowed to participate in more than one language simultaneously?

Yes. You may work on multiple languages at the same time.

When will all datasets for the 22 languages be released?

We will announce the release of each dataset as it becomes available. Currently, datasets for 9 languages have been released, and the remaining ones will follow soon.

How will you verify my submitted model?

To be included in the final team rankings, participants must submit a system description paper detailing their approach and methodology. This ensures transparency and scientific integrity.

Will I be included in the final ranking if I do not submit a system description paper?

No. A system description paper is mandatory for inclusion in the final ranking.

I have never written a system description paper. How can I write one?

We will provide an online tutorial and resources to guide you through the process.

Can we work as a team, or must everyone participate individually?

You may work either as part of a team or independently, but not both. Each participant can be part of only one team. If you believe there are special circumstances that require you to join more than one team, please email us before the evaluation period begins.

I want to add someone to my team but can’t find how to do that on Codabench. How can I add members?

To add members to your team on Codabench:

  • Click your account name in the upper-right corner and select your organization.
  • On the organization page, click “Edit Organization” in the upper-left corner.
  • Scroll to the bottom and click the green “Invite Members” button to add teammates.

Data and Methodology

When will you release the gold labels?

  • Dev set: released at the start of the evaluation phase.
  • Test set(s): released after the competition ends.

Can I use large language models (LLMs) in the subtasks?

Yes.

Are there restrictions on the types or sizes of LLMs or on how we use them for data augmentation?

No. You may use any type or size of model — open-source or closed-source. You may also generate or use synthetic data for augmentation.

Can I use additional datasets (e.g., publicly available ones)?

Yes, but you must properly cite all external datasets in your system description paper.

How was the data collected?

  • Data sources include news websites, Reddit, blogs, Bluesky, and regional forums, covering topics such as elections, conflicts, gender rights, and migration.
  • Each language dataset contains approximately 3,000–5,000 annotated instances. Annotation platforms include Label Studio, Prolific, Potato, and Mechanical Turk.

How was the data annotated? Were LLMs used?

No LLMs were used. Each instance was annotated by at least three native speakers.

I cannot find the link to download the files under the “Files” tab on Codabench. How can I access the datasets?

You can access all dataset download links through this document: Google Drive Dataset Access Guide.

What is the “starter pack” uploaded in the Files tab? Should we use it?

The starter pack is an example to help you get started with the data. You may use it as a reference or build your own approach from scratch.

I’ve downloaded the starter and public_data files. The public_data file includes multiple subtasks — what’s the difference, and do I need all of them?

The public_data file contains all subtasks, so you only need to download it once even if you plan to participate in multiple subtasks.

The starter pack includes a sample notebook that trains a small BERT model for demonstration. You are free to use your own code and models instead.


Publication and Conference

Do I need to pay conference registration fees or attend SemEval for my paper to be published?

No. Publication does not require attendance. If you do not attend the workshop, you do not need to pay any fees. If you wish to attend, standard registration fees apply.

Our system did not perform well. Should we still write a system description paper?

Yes. We strongly encourage all teams to submit a paper. Negative results are equally valuable and contribute to advancing research on polarization.


Technical Issues on Codabench

It’s been several hours since I submitted my file, but the status still shows “running.”

Codabench may occasionally experience delays in processing submissions. Please refresh the page after a few minutes. If the issue persists, try resubmitting your file.

My submission failed, but I’m not sure why. How can I check the error?

Click on your submission in the leaderboard or the “My Submissions” tab. You can view the error log under the “Output” section to see detailed feedback about the failure.

The platform shows “Internal Error.” What should I do?

This usually happens if your submission format doesn’t match the expected structure. Verify that your file follows the required format described on the evaluation page, then re-submit.
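To catch format problems before uploading, you can run a quick local check. Below is a minimal sketch in Python; the column names (`id`, `label`) and the label set are placeholders, not the official format, so adjust them to match the structure described on the evaluation page.

```python
import csv

def validate_submission(path, required_columns=("id", "label"), allowed_labels=None):
    """Check that a CSV submission has the expected columns and non-empty values.

    The column names and label set here are hypothetical; replace them with
    the actual format described on the evaluation page.
    Returns a list of error messages (empty if the file looks valid).
    """
    errors = []
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f)
        missing = [c for c in required_columns if c not in (reader.fieldnames or [])]
        if missing:
            return [f"missing columns: {missing}"]
        for i, row in enumerate(reader, start=2):  # row 1 is the header
            for col in required_columns:
                if not (row.get(col) or "").strip():
                    errors.append(f"row {i}: empty value in '{col}'")
            if allowed_labels is not None and row.get("label") not in allowed_labels:
                errors.append(f"row {i}: unexpected label {row.get('label')!r}")
    return errors
```

Running this on your file before submitting makes "Internal Error" far less likely, and the error messages point you to the exact row that needs fixing.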

I uploaded the wrong file. Can I delete or replace my submission?

You cannot delete a submission, but you can simply re-submit a new file. Only the latest valid submission will be considered for the leaderboard during evaluation.

I cannot see the “Submit” button on Codabench. How can I upload my file?

Ensure you are logged in and registered for the competition. The “Submit” button appears under the corresponding subtask page once registration is complete.

My team members can’t see our submissions. Why?

Submissions are only visible within your team. Make sure all members are added under the same organization on Codabench using the “Invite Members” option.

© 2026 POLAR Shared Task. All rights reserved. Adapted from Editorial Template by HTML5 UP