|
from constants import (
    ABOUT_TAB_NAME,
    ASSAY_LIST,
    FAQ_TAB_NAME,
    SUBMIT_TAB_NAME,
    TERMS_URL,
)
|
|
|
ABOUT_INTRO = f""" |
|
## About this challenge |
|
|
|
### Register [here](https://datapoints.ginkgo.bio/ai-competitions/2025-abdev-competition) on the Ginkgo website before submitting |
|
|
|
#### What is antibody developability and why is it important? |
|
|
|
Antibodies have to be manufacturable, stable at high concentrations, and have minimal off-target effects.
Properties such as these are collectively referred to as 'developability', and poor developability often hinders an antibody's progression to the clinic.
Here we invite the community to develop and submit better predictors, which will be evaluated on a heldout private test set to assess model generalization.
|
|
|
#### 🏆 Prizes |
|
|
|
For each of the 5 properties in the competition, there is a prize for the model with the highest performance on that property on the private test set.
There is also an 'open-source' prize for the best model trained on the GDPa1 dataset of monoclonal antibodies (reporting cross-validation results) and assessed on the private test set, for which the authors provide all training code and data.
For each of these 6 prizes, winners can choose between **$10k in data generation credits** with [Ginkgo Datapoints](https://datapoints.ginkgo.bio/) and a **$2,000 cash prize**.
|
|
|
See the "{FAQ_TAB_NAME}" tab above (you are currently on the "{ABOUT_TAB_NAME}" tab) or the [competition terms]({TERMS_URL}) for more details. |
|
""" |
|
|
|
ABOUT_TEXT = f""" |
|
|
|
#### How to participate? |
|
|
|
1. **Create a Hugging Face account** [here](https://huggingface.co/join) if you don't have one yet (this is used to track unique submissions and to access the GDPa1 dataset). |
|
2. **Register your team** on the [Competition Registration](https://datapoints.ginkgo.bio/ai-competitions/2025-abdev-competition) page. |
|
3. **Build a model** and validate it on the [GDPa1](https://huggingface.co/datasets/ginkgo-datapoints/GDPa1) dataset (see the loading sketch below this list).
|
4. **Complete the "Qualifying Exam"**. Before you can submit to the final test set, you must first get a score on the public leaderboard. Choose one of the two tracks: |
|
- Track 1 (Benchmark an existing model): Submit predictions for the `GDPa1` dataset. |
|
- Track 2 (Train from scratch): Train a model using cross-validation on the `GDPa1` dataset and submit cross-validation predictions by selecting `GDPa1_cross_validation`. |
|
5. **Submit to the "Final Exam"**. Once you have submitted predictions on the validation set, download the private test set sequences from the {SUBMIT_TAB_NAME} tab and submit your final predictions. Your performance on this private set will determine the winners. |
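
As a starting point, the GDPa1 dataset can be loaded directly from Hugging Face. This is a minimal sketch; the split and column names are whatever the dataset card specifies:

```python
from datasets import load_dataset

# Requires a logged-in Hugging Face account with access to the dataset
gdpa1 = load_dataset("ginkgo-datapoints/GDPa1")
print(gdpa1)  # inspect the available splits and columns
```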
|
|
|
Submissions close on **1 November 2025**. |
|
|
|
#### Acknowledgements |
|
|
|
We gratefully acknowledge [Tamarind Bio](https://www.tamarind.bio/)'s help in running the following models: |
|
- TAP (Therapeutic Antibody Profiler) |
|
- SaProt |
|
- DeepViscosity |
|
- Aggrescan3D |
|
- AntiFold |
|
|
|
We're working on adding more public models so that participants have more precomputed features to use for modeling.
|
|
|
#### How to contribute? |
|
|
|
We'd like to add more existing developability models to the leaderboard. Some examples:
|
- ESM embeddings + ridge regression |
|
- Absolute folding stability models (for Thermostability) |
|
- PROPERMAB |
|
- AbMelt (requires GROMACS for MD simulations) |
|
|
|
If you would like to collaborate with others, start a discussion on the "Community" tab at the top of this page. |
|
""" |
|
|
|
|
|
FAQS = { |
|
"Is there a fee to enter?": "No. Participation is free of charge.", |
|
"Who can participate?": "Anyone. We encourage academic labs, individuals, and especially industry teams who use developability models in production.", |
|
"Where can I find more information about the methods used to generate the data?": ( |
|
"Our [PROPHET-Ab preprint](https://www.biorxiv.org/content/10.1101/2025.05.01.651684v1) described in detail the methods used to generate the training dataset. " |
|
"Note: These assays may differ from previously published methods, and these correlations between literature data and experimental data are also described in the preprint. " |
|
"These same methods are used to generate the heldout test data." |
|
), |
|
"What do the datasets contain?": ( |
|
"Both the GDPa1 and heldout test set contain the VH and VL sequences, as well as the full heavy chain sequence. The GDPa1 dataset is a mix of IgG1, IgG2, and IgG4 antibodies while the heldout test set only contains IgG1 antibodies. We also include the light chain subtype (lambda or kappa)." |
|
), |
|
"How were the heldout sequences designed?": ( |
|
"We sampled 80 paired antibody sequences from [OAS](https://opig.stats.ox.ac.uk/webapps/oas/). We tried to represent the range of germline variants, sequence identities to germline, and CDR3 lengths. " |
|
"The sequences in the dataset are quite diverse as measured by pairwise sequence identity." |
|
), |
|
"Do I need to design new proteins?": ( |
|
"No. This is just a predictive competition, which will be judged according to the correlation between predictions and experimental values. There may be a generative round in the future." |
|
), |
|
"Can I participate anonymously?": ( |
|
"Yes! Please still create an anonymous Hugging Face account so that we can uniquely associate submissions and add an email on the [registration page](https://datapoints.ginkgo.bio/ai-competitions/2025-abdev-competition) so that we can contact participants throughout the competition." |
|
"Note that top participants will need to identify themselves at the end of the tournament to receive prizes / recognition. " |
|
"If there are any concerns about anonymity, please contact us at [email protected] - you can even send us a CSV of submissions from a burner email if necessary! 🥷" |
|
), |
|
"How is intellectual property handled?": ( |
|
f"Participants retain IP rights to the methods they use and develop during the tournament. Read more details in our terms [here]({TERMS_URL})." |
|
), |
|
"Do I need to submit my code / methods in order to participate?": ( |
|
"No, there are no requirements to submit code / methods and submitted predictions remain private. " |
|
"We also have an optional field for including a short model description. " |
|
"Top performing participants will be requested to identify themselves at the end of the tournament. " |
|
"There will be one prize for the best open-source model, which will require code / methods to be available." |
|
), |
|
"How exactly can I evaluate my model?": ( |
|
"You can easily calculate the Spearman correlation coefficient on the GDPa1 dataset yourself before uploading to the leaderboard. " |
|
"Simply use the `spearmanr(predictions, targets, nan_policy='omit')` function from `scipy.stats`. " |
|
"For the heldout private set, we will calculate these Spearman correlations privately at the end of the competition (and possibly at other points throughout the competition) - but there will not be 'rolling results' on the private test set to prevent test set leakage." |
|
), |
|
"How often does the leaderboard update?": ( |
|
"The leaderboard should reflect new submissions within a minute of submitting. Note that the leaderboard will not show the results on the private test set, these will be calculated once at the end of the tournament (and possibly at another occasion before that)." |
|
), |
|
"How many submissions can I make?": ( |
|
"You can currently make unlimited submissions, but we may choose to limit the number of possible submissions per user. For the private test set evaluation the latest submission will be used." |
|
), |
|
"How are winners determined?": ( |
|
'There will be 6 prizes (one for each of the assay properties plus an "open-source" prize). ' |
|
"For the property-specific prizes, winners will be determined by the submission with the highest Spearman rank correlation coefficient on the private holdout set. " |
|
'For the "open-source" prize, this will be determined by the highest average Spearman across all properties. ' |
|
"We reserve the right to award the open-source prize to a predictor with competitive results for a subset of properties (e.g. a top polyreactivity model)." |
|
), |
|
"How does the open-source prize work?": ( |
|
"Participants who open-source their code and methods will be eligible for the open-source prize (as well as the other prizes)." |
|
), |
|
"What do I need to submit?": ( |
|
'There is a tab on the Hugging Face competition page to upload predictions for each dataset. Participants need to submit a CSV containing a column for each property they would like to predict (e.g. called "HIC") '
"and one row per sequence, matching the sequences in the input file. These predictions are evaluated in the backend using the Spearman rank correlation between predictions and experimental values, and the resulting metrics are added to the leaderboard. "
|
"Predictions remain private and are not seen by other contestants." |
|
), |
|
"Can I submit predictions for only one property?": ( |
|
"Yes. You do not need to predict all 5 properties to participate. Each property has its own leaderboard and prize, so you may submit models for a subset of the assays if you wish." |
|
), |
|
"Can I switch between Track 1 and Track 2 during the competition?": ( |
|
"Yes. You may submit to both tracks. For example, you can benchmark an existing model on the GDPa1 dataset (Track 1) and later also train and submit a cross-validation model on GDPa1 (Track 2)." |
|
), |
|
"Are participants required to use the provided cross-validation splits?": ( |
|
"Yes, if submitting cross-validation results, to ensure fair comparison. The results will be calculated by taking the average Spearman correlation coefficient across all folds." |
|
), |
|
"Are there any country restrictions for prize eligibility?": ( |
|
"Yes. Due to applicable laws, prizes cannot be awarded to participants from countries under U.S. sanctions. See the competition terms for details." |
|
), |
|
"How are private test set submissions handled?": ( |
|
"We will use the private test set submission at the close of the competition to determine the winners. " |
|
"If there are any intermediate releases of private test set results, these will not affect the final ranking." |
|
), |
|
} |
|
|
|
SUBMIT_INTRUCTIONS = f""" |
|
# Antibody Developability Submission |
|
Upload a CSV to get a score! |
|
List of valid property names: `{', '.join(ASSAY_LIST)}`. |
|
|
|
You do **not** need to predict all 5 properties — each property has its own leaderboard and prize. |
|
|
|
## Instructions |
|
1. **Submit your predictions** as a CSV with `antibody_name` + one column per property you are predicting (e.g. `"antibody_name,Titer,PR_CHO"` if your model predicts Titer and Polyreactivity; see the sketch below this list).
|
2. **Final test submission**: Download test sequences from the example files below and upload predictions. |
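
For example, a minimal submission CSV could be assembled with pandas. The values below are placeholders for your model's outputs:

```python
import pandas as pd

# Placeholder values for illustration; use your model's real outputs.
# antibody_name values must match the identifiers in the provided input file.
names = ["Ab_001", "Ab_002"]
titer_predictions = [1.2, 0.8]
pr_cho_predictions = [0.1, 0.4]

submission = pd.DataFrame(dict(
    antibody_name=names,
    Titer=titer_predictions,
    PR_CHO=pr_cho_predictions,
))
submission.to_csv("submission.csv", index=False)
```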
|
|
|
The validation set results should appear on the leaderboard within a minute. The **private test set results will not appear on the leaderboards**, and will be used to determine the winners at the close of the competition. |
|
We may release private test set results at intermediate points during the competition. |
|
|
|
## Cross-validation |
|
|
|
For the cross-validation metrics (if training only on the GDPa1 dataset), use the `"hierarchical_cluster_IgG_isotype_stratified_fold"` column to split the dataset into folds and make predictions for each fold.
|
Submit a CSV file in the same format but also containing the `"hierarchical_cluster_IgG_isotype_stratified_fold"` column. |
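
A minimal sketch of the fold loop (model fitting and file names are placeholders):

```python
import pandas as pd

fold_col = "hierarchical_cluster_IgG_isotype_stratified_fold"
df = pd.read_csv("gdpa1.csv")  # illustrative local copy of GDPa1

for fold in sorted(df[fold_col].unique()):
    train = df[df[fold_col] != fold]  # fit your model on these rows
    test = df[df[fold_col] == fold]   # predict on the held-out fold
    # record predictions for `test`, keeping its fold label for the submission CSV
```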
|
|
|
Submissions close on **1 November 2025**. |
|
""" |
|
|