AIDO.Protein-RAG-16B

AIDO.Protein-RAG-16B is a multimodal protein language model that integrates Multiple Sequence Alignment (MSA) and structural data, building upon the AIDO.Protein-16B foundation. The training process comprises three main stages:

  1. 2D RoPE encoding fine-tuning
  2. Initial training on 100 billion tokens from UniRef50/UniClust30 MSA data
  3. Subsequent training on 80 billion tokens from AlphaFold Database MSA and structural data

Model Architecture Details

AIDO.Protein-RAG-16B uses an encoder-only transformer architecture in which sparse Mixture-of-Experts (MoE) layers replace the dense MLP layers in each transformer block. The model operates on single-amino-acid tokens, is optimized with a masked language modeling (MLM) objective, and activates 2 experts per token via top-2 routing.
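
To make the routing concrete, below is a minimal, self-contained sketch of a top-2 MoE feed-forward layer. It uses the hidden sizes from the architecture table below and assumes 8 experts per MoE layer; it is illustrative only and not the model's actual implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Top2MoELayer(nn.Module):
    """Toy top-2 MoE feed-forward layer (dimensions from the table below; 8 experts assumed)."""
    def __init__(self, hidden=2304, ffn_hidden=7680, num_experts=8, top_k=2):
        super().__init__()
        self.router = nn.Linear(hidden, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(hidden, ffn_hidden), nn.GELU(), nn.Linear(ffn_hidden, hidden))
            for _ in range(num_experts)
        )
        self.top_k = top_k

    def forward(self, x):                                  # x: (num_tokens, hidden)
        scores = self.router(x)                            # (num_tokens, num_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)     # each token picks its top-2 experts
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                sel = idx[:, k] == e                       # tokens whose k-th choice is expert e
                if sel.any():
                    out[sel] += weights[sel, k].unsqueeze(-1) * expert(x[sel])
        return out

Because only 2 experts run per token, the number of active parameters in a forward pass is much smaller than the total parameter count.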

[Figure: An overview of AIDO.Protein]

More architecture details are shown below:

Model Arch Component      Value
Num Attention Head        36
Num Hidden Layer          36
Hidden Size               2304
FFN Hidden Size           7680
Num MoE Layer per Block   8
Num MoE Layer per Token   2
Vocab Size                44
Context Length            2048

Pre-training of AIDO.Protein-RAG-16B

Here we briefly describe the pre-training of AIDO.Protein-RAG-16B, which is divided into three stages: (1) 1D -> 2D RoPE encoding fine-tuning; (2) UniRef50/UniClust30 MSA fine-tuning; (3) AlphaFold Database MSA & structure token fine-tuning.

Data

UniRef50/UniClust30 MSA dataset: We used sequences from UniRef50 as queries to search for homologous sequences in UniClust30 and then constructed multiple sequence alignments (MSAs). UniRef50 comprises 53.6 million sequences in total. Using HHblits, we searched all of them and found more than 25 homologous sequences for 23.7 million queries; this subset was used directly as a training set and is referred to as HHblits_MSA. The remaining 29.9 million sequences were passed to MSA Retriever, yielding 7.7 million sequences with more than 25 homologous sequences; this subset is referred to as Retriever_MSA. During training, RAGPLM randomly sampled from the two datasets with probabilities 0.75 and 0.25, respectively. Refer to the AIDO.Protein-RAG-3B paper for more information.
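
The 0.75/0.25 mixing of HHblits_MSA and Retriever_MSA can be pictured with the small sketch below; the dataset variables are placeholders, not the actual training pipeline.

import random

# Hypothetical placeholders for the two MSA training sets described above.
hhblits_msa = ["hhblits_example_1", "hhblits_example_2"]
retriever_msa = ["retriever_example_1"]

def sample_training_example(p_hhblits=0.75):
    # Draw from HHblits_MSA with probability 0.75, otherwise from Retriever_MSA.
    source = hhblits_msa if random.random() < p_hhblits else retriever_msa
    return random.choice(source)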

AlphaFold Database MSA & Structure dataset: We downloaded all structural data from the AlphaFold Database and kept only structures in which more than 40% of residues have a pLDDT score > 70. The remaining sequences were clustered with MMseqs2 (sequence identity = 0.5) and one representative per cluster was retained, resulting in 46.9 million sequence/structure pairs. For each structure, we used genbio-ai/AIDO.StructureTokenizer to obtain structure tokens and embeddings, and MSA Retriever to obtain the corresponding MSA.
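
The pLDDT filter described above amounts to a simple per-structure check; the sketch below assumes a per-residue pLDDT array and is illustrative only.

import numpy as np

def keep_structure(plddt, min_fraction=0.40, plddt_cutoff=70.0):
    # plddt: per-residue pLDDT scores for one AFDB structure.
    # Keep the structure only if more than 40% of residues exceed pLDDT 70.
    plddt = np.asarray(plddt, dtype=float)
    return float((plddt > plddt_cutoff).mean()) > min_fraction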

Training Details

Model training is divided into three stages:

(1) 1D -> 2D RoPE Encoding Fine-tuning

This stage uses the same training data as AIDO.Protein-16B, but tokens are encoded with 2D rotary position embeddings (RoPE) instead of 1D.

(2) UniRef50/UniClust30 MSA Fine-tuning

The model from Stage 1 is further fine-tuned on the UniRef50/UniClust30 MSA dataset. See the AIDO.Protein-RAG-3B paper for details.

(3) AlphaFold Database MSA & Structure Fine-tuning

We fine-tune the model on concatenated query and homologous sequences. Structure embeddings (dim = 384) are linearly projected to the hidden size (2304) and added to the query token embeddings.
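
A minimal sketch of this conditioning step, assuming a learned linear projection from 384 to 2304 whose output is added to the query token embeddings (module and variable names are illustrative):

import torch
import torch.nn as nn

struct_proj = nn.Linear(384, 2304)   # structure embedding dim -> model hidden size

def add_structure_conditioning(token_emb, str_emb):
    # token_emb: (batch, L_query, 2304) query token embeddings
    # str_emb:   (batch, L_query, 384) per-residue structure embeddings
    return token_emb + struct_proj(str_emb)

tok = torch.randn(1, 50, 2304)
stru = torch.randn(1, 50, 384)
print(add_structure_conditioning(tok, stru).shape)  # torch.Size([1, 50, 2304])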

Sequence Masking
  • Randomly sample 0.05 × L span positions from a query of length L. Span lengths follow a geometric distribution (p = 0.2), capped at length 10. On average, ~15% of query tokens are masked (see the sketch after this list).

  • When a residue is selected, its aligned residues across all sequences (MSA column) are also masked.

  • For masked MSA columns: 80% are replaced with <MASK>, 10% with random amino acids, and 10% left unchanged.
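
A minimal sketch of the span-masking scheme above (illustrative only; the 80/10/10 corruption is applied per masked residue here as a simplification):

import numpy as np

rng = np.random.default_rng(0)
RESTYPES = 'ARNDCQEGHILKMFPSTWYV'

def sample_masked_positions(L, start_rate=0.05, p=0.2, max_span=10):
    # Sample 0.05 * L span start positions; span lengths ~ Geometric(0.2), capped at 10.
    starts = rng.choice(L, size=max(1, round(start_rate * L)), replace=False)
    masked = set()
    for s in starts:
        span = min(int(rng.geometric(p)), max_span)
        masked.update(range(s, min(s + span, L)))
    return masked

def corrupt_row(row, masked):
    # Applied to the query and to every MSA row, so whole columns are masked together.
    out = []
    for i, aa in enumerate(row):
        if i not in masked:
            out.append(aa)
        else:
            r = rng.random()
            # 80% <MASK>, 10% random amino acid, 10% unchanged
            out.append('<MASK>' if r < 0.8 else rng.choice(list(RESTYPES)) if r < 0.9 else aa)
    return out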

Structure Masking
  • In 20% of cases, structure embeddings are replaced with 0.

  • In 80% of cases, the fraction of residues to mask is drawn from the BetaLinear30 distribution (BetaLinear30 = 20% Uniform(0,1) + 80% Beta(3,9)) and the corresponding structure embeddings are zeroed, as sketched below.
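
A sketch of the structure masking described above, with BetaLinear30 implemented as the stated mixture of Uniform(0,1) and Beta(3,9); names are illustrative:

import numpy as np

rng = np.random.default_rng(0)

def mask_structure_embeddings(str_emb):
    # str_emb: (L, 384) per-residue structure embeddings for the query.
    out = str_emb.copy()
    if rng.random() < 0.2:
        out[:] = 0.0                                      # 20% of cases: drop all structure information
    else:
        # 80% of cases: zero a BetaLinear30-distributed fraction of residues
        frac = rng.uniform() if rng.random() < 0.2 else rng.beta(3, 9)
        idx = rng.choice(len(out), size=int(frac * len(out)), replace=False)
        out[idx] = 0.0
    return out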

Positional Embedding

We use 2D rotary position embeddings to help the model distinguish the chain identity and residue index of each token. See the AIDO.Protein-RAG-3B paper for more information.
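
For intuition, the sketch below shows one common way to build a 2D RoPE: split the feature dimension in half and apply standard 1D RoPE to each half with a different coordinate (here, residue index and chain identity). This construction is an assumption for illustration; the exact formulation used by the model is described in the AIDO.Protein-RAG-3B paper.

import torch

def rope_1d(x, pos, base=10000.0):
    # x: (..., L, d) with d even; pos: (L,) integer positions
    d = x.shape[-1]
    inv_freq = 1.0 / (base ** (torch.arange(0, d, 2, dtype=torch.float32) / d))
    angles = pos[:, None].float() * inv_freq[None, :]          # (L, d/2)
    sin, cos = angles.sin(), angles.cos()
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

def rope_2d(x, residue_idx, chain_idx):
    # Residue index rotates the first half of the features, chain identity the second half.
    half = x.shape[-1] // 2
    return torch.cat([rope_1d(x[..., :half], residue_idx),
                      rope_1d(x[..., half:], chain_idx)], dim=-1)

q = torch.randn(2, 36, 100, 64)                 # (batch, heads, tokens, head_dim), illustrative
q_rot = rope_2d(q, torch.arange(100), torch.zeros(100, dtype=torch.long))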

Loss Function

The total loss is a weighted sum of a sequence loss (weight 1.0) and a structure loss (weight 0.01); see the sketch after the list below.

  • Sequence loss: CrossEntropy loss for masked token prediction.

  • Structure loss: CrossEntropy loss for masked structure token prediction.
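
A minimal sketch of this weighted combination (tensor names are illustrative; unmasked positions are assumed to carry an ignore_index label):

import torch.nn.functional as F

def pretraining_loss(seq_logits, seq_labels, struct_logits, struct_labels,
                     seq_weight=1.0, struct_weight=0.01, ignore_index=-100):
    # Cross-entropy over masked sequence tokens plus a down-weighted cross-entropy
    # over masked structure tokens.
    seq_loss = F.cross_entropy(seq_logits, seq_labels, ignore_index=ignore_index)
    struct_loss = F.cross_entropy(struct_logits, struct_labels, ignore_index=ignore_index)
    return seq_weight * seq_loss + struct_weight * struct_loss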

Hyper-parameter               (1) 1D -> 2D fine-tuning   (2) UniRef50/UniClust30 MSA fine-tuning   (3) AFDB MSA & structure token fine-tuning
Initialized parameters        AIDO.Protein-16B           Stage (1)                                 Stage (2)
Data                          ColabFoldDB, UniRef        HHblits_MSA, Retriever_MSA                AFDB MSA & structure tokens
Global batch size             512                        256                                       256
Sequence length               2048                       12800                                     12800
Per-device micro batch size   1                          1                                         1
Precision                     Mixed FP32-FP16            Mixed FP32-FP16                           Mixed FP32-FP16
Learning rate                 [5e-6, 5e-5]               [1e-6, 1e-5]                              1e-5
Num tokens                    10 billion                 100 billion                               80 billion

Tokenization

We encode protein sequences at single-amino-acid resolution using a 44-token vocabulary, in which 24 tokens represent amino acid types and 20 are special tokens. Each sequence is suffixed with a [SEP] token, which serves as a hook for downstream tasks.
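
For illustration only, a toy amino-acid-level tokenizer with a [SEP] suffix is sketched below; the token ids and the special tokens shown are placeholders, not the model's actual 44-token vocabulary.

# Placeholder vocabulary: 20 standard residues (a subset of the 24 residue-type tokens)
# plus a few example special tokens.
AA_TOKENS = list("ACDEFGHIKLMNPQRSTVWY")
SPECIAL_TOKENS = ["[PAD]", "[MASK]", "[SEP]"]
VOCAB = {tok: i for i, tok in enumerate(SPECIAL_TOKENS + AA_TOKENS)}

def tokenize(seq: str):
    # One token per amino acid, followed by a [SEP] hook for downstream tasks.
    return [VOCAB[aa] for aa in seq] + [VOCAB["[SEP]"]]

print(tokenize("ACDE"))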

Results

Supervised Downstream Tasks

[Figure: benchmark results on supervised downstream tasks]

Supervised DMS Fitness Score Prediction (25 Samples)

[Figure: supervised DMS fitness score prediction results (25 samples)]

How to Use

Build Downstream Models Using ModelGenerator

For more information, see the ModelGenerator documentation.

mgen fit --model SequenceClassification --model.backbone aido_protein_rag_16b --data SequenceClassificationDataModule --data.path <hf_or_local_path_to_your_dataset>
mgen test --model SequenceClassification --model.backbone aido_protein_rag_16b --data SequenceClassificationDataModule --data.path <hf_or_local_path_to_your_dataset>

Use Directly in Python

Embedding

import random
import numpy as np
import torch
from modelgenerator.tasks import Embed
model = Embed.from_config({"model.backbone": "aido_protein_rag_16b"}).eval()
model.backbone.max_length = 12800
restypes = 'ARNDCQEGHILKMFPSTWYV'
data = {
    # query sequence (length 50)
    'sequences': [''.join(random.choice(restypes) for _ in range(50))],
    # MSA: 25 homologous sequences aligned to the query ('-' marks gaps)
    'msa': [[''.join(random.choice(restypes + '-') for _ in range(50)) for _ in range(25)]],
    # per-residue structure embeddings, shape (batch=1, L=50, dim=384)
    'str_emb': np.random.normal(size=(1, 50, 384)),
}
transformed_batch = model.transform(data)
with torch.no_grad():
    embedding = model(transformed_batch)

print(embedding.shape)

Sequence Level Classification

import random
import numpy as np
import torch
from modelgenerator.tasks import SequenceClassification
model = SequenceClassification.from_config({"model.backbone": "aido_protein_rag_16b", "model.n_classes": 2}).eval()
model.backbone.max_length = 12800
restypes = 'ARNDCQEGHILKMFPSTWYV'
data = {
    # query sequence (length 50)
    'sequences': [''.join(random.choice(restypes) for _ in range(50))],
    # MSA: 25 homologous sequences aligned to the query ('-' marks gaps)
    'msa': [[''.join(random.choice(restypes + '-') for _ in range(50)) for _ in range(25)]],
    # per-residue structure embeddings, shape (batch=1, L=50, dim=384)
    'str_emb': np.random.normal(size=(1, 50, 384)),
}
transformed_batch = model.transform(data)
with torch.no_grad():
    logits = model(transformed_batch)

print(logits)
print(torch.argmax(logits, dim=-1))

Token Level Classification

import random
import numpy as np
import torch
from modelgenerator.tasks import TokenClassification
model = TokenClassification.from_config({"model.backbone": "aido_protein_rag_16b", "model.n_classes": 3}).eval()
model.backbone.max_length = 12800
restypes = 'ARNDCQEGHILKMFPSTWYV'
data = {
    # query sequence (length 50)
    'sequences': [''.join(random.choice(restypes) for _ in range(50))],
    # MSA: 25 homologous sequences aligned to the query ('-' marks gaps)
    'msa': [[''.join(random.choice(restypes + '-') for _ in range(50)) for _ in range(25)]],
    # per-residue structure embeddings, shape (batch=1, L=50, dim=384)
    'str_emb': np.random.normal(size=(1, 50, 384)),
}
transformed_batch = model.transform(data)
with torch.no_grad():
    logits = model(transformed_batch)

print(logits)
print(torch.argmax(logits, dim=-1))

Regression

import random
import numpy as np
import torch
from modelgenerator.tasks import SequenceRegression
model = SequenceRegression.from_config({"model.backbone": "aido_protein_rag_16b"}).eval()
model.backbone.max_length = 12800
restypes = 'ARNDCQEGHILKMFPSTWYV'
data = {
    # query sequence (length 50)
    'sequences': [''.join(random.choice(restypes) for _ in range(50))],
    # MSA: 25 homologous sequences aligned to the query ('-' marks gaps)
    'msa': [[''.join(random.choice(restypes + '-') for _ in range(50)) for _ in range(25)]],
    # per-residue structure embeddings, shape (batch=1, L=50, dim=384)
    'str_emb': np.random.normal(size=(1, 50, 384)),
}
transformed_batch = model.transform(data)
with torch.no_grad():
    logits = model(transformed_batch)

print(logits.shape)

Citation

Please cite AIDO.Protein-RAG-16B using the following BibTeX entries:

@inproceedings{sun_mixture_2024,
    title = {Mixture of Experts Enable Efficient and Effective Protein Understanding and Design},
    url = {https://www.biorxiv.org/content/10.1101/2024.11.29.625425v1},
    doi = {10.1101/2024.11.29.625425},
    publisher = {bioRxiv},
    author = {Sun, Ning and Zou, Shuxian and Tao, Tianhua and Mahbub, Sazan and Li, Dian and Zhuang, Yonghao and Wang, Hongyi and Cheng, Xingyi and Song, Le and Xing, Eric P.},
    year = {2024},
    booktitle={NeurIPS 2024 Workshop on AI for New Drug Modalities},
}

@inproceedings{Li2024.12.02.626519,
    title = {Retrieval Augmented Protein Language Models for Protein Structure Prediction},
    url = {https://www.biorxiv.org/content/10.1101/2024.12.02.626519v1},
    doi = {10.1101/2024.12.02.626519},
    publisher = {bioRxiv},
    author = {Li, Pan and Cheng, Xingyi and Song, Le and Xing, Eric},
    year = {2024},
    booktitle={NeurIPS 2024 Workshop on Machine Learning in Structural Biology},
}