Model Card for Mistral-DNA-v1-138M-yeast (Mistral for DNA)

The Mistral-DNA-v1-138M-yeast Large Language Model (LLM) is a pretrained generative DNA language model with 17.31M parameters × 8 experts = 138.5M parameters. It is derived from the Mistral-7B-v0.1 model, simplified for DNA by reducing the number of layers and the hidden size. The model was pretrained on around 1,000 yeast genomes using 10 kb DNA sequences.
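
As a quick sanity check on the model size, the parameter count can be recomputed after loading the model (a minimal sketch, reusing the repository name from the loading example below):

from transformers import AutoModel

# Load the model as in the loading example below.
model = AutoModel.from_pretrained("RaphaelMourad/Mistral-DNA-v1-138M-yeast", trust_remote_code=True)

# Total parameter count; this should come out at roughly 138M.
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.1f}M parameters")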

The yeast genomes are from: https://www.nature.com/articles/s41586-018-0030-5

For full details of this model, please read our GitHub repository.

Model Architecture

Like Mistral-7B-v0.1, it is a transformer model with the following architecture choices (a quick config check is sketched after the list):

  • Grouped-Query Attention
  • Sliding-Window Attention
  • Byte-fallback BPE tokenizer
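
These choices can be checked from the model configuration. This is a minimal sketch; the attribute names below follow the standard Mistral/Mixtral configs in Transformers and may differ in this repository's custom config, hence the getattr fallbacks:

from transformers import AutoConfig

config = AutoConfig.from_pretrained("RaphaelMourad/Mistral-DNA-v1-138M-yeast", trust_remote_code=True)

# Grouped-query attention: fewer key/value heads than query heads.
print(getattr(config, "num_attention_heads", None), getattr(config, "num_key_value_heads", None))
# Sliding-window attention: size of the local attention window.
print(getattr(config, "sliding_window", None))
# Mixture of experts: number of experts per layer (assumed attribute name).
print(getattr(config, "num_local_experts", None))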

Load the model from Hugging Face:

import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("RaphaelMourad/Mistral-DNA-v1-138M-yeast", trust_remote_code=True)  # same tokenizer as DNABERT-2
model = AutoModel.from_pretrained("RaphaelMourad/Mistral-DNA-v1-138M-yeast", trust_remote_code=True)

Calculate the embedding of a DNA sequence:

dna = "TGATGATTGGCGCGGCTAGGATCGGCT"
inputs = tokenizer(dna, return_tensors = 'pt')["input_ids"]
hidden_states = model(inputs)[0] # [1, sequence_length, 256]

# embedding with max pooling
embedding_max = torch.max(hidden_states[0], dim=0)[0]
print(embedding_max.shape) # expect to be 256
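
Mean pooling over the token dimension is a common alternative to max pooling; a minimal sketch continuing from the variables above:

# embedding with mean pooling (alternative to max pooling)
embedding_mean = torch.mean(hidden_states[0], dim=0)
print(embedding_mean.shape)  # expected: torch.Size([256])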

Troubleshooting

Ensure you are using a stable version of Transformers (4.34.0 or newer).
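
A minimal way to check the installed version:

import transformers
print(transformers.__version__)  # should be 4.34.0 or newer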

Notice

Mistral-DNA-v1-138M-yeast is a pretrained base model for DNA.

Contact

Raphaël Mourad. [email protected]
