ICMA version of Mistral-7B-Instruct-v0.2 for the molecule captioning task (Mol2Cap), from the paper "Large Language Models are In-Context Molecule Learners".

Notice: The input should contain 2 context examples, and the cutoff length should be set to 2048 to ensure the best performance.

Paper Link: https://arxiv.org/abs/2403.04197
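
Below is a minimal usage sketch with the `transformers` library. The prompt template and the example molecule/caption pairs are placeholders, not the exact format used by ICMA; adapt them to the prompt format described in the paper.

```python
# Hypothetical usage sketch: the prompt template below is an assumption,
# not the exact ICMA format. Replace the placeholder context examples
# with real (molecule, caption) pairs in the expected template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "phenixace/ICMA-Mistral-7B-Instruct-v0.2-M2C"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Two in-context examples (placeholders) followed by the query molecule.
prompt = (
    "Molecule: CCO\n"
    "Caption: Ethanol is a primary alcohol ...\n\n"
    "Molecule: CC(=O)O\n"
    "Caption: Acetic acid is a simple carboxylic acid ...\n\n"
    "Molecule: c1ccccc1\n"
    "Caption:"
)

# Truncate to the recommended cutoff length of 2048 tokens.
inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=2048).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated caption tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```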
