Liquid AI
Try LFM • Documentation • LEAP

LFM2-2.6B-Exp

LFM2-2.6B-Exp is an experimental checkpoint built on LFM2-2.6B using pure reinforcement learning.

Trained specifically on instruction following, knowledge, and math, it delivers strong performance compared to other 3B-class models. Notably, its IFBench score surpasses that of DeepSeek R1-0528, a model 263 times larger.


Find more information about LFM2 in our blog post.

📄 Model details

Due to their small size, we recommend fine-tuning LFM2 models on narrow use cases to maximize performance. They are particularly suited for agentic tasks, data extraction, RAG, creative writing, and multi-turn conversations. However, we do not recommend using them for tasks that are knowledge-intensive or require programming skills.

| Property | LFM2-350M | LFM2-700M | LFM2-1.2B | LFM2-2.6B |
|---|---|---|---|---|
| Parameters | 354,483,968 | 742,489,344 | 1,170,340,608 | 2,569,272,320 |
| Layers | 16 (10 conv + 6 attn) | 16 (10 conv + 6 attn) | 16 (10 conv + 6 attn) | 30 (22 conv + 8 attn) |
| Context length | 32,768 tokens | 32,768 tokens | 32,768 tokens | 32,768 tokens |
| Vocabulary size | 65,536 | 65,536 | 65,536 | 65,536 |
| Precision | bfloat16 | bfloat16 | bfloat16 | bfloat16 |
| Training budget | 10 trillion tokens | 10 trillion tokens | 10 trillion tokens | 10 trillion tokens |
| License | LFM Open License v1.0 | LFM Open License v1.0 | LFM Open License v1.0 | LFM Open License v1.0 |

Supported languages: English, Arabic, Chinese, French, German, Japanese, Korean, and Spanish.

Generation parameters: We recommend the following sampling settings (a minimal usage sketch follows the list):

  • temperature=0.3
  • min_p=0.15
  • repetition_penalty=1.05
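
A minimal sketch of these settings with Hugging Face transformers, assuming the PyTorch checkpoint is available as LiquidAI/LFM2-2.6B-Exp (this repo id is an assumption; adjust it to the checkpoint you actually use):

from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id for the non-ONNX checkpoint; swap in your own if it differs.
model_id = "LiquidAI/LFM2-2.6B-Exp"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="bfloat16")

messages = [{"role": "user", "content": "What is C. elegans?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

output = model.generate(
    input_ids,
    do_sample=True,            # enable sampling so temperature/min_p take effect
    temperature=0.3,
    min_p=0.15,
    repetition_penalty=1.05,
    max_new_tokens=256,
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))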

Chat template: LFM2 uses a ChatML-like chat template as follows:

<|startoftext|><|im_start|>system
You are a helpful assistant trained by Liquid AI.<|im_end|>
<|im_start|>user
What is C. elegans?<|im_end|>
<|im_start|>assistant
It's a tiny nematode that lives in temperate soil environments.<|im_end|>

You can automatically apply it using the dedicated .apply_chat_template() function from Hugging Face transformers.
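
A minimal sketch of applying the template with the tokenizer from this repository; the system and user messages are the ones from the example above:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("onnx-community/LFM2-2.6B-Exp-ONNX")
messages = [
    {"role": "system", "content": "You are a helpful assistant trained by Liquid AI."},
    {"role": "user", "content": "What is C. elegans?"},
]
# add_generation_prompt=True appends the trailing <|im_start|>assistant turn
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # reproduces the formatted conversation shown above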

Tool use: It consists of four main steps:

  1. Function definition: LFM2 takes JSON function definitions as input (JSON objects between <|tool_list_start|> and <|tool_list_end|> special tokens), usually in the system prompt.
  2. Function call: LFM2 writes Pythonic function calls (a Python list between <|tool_call_start|> and <|tool_call_end|> special tokens) as the assistant's answer.
  3. Function execution: The function call is executed and the result is returned (a string between <|tool_response_start|> and <|tool_response_end|> special tokens) as a "tool" role message.
  4. Final answer: LFM2 interprets the outcome of the function call to address the original user prompt in plain text.

Here is a simple example of a conversation using tool use:

<|startoftext|><|im_start|>system
List of tools: <|tool_list_start|>[{"name": "get_candidate_status", "description": "Retrieves the current status of a candidate in the recruitment process", "parameters": {"type": "object", "properties": {"candidate_id": {"type": "string", "description": "Unique identifier for the candidate"}}, "required": ["candidate_id"]}}]<|tool_list_end|><|im_end|>
<|im_start|>user
What is the current status of candidate ID 12345?<|im_end|>
<|im_start|>assistant
<|tool_call_start|>[get_candidate_status(candidate_id="12345")]<|tool_call_end|>Checking the current status of candidate ID 12345.<|im_end|>
<|im_start|>tool
<|tool_response_start|>[{"candidate_id": "12345", "status": "Interview Scheduled", "position": "Clinical Research Associate", "date": "2023-11-20"}]<|tool_response_end|><|im_end|>
<|im_start|>assistant
The candidate with ID 12345 is currently in the "Interview Scheduled" stage for the position of Clinical Research Associate, with an interview date set for 2023-11-20.<|im_end|>

You can pass tools directly, either as JSON schemas or as Python functions, to .apply_chat_template() to format the system prompt automatically, as in the sketch below.
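
A minimal sketch, reusing the get_candidate_status tool from the example above as a plain Python function (transformers builds the JSON schema from the signature and a Google-style docstring):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("onnx-community/LFM2-2.6B-Exp-ONNX")

def get_candidate_status(candidate_id: str):
    """
    Retrieves the current status of a candidate in the recruitment process.

    Args:
        candidate_id: Unique identifier for the candidate
    """
    return {"candidate_id": candidate_id, "status": "Interview Scheduled"}

messages = [{"role": "user", "content": "What is the current status of candidate ID 12345?"}]
# The tool's JSON schema is rendered between <|tool_list_start|> and <|tool_list_end|>
prompt = tokenizer.apply_chat_template(
    messages, tools=[get_candidate_status], add_generation_prompt=True, tokenize=False
)
print(prompt)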

Architecture: Hybrid model with multiplicative gates and short convolutions, interleaving double-gated short-range LIV convolution blocks with grouped query attention (GQA) blocks. LFM2-2.6B uses 22 convolution and 8 attention blocks; the smaller LFM2 models use 10 and 6 (see the table above).
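
A quick way to inspect this layout is the layer_types config field, which the ONNX generation loop below also relies on:

from transformers import AutoConfig

config = AutoConfig.from_pretrained("onnx-community/LFM2-2.6B-Exp-ONNX")
# For LFM2-2.6B the table above lists 30 layers (22 conv + 8 attention),
# exposed here as a list of "conv" / "full_attention" entries.
print(config.num_hidden_layers, config.layer_types)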

Pre-training mixture: Approximately 75% English, 20% multilingual, and 5% code data sourced from the web and licensed materials.

Training approach:

  • Very large-scale SFT on 50% downstream tasks, 50% general domains
  • Custom DPO with length normalization and semi-online datasets (a generic length-normalized objective is sketched after this list)
  • Iterative model merging
  • Reinforcement learning with verifiable rewards
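
The exact objective is not published here; as a rough, generic sketch only (not Liquid AI's formulation), a length-normalized DPO-style loss divides each policy-to-reference log-ratio by the response length before applying the usual logistic loss:

\mathcal{L}_{\text{LN-DPO}}(\theta) = -\,\mathbb{E}_{(x,\,y_w,\,y_l)}\!\left[\log\sigma\!\left(\frac{\beta}{|y_w|}\log\frac{\pi_\theta(y_w\mid x)}{\pi_{\text{ref}}(y_w\mid x)} \;-\; \frac{\beta}{|y_l|}\log\frac{\pi_\theta(y_l\mid x)}{\pi_{\text{ref}}(y_l\mid x)}\right)\right]

where y_w and y_l are the chosen and rejected responses, |y| their token lengths, π_ref the reference model, and β the usual DPO temperature.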

πŸƒ How to run LFM2

1. ONNXRuntime

from transformers import AutoConfig, AutoTokenizer
import onnxruntime
import numpy as np
from huggingface_hub import hf_hub_download

# 1. Load config, processor, and model
model_id = "onnx-community/LFM2-2.6B-Exp-ONNX"
config = AutoConfig.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
filename = "model_q4.onnx" # Options: "model.onnx", "model_fp16.onnx", "model_q4.onnx", "model_q4f16.onnx"
model_path = hf_hub_download(repo_id=model_id, filename=f"onnx/{filename}") # Download the graph
hf_hub_download(repo_id=model_id, filename=f"onnx/{filename}_data") # Download the weights
session = onnxruntime.InferenceSession(model_path)

## Set config values
num_key_value_heads = config.num_key_value_heads
head_dim = config.hidden_size // config.num_attention_heads
num_hidden_layers = config.num_hidden_layers
eos_token_id = config.eos_token_id
hidden_size = config.hidden_size
conv_L_cache = config.conv_L_cache
layer_types = config.layer_types

# 2. Prepare inputs
prompt = "What is C. elegans?"
messages = [{"role": "user", "content": prompt}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="np")
input_ids = inputs['input_ids']
attention_mask = inputs['attention_mask']
batch_size = input_ids.shape[0]
past_cache_values = {}
for i in range(num_hidden_layers):
  if layer_types[i] == 'full_attention':
    for kv in ('key', 'value'):
      past_cache_values[f'past_key_values.{i}.{kv}'] = np.zeros([batch_size, num_key_value_heads, 0, head_dim], dtype=np.float32)
  elif layer_types[i] == 'conv':
    past_cache_values[f'past_conv.{i}'] = np.zeros([batch_size, hidden_size, conv_L_cache], dtype=np.float32)
  else:
    raise ValueError(f"Unsupported layer type: {layer_types[i]}")

# 3. Generation loop
max_new_tokens = 1024
generated_tokens = np.array([[]], dtype=np.int64)
for i in range(max_new_tokens):
  logits, *present_cache_values = session.run(None, dict(
      input_ids=input_ids,
      attention_mask=attention_mask,
      **past_cache_values,
  ))

  ## Update values for next generation loop
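  ## Greedy decoding: pick the highest-probability next token (the sampling settings recommended above are not applied in this minimal loop)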
  input_ids = logits[:, -1].argmax(-1, keepdims=True)
  attention_mask = np.concatenate([attention_mask, np.ones_like(input_ids, dtype=np.int64)], axis=-1)
  for j, key in enumerate(past_cache_values):
    past_cache_values[key] = present_cache_values[j]
  generated_tokens = np.concatenate([generated_tokens, input_ids], axis=-1)
  if (input_ids == eos_token_id).all():
    break

  ## (Optional) Streaming
  print(tokenizer.decode(input_ids[0]), end='', flush=True)
print()

# 4. Output result
print(tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)[0])

🔧 How to fine-tune LFM2

We recommend fine-tuning LFM2 models on your use cases to maximize performance.

| Notebook | Description | Link |
|---|---|---|
| SFT (Unsloth) | Supervised Fine-Tuning (SFT) notebook with a LoRA adapter using Unsloth. | Colab link |
| SFT (TRL) | Supervised Fine-Tuning (SFT) notebook with a LoRA adapter using TRL. | Colab link |
| DPO (TRL) | Preference alignment with Direct Preference Optimization (DPO) using TRL. | Colab link |
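
A minimal sketch of the SFT (TRL) recipe with a LoRA adapter; the base repo id LiquidAI/LFM2-2.6B-Exp and the trl-lib/Capybara dataset are illustrative placeholders for your own checkpoint and data:

from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Placeholder dataset in conversational format; replace with your own data.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="LiquidAI/LFM2-2.6B-Exp",  # assumed repo id for the base checkpoint
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="lfm2-2.6b-exp-sft",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,
        learning_rate=2e-5,
    ),
    peft_config=LoraConfig(r=16, lora_alpha=32, target_modules="all-linear"),
)
trainer.train()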

📬 Contact

If you are interested in custom solutions with edge deployment, please contact our sales team.

Citation

@article{liquidai2025lfm2,
 title={LFM2 Technical Report},
 author={Liquid AI},
 journal={arXiv preprint arXiv:2511.23404},
 year={2025}
}