SmolSynformer: SmolLM2 as Syntax-aware transformer

This is an adapter (PEFT) version with slightly lower capability than the full SFT smolSynformer model.

Inference with transformers

from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

test_model = "Bachstelze/smolSynformerPeft"
# Loading the adapter repository resolves the SmolLM2 base weights and applies the PEFT adapter (requires the peft package)
model = AutoModelForCausalLM.from_pretrained(test_model)
tokenizer = AutoTokenizer.from_pretrained(test_model)

prompt_pipeline = pipeline("text-generation", model=model, tokenizer=tokenizer, max_new_tokens=250)
print(prompt_pipeline("Why is syntax relevant for language modeling and instruction following?\n"))
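Alternatively, the adapter can be applied explicitly with the peft library on top of the SmolLM2 base checkpoint. The following is a minimal sketch; the base model name HuggingFaceTB/SmolLM2-135M-Instruct is an assumption, so check base_model_name_or_path in the adapter config for the exact checkpoint.

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Assumption: SmolLM2-135M-Instruct is the base checkpoint; verify it in adapter_config.json
base_model = AutoModelForCausalLM.from_pretrained("HuggingFaceTB/SmolLM2-135M-Instruct")
tokenizer = AutoTokenizer.from_pretrained("Bachstelze/smolSynformerPeft")

# Apply the PEFT adapter weights on top of the base model
model = PeftModel.from_pretrained(base_model, "Bachstelze/smolSynformerPeft")

prompt = "Why is syntax relevant for language modeling and instruction following?\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=250)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))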

Example answer:

Syntax is relevant for language modeling and instruction following because it is the language that is being modeled. The language is the language that is being taught. The language is the language that is being used in the instruction. Therefore, the language that is being taught is the language that is being modeled.
