# SmolSynformer: SmolLM2 as a syntax-aware transformer
This is an adapter (PEFT) version with slightly lower capabilities than the fully SFT-trained smolSynformer model.
## Inference with transformers
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

test_model = "Bachstelze/smolSynformerPeft"

# Load the adapter model and its tokenizer from the Hugging Face Hub
model = AutoModelForCausalLM.from_pretrained(test_model)
tokenizer = AutoTokenizer.from_pretrained(test_model)

# Build a text-generation pipeline on top of the loaded model
prompt_pipeline = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=250,
)

print(prompt_pipeline("Why is syntax relevant for language modeling and instruction following?\n"))
```
Example answer:

> Syntax is relevant for language modeling and instruction following because it is the language that is being modeled. The language is the language that is being taught. The language is the language that is being used in the instruction. Therefore, the language that is being taught is the language that is being modeled.
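Alternatively, the adapter can be attached to the base model explicitly with the `peft` library. This is a minimal sketch, assuming the repository hosts a standard PEFT adapter for HuggingFaceTB/SmolLM2-135M:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base_id = "HuggingFaceTB/SmolLM2-135M"        # base model listed below
adapter_id = "Bachstelze/smolSynformerPeft"   # this adapter repository

# Load the base SmolLM2 model and attach the smolSynformer adapter on top of it
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)
tokenizer = AutoTokenizer.from_pretrained(adapter_id)

# Generate a reply for the same example prompt
prompt = "Why is syntax relevant for language modeling and instruction following?\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=250)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```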
Base model: HuggingFaceTB/SmolLM2-135M