# llama-400m-lora-finetuned

Tested on the SmolLM corpus.
## Model Details
This model is a LoRA-finetuned version of `YongganFu/Llama-400M-12L`.
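As background on what LoRA finetuning means here: the base model's weights stay frozen, and training only learns a low-rank correction `(alpha / r) * B @ A` that is added to each adapted weight matrix. The sketch below is illustrative only (the dimensions, rank, and variable names are made up, not taken from this model's actual config):

```python
import numpy as np

# LoRA: adapted forward pass is W @ x + (alpha / r) * B @ (A @ x),
# where A (r x d_in) and B (d_out x r) are the trainable low-rank factors.
rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 8, 2, 4  # illustrative sizes, not this model's

W = rng.standard_normal((d_out, d_in))  # frozen base weight
A = rng.standard_normal((r, d_in))      # low-rank factor, randomly initialized
B = np.zeros((d_out, r))                # initialized to zero, so training starts at W
x = rng.standard_normal(d_in)

y = W @ x + (alpha / r) * (B @ (A @ x))
assert np.allclose(y, W @ x)  # with B = 0, the adapter is a no-op
```

Because `B` starts at zero, the adapted model initially behaves exactly like the base model; only the small `A`/`B` factors are updated during finetuning.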
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "lxaw/llama-400m-lora-finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Example usage: greedy generation, capped at 50 tokens
input_text = "What is the capital of France?"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(inputs.input_ids, max_length=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Example Output
**Input:** What is the capital of France?

**Output:** What is the capital of France? The capital of France is Paris, which is located in the southwestern part of the country. It is also known as the "Capital of Culture," which means that it is the most important cultural

(The output echoes the prompt and is cut off mid-sentence by the `max_length=50` limit in the example above.)