# duyhv1411/Llama-1.1B-qlora-ft

This model is a QLoRA fine-tune of TinyLlama/TinyLlama-1.1B-Chat-v1.0, trained to improve its performance on general-domain chat.

## ⚡ Quantized GGUF

## How to use

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

prompt = """<|user|>
Hello, how are you?</s>
<|assistant|>
"""

# Run our instruction-tuned model
pipe = pipeline(
    task="text-generation",
    model="duyhv1411/Llama-1.1B-qlora-ft",
    return_full_text=False,
)
print(pipe(prompt)[0]["generated_text"])
```
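The prompt above follows the Zephyr-style chat format inherited from TinyLlama-1.1B-Chat: each turn is wrapped in `<|role|>` tags and closed with `</s>`, and a trailing `<|assistant|>` tag cues the model to respond. As a minimal sketch, the string can be built from a message list like this (in practice, `tokenizer.apply_chat_template` from `transformers` does the same job; the `build_prompt` helper here is illustrative, not part of the model's API):

```python
def build_prompt(messages):
    """Format a list of {"role", "content"} dicts into the
    Zephyr-style prompt string expected by the model."""
    parts = []
    for m in messages:
        # Each turn: role tag, content, end-of-sequence marker.
        parts.append(f"<|{m['role']}|>\n{m['content']}</s>\n")
    # Trailing assistant tag cues the model to generate its reply.
    parts.append("<|assistant|>\n")
    return "".join(parts)

prompt = build_prompt([{"role": "user", "content": "Hello, how are you?"}])
print(prompt)
# Produces the same string as the hand-written prompt above.
```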
## Model details

- Format: Safetensors
- Model size: 1.1B params
- Tensor type: F32
