---
language: en
license: apache-2.0
tags:
- causal-lm
- transformers
- llama
- reflex-ai
---
[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)
# QuantFactory/AMD-Llama-350M-Upgraded-GGUF
This is a quantized version of [reflex-ai/AMD-Llama-350M-Upgraded](https://huggingface.co/reflex-ai/AMD-Llama-350M-Upgraded), created using llama.cpp.
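The GGUF files in this repository can be loaded with llama.cpp or its Python bindings. Below is a minimal sketch using the `llama-cpp-python` package; the quant filename pattern is an assumption, so check the repository's file list for the exact GGUF names.

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python huggingface_hub).
from llama_cpp import Llama

# Download a GGUF file from this repo and load it.
# NOTE: the filename pattern below is an assumption; pick an actual file from the repo.
llm = Llama.from_pretrained(
    repo_id="QuantFactory/AMD-Llama-350M-Upgraded-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quant level
    n_ctx=2048,               # context window
)

output = llm(
    "Once upon a time in a land far away,",
    max_tokens=100,
    temperature=0.8,
)
print(output["choices"][0]["text"])
```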
# Original Model Card
# AMD Llama 350M Upgraded
## Model Description
The **AMD Llama 350M Upgraded** is a transformer-based causal language model built on the Llama architecture, designed to generate human-like text. This model has been upgraded from the original AMD Llama 135M model to provide enhanced performance with an increased parameter count of 332 million. It is suitable for various natural language processing tasks, including text generation, completion, and conversational applications.
## Model Details
- **Model Type**: Causal Language Model
- **Architecture**: Llama
- **Number of Parameters**: 332 million
- **Input Size**: Variable-length input sequences
- **Output Size**: Variable-length output sequences
## Usage
The AMD Llama 350M Upgraded model can be used with the `transformers` library. Here is a sample code snippet to get started:
```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

# Load the tokenizer and model
model_name = "reflex-ai/AMD-Llama-350M-Upgraded"
tokenizer = LlamaTokenizer.from_pretrained(model_name)
model = LlamaForCausalLM.from_pretrained(model_name)

# Llama tokenizers often ship without a pad token; fall back to the EOS token
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

# Move the model to GPU if available and set it to evaluation mode
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
model.eval()

# Function to generate text
def generate_text(prompt, max_length=50):
    inputs = tokenizer(prompt, return_tensors="pt", padding=True, truncation=True).to(device)
    with torch.no_grad():
        outputs = model.generate(
            inputs.input_ids,
            attention_mask=inputs.attention_mask,
            max_length=max_length,
            num_return_sequences=1,
        )
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# Example usage
prompt = "Once upon a time in a land far away,"
generated_output = generate_text(prompt, max_length=100)
print(generated_output)
```
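For quick experimentation, the original (non-quantized) model can also be loaded through the `transformers` text-generation pipeline. The snippet below is a minimal sketch that relies on the default generation settings.

```python
from transformers import pipeline

# High-level alternative: the pipeline handles tokenization and decoding internally
generator = pipeline("text-generation", model="reflex-ai/AMD-Llama-350M-Upgraded")
result = generator("Once upon a time in a land far away,", max_length=100, num_return_sequences=1)
print(result[0]["generated_text"])
```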