---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- role-play
- fine-tuned
- qwen2
base_model: Qwen/Qwen2-1.5B
library_name: transformers
---
[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)
# QuantFactory/oxy-1-micro-GGUF
This is a quantized version of [oxyapi/oxy-1-micro](https://huggingface.co/oxyapi/oxy-1-micro) created using llama.cpp.
# Original Model Card
![Oxy 1 Micro](https://cdn-uploads.huggingface.co/production/uploads/63c2d8376e6561b339d998b9/fX1qGkR-1BC1EV_sRkO_9.png)
## Introduction
**Oxy 1 Micro** is a fine-tuned version of the [Qwen2-1.5B](https://huggingface.co/Qwen/Qwen2-1.5B) language model, specialized for **role-play** scenarios. Despite its small size (1.5B parameters), it is designed to generate engaging dialogue and interactive storytelling.
Developed by **Oxygen (oxyapi)**, with contributions from **TornadoSoftwares**, Oxy 1 Micro aims to provide an accessible and efficient language model for creative and immersive role-play experiences.
## Model Details
- **Model Name**: Oxy 1 Micro
- **Model ID**: [oxyapi/oxy-1-micro](https://huggingface.co/oxyapi/oxy-1-micro)
- **Base Model**: [Qwen/Qwen2-1.5B](https://huggingface.co/Qwen/Qwen2-1.5B)
- **Model Type**: Chat Completions
- **License**: Apache-2.0
- **Language**: English
- **Tokenizer**: [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct)
- **Max Input Tokens**: 32,768
- **Max Output Tokens**: 8,192
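Given the stated context limits, requested generation lengths can be clamped client-side before calling the model. A minimal sketch, assuming the limits above; the helper name and the use of a pre-computed prompt token count (rather than the real tokenizer) are illustrative:

```python
# Stated limits from the model card.
MAX_INPUT_TOKENS = 32_768
MAX_OUTPUT_TOKENS = 8_192

def clamp_max_new_tokens(prompt_token_count: int, requested: int) -> int:
    """Reject over-long prompts and clamp the requested generation
    length to the model's stated output limit."""
    if prompt_token_count > MAX_INPUT_TOKENS:
        raise ValueError(
            f"prompt has {prompt_token_count} tokens, limit is {MAX_INPUT_TOKENS}"
        )
    return min(requested, MAX_OUTPUT_TOKENS)

print(clamp_max_new_tokens(1_000, 10_000))  # clamps 10,000 down to 8192
```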
### Features
- **Fine-tuned for Role-Play**: Specially trained to generate dynamic and contextually rich role-play dialogues.
- **Efficient**: Compact model size allows for faster inference and reduced computational resources.
- **Parameter Support**:
- `temperature`
- `top_p`
- `top_k`
- `frequency_penalty`
- `presence_penalty`
- `max_tokens`
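Not all of these API-style names exist verbatim in the Transformers `generate` API. One plausible mapping is sketched below; the specific values, and the substitution of `repetition_penalty` for `frequency_penalty`/`presence_penalty`, are assumptions rather than recommendations from the model authors:

```python
# API-style sampling parameters mapped onto Transformers `generate` keywords.
# All values below are illustrative assumptions, not tuned recommendations.
sampling_params = {
    "do_sample": True,          # enable sampling so temperature/top_p/top_k take effect
    "temperature": 0.8,
    "top_p": 0.95,
    "top_k": 50,
    "repetition_penalty": 1.1,  # nearest Transformers analogue of frequency/presence penalty
    "max_new_tokens": 256,      # plays the role of `max_tokens`
}

# Usage: outputs = model.generate(**inputs, **sampling_params)
```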
### Metadata
- **Owned by**: Oxygen (oxyapi)
- **Contributors**: TornadoSoftwares
- **Description**: A Qwen2-1.5B fine-tune for role-play; compact yet capable.
## Usage
To use Oxy 1 Micro for text generation in role-play scenarios, load the model with the Hugging Face Transformers library:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("oxyapi/oxy-1-micro")
model = AutoModelForCausalLM.from_pretrained("oxyapi/oxy-1-micro")

prompt = "You are a wise old wizard in a mystical land. A traveler approaches you seeking advice."
inputs = tokenizer(prompt, return_tensors="pt")

# max_new_tokens bounds only the generated continuation, unlike max_length,
# which also counts the prompt tokens.
outputs = model.generate(**inputs, max_new_tokens=500)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
## Performance
Performance benchmarks for Oxy 1 Micro are not available at this time. Future updates may include detailed evaluations on relevant datasets.
## License
This model is licensed under the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0).
## Citation
If you find Oxy 1 Micro useful in your research or applications, please cite it as:
```bibtex
@misc{oxy1micro2024,
  title={Oxy 1 Micro: A Fine-Tuned Qwen2-1.5B Model for Role-Play},
  author={Oxygen (oxyapi)},
  year={2024},
  howpublished={\url{https://huggingface.co/oxyapi/oxy-1-micro}},
}
```