---
pipeline_tag: text-generation
inference: false
library_name: llama.cpp
license: cc-by-nc-sa-4.0
license_name: creative-commons
license_link: https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en
language:
  - en
tags:
  - text-generation
  - artificial-intelligence
  - not-for-all-audiences
---

# Valerie v0.1 Model Card

## Overview

Valerie v0.1 is a custom language model created using llama.cpp (commit: 532c173) with a context length of 256 tokens, embedding length of 256, 8 heads, and 16 layers. This model was pretrained on a dataset consisting of female V's dialog from Cyberpunk 2077, extracted using the Voice Over Subtitle Map mod.

The ggml-valerie-v0.1-256x32-f32-LATEST.gguf release represents a single pass (one epoch) over all 51,443 samples and took approximately 3 hours to train.

## Model Information

| Model name              | Adam iteration | Model filename                           | Vocabulary size |
| ----------------------- | -------------- | ---------------------------------------- | --------------- |
| Valerie v0.1 Checkpoint | 950            | chk-valerie-v0.1-256x32-950.gguf         | 32,000          |
| Valerie v0.1 Model      | 1700           | ggml-valerie-v0.1-256x32-f32-LATEST.gguf | 32,000          |

## Files and versions

- ggml-vocab-mistral.gguf: Extracted Mistral 7B model vocabulary.
- ggml-valerie-v0.1-256x32-f32-950.gguf: The pretrained model checkpoint at Adam iteration 950.
- ggml-valerie-v0.1-256x32-f32-LATEST.gguf: The latest pretrained model checkpoint.
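
For reference, a vocabulary-only GGUF such as ggml-vocab-mistral.gguf can be extracted with the convert.py script that ships with llama.cpp; the source model path below is illustrative:

```sh
# Extract just the tokenizer/vocabulary from a local Mistral 7B download
# (the input path is illustrative; run from the llama.cpp repository root)
python convert.py /path/to/mistral-7b --vocab-only --outfile ggml-vocab-mistral.gguf
```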

## Settings

- Vocabulary size: 32,000
- Context length: 256 tokens
- Embedding length: 256
- Heads: 8
- Layers: 16
- Batch size: 32
- Seed: 1
- Checkpoint saved every 50 iterations
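
As a rough sketch, these settings map onto the flags of llama.cpp's train-text-from-scratch example, which is presumably how a model of this shape is produced. The training-data filename below is hypothetical, and flag names can change between llama.cpp revisions, so verify against the README of your checkout:

```sh
# Sketch: the settings above expressed as train-text-from-scratch flags.
# "valerie-dialog.txt" is a hypothetical name for the extracted dialogue file.
./train-text-from-scratch \
  --vocab-model ggml-vocab-mistral.gguf \
  --ctx 256 --embd 256 --head 8 --layer 16 \
  --train-data "valerie-dialog.txt" \
  --checkpoint-in  chk-valerie-v0.1-256x32-LATEST.gguf \
  --checkpoint-out chk-valerie-v0.1-256x32-ITERATION.gguf \
  --model-out ggml-valerie-v0.1-256x32-f32-ITERATION.gguf \
  --batch 32 --seed 1 --save-every 50
```

Pointing --checkpoint-in at a saved checkpoint such as chk-valerie-v0.1-256x32-950.gguf resumes training from that Adam iteration instead of starting over.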

## Usage

To use Valerie v0.1, follow these steps:

1. Clone the llama.cpp repository:

```sh
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
```

See the llama.cpp README.md for build instructions. You can build for plain CPU or with OpenBLAS; CUDA, ROCm, Vulkan, and other backends are also available.

Arch Linux example:

```sh
# CPU build using the OpenBLAS backend on Arch Linux
sudo pacman -S openblas openblas64
make LLAMA_OPENBLAS=1
```
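
Other backends follow the same pattern. For example, at the llama.cpp revision referenced above, a CUDA build was enabled with the LLAMA_CUBLAS make flag; check your checkout's README.md, since build flags have changed across revisions:

```sh
# GPU build using the cuBLAS (CUDA) backend; the flag name may differ in newer revisions
make LLAMA_CUBLAS=1
```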

2. Download the latest model:

```sh
wget "https://huggingface.co/teleprint-me/cyberpunk-valerie-v0.1/resolve/main/ggml-valerie-v0.1-256x32-f32-LATEST.gguf?download=true" \
  -O ggml-valerie-v0.1-256x32-f32-LATEST.gguf
```

This will download the latest available base model.
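
Alternatively, assuming the huggingface_hub CLI is installed, the same file can be fetched with:

```sh
# Alternative download via huggingface-cli (pip install huggingface_hub)
huggingface-cli download teleprint-me/cyberpunk-valerie-v0.1 \
  ggml-valerie-v0.1-256x32-f32-LATEST.gguf --local-dir .
```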

3. Perform inference with the latest model checkpoint (note that the model was trained with a 256-token context, so pass -c 256):

```sh
./main -m models/valerie/v0.1/ggml-valerie-v0.1-256x32-f32-LATEST.gguf --color -e -s 1 -c 256
```
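
For a quick smoke test you can also pass a prompt directly; the prompt text below is purely illustrative:

```sh
# Generate 128 tokens from an illustrative prompt
./main -m models/valerie/v0.1/ggml-valerie-v0.1-256x32-f32-LATEST.gguf \
  -c 256 -s 1 -n 128 --color -p "Hey, it's V."
```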

## Citations

When using Valerie v0.1 in your research, please remember to cite the following:

## Contributors

- Austin Berrio (teleprint-me): Created and trained Valerie v0.1 using llama.cpp and the referenced dataset.

## Community

Join the community of language model enthusiasts and researchers: share your knowledge, ask questions, and collaborate on projects building custom models with llama.cpp.

## License

Valerie v0.1 is released under the CC-BY-NC-SA-4.0 license. You are free to use, modify, and redistribute this model for non-commercial purposes, but you must provide attribution to the original authors and release any derived works under the same license.