---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TheBloke/Mistral-7B-Instruct-v0.1-GPTQ
model-index:
- name: jhana-mistral-GGUF
  results: []
---

# jhana-mistral-GGUF

This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.1-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GPTQ) trained specifically for generating guided meditations. Fine-tuning was performed with the QLoRA approach on the "jhana-guided-meditations-collection" dataset available on Hugging Face.
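
A minimal loading sketch with `transformers` and `peft` might look like the following. The adapter repository id is an assumption (adjust it to the actual path of this repo), and loading the GPTQ base model requires `optimum` and `auto-gptq` to be installed.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "TheBloke/Mistral-7B-Instruct-v0.1-GPTQ"
adapter_id = "jhana-mistral-GGUF"  # assumed adapter repo id; adjust to the actual path

# The tokenizer comes from the GPTQ base model (a LlamaTokenizer under the hood).
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Load the quantized base model and attach the fine-tuned LoRA adapter.
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)
```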

## Model description

The model uses the LlamaTokenizer and is quantized for efficient loading and execution. It is intended for generating mindful, contextually relevant meditation scripts, and this version has been optimized for better performance and lower resource utilization during inference.
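
A short generation sketch, continuing from the `model` and `tokenizer` loaded in the sketch above; the prompt and generation parameters are illustrative, and the prompt follows the Mistral-Instruct `[INST]` format.

```python
# `model` and `tokenizer` come from the loading sketch above.
prompt = "[INST] Write a short guided meditation focused on the breath. [/INST]"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=300,   # illustrative generation settings
    temperature=0.7,
    do_sample=True,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```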

## Intended uses & limitations

This model is intended for generating text related to guided meditations. It may not perform well on unrelated tasks or general-purpose language understanding due to its specialized training.

## Training and evaluation data

The model was trained on the "jhana-guided-meditations-collection" dataset, which consists of various guided meditation scripts. The data was preprocessed and tokenized using the LlamaTokenizer.
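
As an illustration, the preprocessing step could look roughly like this; the dataset repository path, the `text` column name, and the maximum sequence length are assumptions not documented in this card.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Repository paths below are assumptions; substitute the actual dataset namespace.
dataset = load_dataset("jhana-guided-meditations-collection")
tokenizer = AutoTokenizer.from_pretrained("TheBloke/Mistral-7B-Instruct-v0.1-GPTQ")

def tokenize(batch):
    # Truncate long meditation scripts to a fixed maximum length (assumed value).
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized_dataset = dataset.map(tokenize, batched=True)
```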

## Training procedure

### Training hyperparameters

- Learning Rate: 0.0002
- Batch Size: 8 for training, 8 for evaluation
- Optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- Scheduler: Cosine learning rate scheduler
- Training Steps: 250
- Mixed Precision Training: Native AMP (see the configuration sketch below)
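
These settings correspond roughly to the following QLoRA/SFT configuration sketch. The LoRA parameters, output directory, dataset path, column name, and sequence length are assumptions not stated in this card, and the `SFTTrainer` keyword arguments assume a `trl` release contemporary with the framework versions listed below.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from peft import LoraConfig
from trl import SFTTrainer

base_id = "TheBloke/Mistral-7B-Instruct-v0.1-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
dataset = load_dataset("jhana-guided-meditations-collection")  # assumed repo path

training_args = TrainingArguments(
    output_dir="jhana-mistral",          # hypothetical output directory
    learning_rate=2e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    lr_scheduler_type="cosine",
    max_steps=250,
    fp16=True,                           # mixed precision via native AMP
    optim="adamw_torch",                 # Adam with betas=(0.9, 0.999), eps=1e-8
)

peft_config = LoraConfig(                # LoRA values are assumptions, not from this card
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

trainer = SFTTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset["train"],      # assumes a "train" split
    peft_config=peft_config,
    tokenizer=tokenizer,
    dataset_text_field="text",           # assumed column name
    max_seq_length=1024,                 # assumed sequence length
)
trainer.train()
```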

### Training results

Training resulted in a model capable of generating coherent and contextually relevant meditation scripts, improving upon the base model's capabilities in this specific domain.

### Framework versions

- PEFT: 0.10.0
- Transformers: 4.40.0.dev0
- Pytorch: 2.2.2+cu121
- Datasets: 2.18.0
- Tokenizers: 0.15.2

## Quantization with llama.cpp

The model was quantized to enhance its efficiency and reduce its size, making it more suitable for deployment in various environments, including those with limited resources. The quantization process was performed using `llama.cpp`, following the steps outlined by Maxime Labonne in [Quantize Llama models with GGUF and llama.cpp](https://mlabonne.github.io/blog/posts/Quantize_Llama_2_models_using_ggml.html).

The process involved:
- Cloning the `llama.cpp` repository and installing its dependencies.
- Downloading the model to be quantized.
- Using the `llama.cpp/convert.py` script to convert the model to fp16, then quantizing it, which significantly reduces the model's size while retaining its generation quality (see the sketch below).
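
A rough sketch of those two steps, driven from Python. All file names and the Q4_K_M quantization preset are assumptions (the card does not state which preset was used); the `quantize` binary must be built first, and newer `llama.cpp` releases name it `llama-quantize`.

```python
import subprocess

# File names and the quantization preset below are illustrative assumptions.
model_dir = "jhana-mistral-merged"            # merged fp16 checkpoint directory
fp16_gguf = "jhana-mistral.fp16.gguf"
quant_gguf = "jhana-mistral.Q4_K_M.gguf"

# 1. Convert the Hugging Face checkpoint to an fp16 GGUF file.
subprocess.run(
    ["python", "llama.cpp/convert.py", model_dir,
     "--outtype", "f16", "--outfile", fp16_gguf],
    check=True,
)

# 2. Quantize the fp16 GGUF down to the target precision.
subprocess.run(
    ["./llama.cpp/quantize", fp16_gguf, quant_gguf, "Q4_K_M"],
    check=True,
)
```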

Quantization reduced the model's size from 13,813.02 MB to 4,892.99 MB, roughly a 65% reduction (about 2.8× smaller), improving loading and inference speed while preserving generation quality.
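
For completeness, a minimal sketch of running the resulting GGUF file with the `llama-cpp-python` bindings; the file name, context size, and generation parameters are illustrative.

```python
from llama_cpp import Llama

# The GGUF file name is illustrative; point it at the quantized file produced above.
llm = Llama(model_path="jhana-mistral.Q4_K_M.gguf", n_ctx=2048)

output = llm(
    "[INST] Guide me through a short body-scan meditation. [/INST]",
    max_tokens=300,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```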