Gemma Advanced V1 (obsolete)
Note: A much-improved version is available at jsgreenawalt/gemma-2-9B-it-advanced-v2.1
Experimental merge #1, attempting to combine some of the advanced Gemma fine-tunes
Quants are available here: https://huggingface.co/QuantFactory/gemma-advanced-v1-GGUF
Notes and observations:
- Recommended temperature 0.15 or lower; the model is more temperature-sensitive than the parent models
- Recommended quant is Q8_0; Q6* and lower quants lose more quality than expected
- The model writes coherently (at lower temperatures) and has a different writing style than the parent models
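The low-temperature recommendation can be illustrated with a quick softmax sketch: dividing the logits by a temperature of 0.15 makes the next-token distribution nearly greedy, which is roughly why a temperature-sensitive merge stays coherent only at low settings. The logits below are hypothetical, purely for illustration.

```python
import numpy as np

def sample_probs(logits, temperature):
    """Softmax over logits scaled by 1/temperature."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()                 # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

logits = [2.0, 1.0, 0.5]         # hypothetical next-token logits
p_default = sample_probs(logits, 1.0)   # ordinary sampling
p_low = sample_probs(logits, 0.15)      # recommended setting
```

At temperature 1.0 the top token gets roughly 63% of the probability mass; at 0.15 it gets over 99%, so sampling behaves almost like greedy decoding.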
This is a merge of pre-trained language models created using mergekit.
Merge Details
Merge Method
This model was merged using the della merge method, with google/gemma-2-9b-it as the base.
Models Merged
The following models were included in the merge:
- princeton-nlp/gemma-2-9b-it-SimPO
- wzhouad/gemma-2-9b-it-WPO-HB
Configuration
The following YAML configuration was used to produce this model:
models:
  - model: google/gemma-2-9b-it
    # no parameters necessary for base model
  - model: princeton-nlp/gemma-2-9b-it-SimPO
    parameters:
      density: 0.5
      weight: 0.5
  - model: wzhouad/gemma-2-9b-it-WPO-HB
    parameters:
      density: 0.5
      weight: 0.5
merge_method: della
base_model: google/gemma-2-9b-it
parameters:
  normalize: true
dtype: float16
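The density and weight parameters above can be read through a simplified sketch of a DELLA-style delta merge: take each fine-tune's delta from the base, randomly prune entries down to the given density, rescale the survivors by 1/density, then sum the deltas with normalized weights. Note this is illustrative only; mergekit's actual della method samples drop probabilities from the delta magnitudes (MAGPRUNE) rather than uniformly, and the function and tensors below are made up for the example.

```python
import numpy as np

def della_merge_sketch(base, finetuned, densities, weights, rng):
    """Illustrative DELLA-style merge: prune deltas, rescale, weighted sum.

    Real DELLA derives per-entry drop probabilities from delta magnitudes;
    a uniform random drop is used here purely for illustration.
    """
    total_w = sum(weights)                 # normalize: true in the config
    merged = base.astype(float).copy()
    for ft, d, w in zip(finetuned, densities, weights):
        delta = ft - base                  # task vector relative to the base
        keep = rng.random(delta.shape) < d # keep ~`density` of the entries
        merged += (w / total_w) * (delta * keep) / d  # rescale survivors
    return merged

# Toy tensors standing in for one weight matrix from each model
rng = np.random.default_rng(0)
base = np.zeros(4)
merged = della_merge_sketch(
    base,
    [np.full(4, 2.0), np.full(4, 4.0)],
    densities=[1.0, 1.0],
    weights=[0.5, 0.5],
    rng=rng,
)
```

With densities of 1.0 nothing is pruned, so the result is exactly the base plus the weighted average of the two deltas; at density 0.5 (as in the config) roughly half the delta entries are dropped and the rest doubled, keeping the expected contribution the same.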