Hypernova-experimental

Quantized to GGUF using llama.cpp

Tried some new stuff this time around, and the outcome was very different from what I expected. This is an experimental model created during the development of NovaAI.

Good at chatting and some RP. It sometimes mixes up characters and can occasionally struggle with context.

Prompt Template: Alpaca

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```
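As a minimal sketch, the Alpaca template above can be filled in like this before handing the text to a GGUF runtime such as llama.cpp. The helper name and example instruction are illustrative, not part of the model:

```python
# Alpaca prompt template used by this model, as documented above.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{prompt}\n\n### Response:\n"
)

def build_prompt(prompt: str) -> str:
    """Return the full Alpaca-formatted prompt for a single instruction."""
    return ALPACA_TEMPLATE.format(prompt=prompt)

# Example: format one instruction (hypothetical input).
print(build_prompt("Summarize the plot of Hamlet in one sentence."))
```

The resulting string is what should be sent as the raw prompt; chat frontends that already apply an Alpaca preset do this formatting for you.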

Models Merged

The following models were included in the merge:

Some fine-tuning was done as well.

Format: GGUF
Model size: 13B params
Architecture: llama

Available quantizations: 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit
