---
base_model: smallcloudai/Refact-1_6B-fim
license: bigscience-openrail-m
model_creator: Small Magellanic Cloud AI
model_name: Refact-1.6B
pipeline_tag: text-generation
prompt_template: '<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>'
pretrain-datasets:
- books
- arxiv
- c4
- falcon-refinedweb
- wiki
- github-issues
- stack_markdown
- self-made dataset of permissive github code
datasets:
- bigcode/the-stack-dedup
- rombodawg/2XUNCENSORED_MegaCodeTraining188k
- bigcode/commitpackft
tags:
- code
language:
- en
---
# Refact-1.6B-fim-GGUF
- Model creator: [Small Magellanic Cloud AI](https://huggingface.co/smallcloudai)
- Original model: [Refact-1.6B](https://huggingface.co/smallcloudai/Refact-1_6B-fim)


## Description
This repository contains quantized model files in GGUF format for [Refact-1.6B](https://huggingface.co/smallcloudai/Refact-1_6B-fim).


## Prompt: fill in the middle
```
<fim_prefix>def print_hello_world():\n    """<fim_suffix>\n    print("Hello world!")<fim_middle>
```
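
The same template can be driven programmatically. Below is a minimal sketch using the llama-cpp-python bindings, assuming `llama-cpp-python` is installed (`pip install llama-cpp-python`) and that the `refact-1_6b-Q4_K_M.gguf` file from the llama.cpp example further down is in the working directory; the `fim_complete` helper is illustrative, not part of any API.

```python
# Minimal fill-in-the-middle sketch using the llama-cpp-python bindings.
# Assumes refact-1_6b-Q4_K_M.gguf (the quantization used in the llama.cpp
# example below) is present in the working directory.
from llama_cpp import Llama

llm = Llama(model_path="refact-1_6b-Q4_K_M.gguf", n_ctx=4096)

def fim_complete(prefix: str, suffix: str, max_tokens: int = 64) -> str:
    # Assemble the model's fill-in-the-middle prompt template.
    prompt = f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"
    out = llm(prompt, max_tokens=max_tokens, temperature=0.2)
    return out["choices"][0]["text"]

# Ask the model to fill in the docstring between the prefix and suffix.
print(fim_complete('def print_hello_world():\n    """', '"""\n    print("Hello world!")'))
```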


## Prompt: chat (experimental)
```
<empty_output>SYSTEM You are a programming assistant
<empty_output>USER How do I sort a list in Python?
<empty_output>ASSISTANT
```
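
As a rough illustration of this experimental format, the hypothetical `chat` helper below interpolates a system and a user message into the template above, reusing the `llm` handle from the previous sketch; stopping on `<empty_output>` is an assumption based on the role markers, not a documented stop token.

```python
# Experimental chat sketch, reusing the Llama handle from the FIM example.
# The chat format is experimental; stopping on "<empty_output>" is an
# assumption inferred from the role markers in the template.
def chat(system: str, user: str, max_tokens: int = 128) -> str:
    prompt = (
        f"<empty_output>SYSTEM {system}\n"
        f"<empty_output>USER {user}\n"
        f"<empty_output>ASSISTANT"
    )
    out = llm(prompt, max_tokens=max_tokens, stop=["<empty_output>"])
    return out["choices"][0]["text"].strip()

print(chat("You are a programming assistant", "How do I sort a list in Python?"))
```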


## Example `llama.cpp` command
```shell
./main -m refact-1_6b-Q4_K_M.gguf -c 4096 -n -1 -p '<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>'
```
For other parameters and usage details, see [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md).