---
license: llama2
---

CodeLlama 2 7b

With the Guanaco LoRA (Tim Dettmers), merged by Varunk29.

Then

With the Mistral AI 7b 0.1 delta weights relative to Llama 2 (extracted by Undi95), merged by me.
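A minimal sketch of the delta-injection idea described above, assuming all three checkpoints share identical tensor names and shapes; the file names are hypothetical, not actual repo paths:

```python
# Sketch only: inject (Mistral - Llama 2) deltas into the CodeLlama+Guanaco base.
import torch

llama2 = torch.load("llama2-7b.pt")                 # reference base weights
mistral = torch.load("mistral-7b-v0.1.pt")          # donor model
codellama = torch.load("codellama-guanaco-7b.pt")   # target of the injection

merged = {}
for name, weight in codellama.items():
    if name in mistral and name in llama2:
        delta = mistral[name] - llama2[name]  # Mistral's delta vs Llama 2
        merged[name] = weight + delta         # inject the delta into the target
    else:
        merged[name] = weight                 # keep tensors with no counterpart

torch.save(merged, "merged-model.pt")
```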


Base model (CodeLlama) training context: 16k tokens (usable context up to 96k with the base RoPE settings).

Mistral injection training context: 8k tokens (Sliding Window Attention is likely inoperative on such a merge/injection).
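A hedged loading sketch with Hugging Face transformers; the repo id below is a placeholder, and the context value is taken from the base model's 16k training context stated above:

```python
# Sketch: load the merge and set the context length to the base training context.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Nexesenex/CodeLlama-2-7b-Guanaco-Mistral"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    max_position_embeddings=16384,  # 16k training context per this card
)
```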


For testing and amusement only.

Prompt format: Alpaca works.
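For reference, the standard Alpaca prompt template, here as a small Python helper (the example instruction is illustrative only):

```python
# Standard Alpaca instruction format.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

prompt = ALPACA_TEMPLATE.format(
    instruction="Write a Python function that reverses a string."
)
```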