---
license: mit
language:
- en
thumbnail: "https://cdn-uploads.huggingface.co/production/uploads/6589d7e6586088fd2784a12c/2g6TFg4nLSigVy7n1wLYV.png"
---
# Created on 64 GB of system RAM and a Ryzen 5 5600...no GPU needed

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6589d7e6586088fd2784a12c/2g6TFg4nLSigVy7n1wLYV.png)

### Passthrough was used to create this model.


[Kronos](https://www.theoi.com/Titan/TitanKronos.html) was a Titan, and this model is named after him for its sheer size.

Passthrough, the method used for this model, concatenates layers from different LLMs. It can produce models with an exotic number of parameters (e.g., a 9B model from two 7B parameter models). These models are often referred to as "frankenmerges" or "Frankenstein models" by the community.
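For illustration, a passthrough merge in [mergekit](https://github.com/arcee-ai/mergekit) is described by a YAML config that stacks layer ranges copied from the source model(s). The sketch below uses made-up layer ranges and is not the config used for this model; the actual config is linked further down.

```yaml
# Minimal passthrough sketch (illustrative layer ranges only).
# Each "slices" entry copies a contiguous range of layers verbatim;
# overlapping ranges are how the merged model grows past the source's size.
slices:
  - sources:
      - model: abacaj/phi-2-super
        layer_range: [0, 24]
  - sources:
      - model: abacaj/phi-2-super
        layer_range: [8, 32]
merge_method: passthrough
dtype: float16
```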


Many thanks to [Abacaj](https://huggingface.co/abacaj) for providing the [fine-tuned weights](https://huggingface.co/abacaj/phi-2-super) that were used in the creation of this base model. You can find the full script for how the model was merged [here](https://huggingface.co/Replete-AI/Kronos-670B/blob/main/mergekit_config.yml)...thanks to [KatyTheCutie](https://huggingface.co/KatyTheCutie) for inspiring me to test out this script.

## This idea was brought to me by [The Face of Goonery](https://huggingface.co/The-Face-Of-Goonery), also known as Caleb Morgan.
# How to run inference:

```python
import transformers
import torch

if __name__ == "__main__":
    model_name = "Replete-AI/Kronos-703B"
    tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)

    # Load the model and move it to the first GPU for inference.
    model = (
        transformers.AutoModelForCausalLM.from_pretrained(
            model_name,
        )
        .to("cuda:0")
        .eval()
    )

    messages = [
        {"role": "user", "content": "Hello, who are you?"}
    ]
    # add_generation_prompt=True appends the assistant turn marker so the
    # model continues as the assistant rather than the user.
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    input_ids_cutoff = inputs.size(dim=1)

    with torch.no_grad():
        generated_ids = model.generate(
            input_ids=inputs,
            use_cache=True,
            max_new_tokens=512,
            temperature=0.2,
            top_p=0.95,
            do_sample=True,
            eos_token_id=tokenizer.eos_token_id,
            pad_token_id=tokenizer.pad_token_id,
        )

    # Strip the prompt tokens so only the newly generated text is decoded.
    completion = tokenizer.decode(
        generated_ids[0][input_ids_cutoff:],
        skip_special_tokens=True,
    )

    print(completion)
```
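Note: loading a model of this size onto a single GPU may not be practical. If you run out of memory (or have no GPU), you can drop the `.to("cuda:0")` call to run on CPU, or pass `device_map="auto"` to `from_pretrained` so `accelerate` spreads the weights across whatever devices are available.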

# Chat template

The model uses the same chat template as found in Mistral instruct models:
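For reference, the standard Mistral instruct format wraps each user turn in `[INST]` tags. The line below is an assumption based on that standard template, not text extracted from this repo's tokenizer config:

```
<s>[INST] Hello, who are you? [/INST] I am an AI assistant.</s>
```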

# "[Join the Replete AI Discord here!](https://discord.gg/tG5aY4EX4T)"