---
license: apache-2.0
datasets:
- WizardLM/WizardLM_evol_instruct_V2_196k
- icybee/share_gpt_90k_v1
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- nlp
- llm
---
# AmberChat


We present AmberChat, an instruction-following model fine-tuned from [LLM360/Amber](https://huggingface.co/LLM360/Amber).

## Model Description

- **Model type:** Language model with the same architecture as LLaMA-7B
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Resources for more information:**
  - [Research paper](https://arxiv.org/)
  - [GitHub Repo](https://github.com/LLM360)
  - [Amber pretraining data](https://huggingface.co/)


# Loading AmberChat 

```python
from transformers import LlamaTokenizer, LlamaForCausalLM

# Load the tokenizer and model weights from the Hugging Face Hub
tokenizer = LlamaTokenizer.from_pretrained("LLM360/AmberChat")
model = LlamaForCausalLM.from_pretrained("LLM360/AmberChat")

# Tokenize a prompt and generate a response
input_text = "How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids

outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
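
For longer or more varied replies, generation parameters can be passed explicitly. The values below are illustrative defaults, not settings recommended by the authors:

```python
# Illustrative only: the sampling settings here are assumptions, not the authors' values.
outputs = model.generate(
    input_ids,
    max_new_tokens=256,   # allow a longer reply than the default generation length
    do_sample=True,       # sample instead of greedy decoding for more varied answers
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```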

# AmberChat Finetuning Details

## DataMix
| Subset      | Number of rows |  License   |
| ----------- | ----------- | ----------- |
| WizardLM/WizardLM_evol_instruct_V2_196k      | 143k       |  |
| icybee/share_gpt_90k_v1   | 90k        | cc0-1.0 |
| Total | 233k |  |
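
For reference, a minimal sketch of pulling the two subsets from the Hub with the `datasets` library; the split names and loading calls are assumptions, not a record of the actual finetuning pipeline:

```python
from datasets import load_dataset

# Assumed split names; the card only lists the subsets and their row counts.
wizardlm = load_dataset("WizardLM/WizardLM_evol_instruct_V2_196k", split="train")
sharegpt = load_dataset("icybee/share_gpt_90k_v1", split="train")

print(len(wizardlm), len(sharegpt))
# In a real pipeline both subsets would be converted to a common chat format
# before being mixed for finetuning.
```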

## Hyperparameters
| Hyperparameter      | Value |
| ----------- | ----------- |
| Total Parameters      | 6.7B       |
| Hidden Size   | 4096        |
| Intermediate Size (MLPs)   | 11008        |
| Number of Attention Heads   | 32        |
| Number of Hidden Layers  | 32        |
| RMSNorm ε  | 1e-6        |
| Max Seq Length   | 2048        |
| Vocab Size | 32000 |
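
These values match the standard LLaMA-7B architecture. As a hedged illustration, the same settings expressed as a `transformers` `LlamaConfig` (reconstructed from the table above, not copied from the released config file):

```python
from transformers import LlamaConfig

# Reconstructed from the hyperparameter table; the released config.json is authoritative.
config = LlamaConfig(
    vocab_size=32000,
    hidden_size=4096,
    intermediate_size=11008,
    num_hidden_layers=32,
    num_attention_heads=32,
    max_position_embeddings=2048,
    rms_norm_eps=1e-6,
)
```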


# Evaluation

| Model                                                | MT-Bench                                                  | 
|------------------------------------------------------|------------------------------------------------------------|
| LLM360/Amber (checkpoint 359) | 2.48750 | 
| **LLM360/AmberChat** | **5.428125** |

# Citation

**BibTeX:**

```bibtex
@article{xxx,
  title={XXX},
  author={XXX},
  journal={XXX},
  year={2023}
}
```