pavel-tolstyko committed on
Commit
d27481c
1 Parent(s): 77cfdbe

Create README.md

---
base_model:
- TinyLlama/TinyLlama-1.1B-Chat-v1.0
---
# Model Card

## Model Description

This model is a Large Language Model (LLM) fine-tuned from TinyLlama/TinyLlama-1.1B-Chat-v1.0 on the DIBT/10k_prompts_ranked dataset.

## Evaluation Results

### Hellaswag

Passed argument batch_size = auto:4.0. Detecting largest batch size
Determined largest batch size: 64
Passed argument batch_size = auto:4.0. Detecting largest batch size
Determined largest batch size: 64
hf (pretrained=EleutherAI/pythia-160m,revision=step100000,dtype=float), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: auto:4 (64,64,64,64,64)
| Tasks   | Version | Filter | n-shot | Metric   |   | Value  |   | Stderr |
|---------|--------:|--------|-------:|----------|---|-------:|---|-------:|
|hellaswag|       1 | none   |      0 | acc      | ↑ | 0.2872 | ± | 0.0045 |
|         |         | none   |      0 | acc_norm | ↑ | 0.3082 | ± | 0.0046 |

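The log above matches the output format of EleutherAI's lm-evaluation-harness. A command along these lines would reproduce such a run; the exact invocation is an assumption (it is not stated in this card), and the model args are copied from the logged configuration:

```shell
# Assumed lm-evaluation-harness (v0.4+) invocation; the model args mirror
# the logged run. Substitute the checkpoint you want to evaluate.
lm_eval --model hf \
  --model_args pretrained=EleutherAI/pythia-160m,revision=step100000,dtype=float \
  --tasks hellaswag \
  --num_fewshot 0 \
  --batch_size auto:4
```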
## How to Use

To use this model, download the checkpoint and load it with your preferred deep learning framework, for example via the Hugging Face `transformers` library.
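As a concrete sketch with `transformers`: the repo id below is the base model taken from this card's metadata, used here as a placeholder — substitute this model's actual checkpoint path. The chat markup follows TinyLlama's single-turn template.

```python
# Hypothetical checkpoint id -- replace with this model's actual repo path.
REPO_ID = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"

def build_chat_prompt(user_message: str) -> str:
    """Format a single-turn prompt using TinyLlama's chat markup."""
    return f"<|user|>\n{user_message}</s>\n<|assistant|>\n"

def generate(user_message: str, max_new_tokens: int = 128) -> str:
    """Load the checkpoint and generate a reply to a single user message."""
    # Imported lazily so build_chat_prompt works without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(REPO_ID)
    model = AutoModelForCausalLM.from_pretrained(REPO_ID)
    inputs = tokenizer(build_chat_prompt(user_message), return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the echoed prompt.
    new_tokens = output_ids[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

Calling `generate("What is HellaSwag?")` downloads the checkpoint on first use and returns the assistant's decoded reply.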