main: build = 3003 (d298382a)
main: built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
main: seed  = 1716766865
llama_model_loader: loaded meta data with 22 key-value pairs and 291 tensors from Llama3-ChatQA-1.5-8B-IMat-GGUF/Llama3-ChatQA-1.5-8B.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = Llama3-ChatQA-1.5-8B
llama_model_loader: - kv   2:                          llama.block_count u32              = 32
llama_model_loader: - kv   3:                       llama.context_length u32              = 8192
llama_model_loader: - kv   4:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv   6:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   7:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv   8:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                          general.file_type u32              = 1
llama_model_loader: - kv  11:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  12:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  13:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  14:                         tokenizer.ggml.pre str              = smaug-bpe
llama_model_loader: - kv  15:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  16:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  17:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  18:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  19:                tokenizer.ggml.eos_token_id u32              = 128001
llama_model_loader: - kv  20:                    tokenizer.chat_template str              = {{ bos_token }}{%- if messages[0]['ro...
llama_model_loader: - kv  21:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type  f16:  226 tensors
llm_load_vocab: special tokens definition check successful ( 256/128256 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 128256
llm_load_print_meta: n_merges         = 280147
llm_load_print_meta: n_ctx_train      = 8192
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 4
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 8192
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 8B
llm_load_print_meta: model ftype      = F16
llm_load_print_meta: model params     = 8.03 B
llm_load_print_meta: model size       = 14.96 GiB (16.00 BPW) 
llm_load_print_meta: general.name     = Llama3-ChatQA-1.5-8B
llm_load_print_meta: BOS token        = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token        = 128001 '<|end_of_text|>'
llm_load_print_meta: LF token         = 128 'Ä'
llm_load_print_meta: EOT token        = 128009 '<|eot_id|>'
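Note: several of the derived values above follow directly from the base hyperparameters. A quick cross-check in Python (numbers taken from the log lines above; variable names are mine):

    n_embd, n_head, n_head_kv = 4096, 32, 8
    n_embd_head = n_embd // n_head            # 128, matches n_embd_head_k/v and n_rot
    n_gqa = n_head // n_head_kv               # 4, matches n_gqa
    n_embd_kv_gqa = n_embd_head * n_head_kv   # 1024, matches n_embd_k_gqa and n_embd_v_gqa
    print(n_embd_head, n_gqa, n_embd_kv_gqa)  # 128 4 1024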
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:   no
ggml_cuda_init: CUDA_USE_TENSOR_CORES: yes
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes
llm_load_tensors: ggml ctx size =    0.30 MiB
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors:        CPU buffer size =  1002.00 MiB
llm_load_tensors:      CUDA0 buffer size = 14315.02 MiB
.........................................................................................
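Note: the reported model size is consistent with 8.03 B parameters stored at 16 bits per weight. A back-of-the-envelope check (this assumes every parameter takes 2 bytes, which the f32/f16 tensor split above only approximates):

    params = 8.03e9               # "model params = 8.03 B" from the log
    print(params * 2 / 2**30)     # ~14.96 GiB, matches "model size = 14.96 GiB (16.00 BPW)"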
llama_new_context_with_model: n_ctx      = 512
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:      CUDA0 KV buffer size =    64.00 MiB
llama_new_context_with_model: KV self size  =   64.00 MiB, K (f16):   32.00 MiB, V (f16):   32.00 MiB
llama_new_context_with_model:  CUDA_Host  output buffer size =     0.49 MiB
llama_new_context_with_model:      CUDA0 compute buffer size =   258.50 MiB
llama_new_context_with_model:  CUDA_Host compute buffer size =     9.01 MiB
llama_new_context_with_model: graph nodes  = 1030
llama_new_context_with_model: graph splits = 2
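Note: the 64 MiB KV cache follows from the values printed above: K and V each hold n_embd_k_gqa elements per layer per context position, at 2 bytes each for f16. A rough reconstruction (my arithmetic, using the logged hyperparameters):

    n_layer, n_ctx, n_embd_kv_gqa = 32, 512, 1024
    per_tensor = n_layer * n_ctx * n_embd_kv_gqa * 2   # f16 = 2 bytes per element
    print(per_tensor / 2**20, 2 * per_tensor / 2**20)  # 32.0 MiB each for K and V, 64.0 MiB total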

system_info: n_threads = 25 / 32 | AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | AVX512_BF16 = 1 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | 
compute_imatrix: tokenizing the input ..
compute_imatrix: tokenization took 198.763 ms
compute_imatrix: computing over 189 chunks with batch_size 512
compute_imatrix: 0.55 seconds per pass - ETA 1.73 minutes
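Note: the chunk math is easy to verify: 189 chunks of 512 tokens is exactly the 96,768-token total reported in the timings at the end, and 189 passes at 0.55 s each gives the printed ETA:

    chunks, batch = 189, 512
    print(chunks * batch)        # 96768 tokens, matches the prompt eval line below
    print(chunks * 0.55 / 60)    # ~1.73 minutes, matches the ETA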
[1]5.5379,[2]4.2604,[3]3.8769,[4]4.7854,[5]4.8156,[6]4.0973,[7]4.4325,[8]4.8732,[9]5.0451,
save_imatrix: stored collected data after 10 chunks in Llama3-ChatQA-1.5-8B-IMat-GGUF/imatrix.dat
[10]5.0631,[11]5.5003,[12]5.3789,[13]5.8285,[14]6.2268,[15]6.4722,[16]6.8534,[17]7.2681,[18]7.4198,[19]7.0782,
save_imatrix: stored collected data after 20 chunks in Llama3-ChatQA-1.5-8B-IMat-GGUF/imatrix.dat
[20]6.9814,[21]6.8167,[22]6.4535,[23]6.2212,[24]6.0214,[25]6.2230,[26]6.3250,[27]6.4800,[28]6.4423,[29]6.1705,
save_imatrix: stored collected data after 30 chunks in Llama3-ChatQA-1.5-8B-IMat-GGUF/imatrix.dat
[30]5.9908,[31]5.9112,[32]5.8855,[33]5.8623,[34]5.8742,[35]5.9606,[36]6.0615,[37]6.1978,[38]6.2610,[39]6.3864,
save_imatrix: stored collected data after 40 chunks in Llama3-ChatQA-1.5-8B-IMat-GGUF/imatrix.dat
[40]6.5592,[41]6.7632,[42]6.8866,[43]7.0488,[44]7.0439,[45]7.0809,[46]7.1623,[47]7.2816,[48]7.3168,[49]7.3888,
save_imatrix: stored collected data after 50 chunks in Llama3-ChatQA-1.5-8B-IMat-GGUF/imatrix.dat
[50]7.4099,[51]7.3661,[52]7.1987,[53]7.1230,[54]7.1023,[55]6.9628,[56]6.8335,[57]6.8427,[58]6.9167,[59]7.0103,
save_imatrix: stored collected data after 60 chunks in Llama3-ChatQA-1.5-8B-IMat-GGUF/imatrix.dat
[60]7.0712,[61]7.0300,[62]6.9374,[63]6.8442,[64]6.7532,[65]6.6802,[66]6.5717,[67]6.4531,[68]6.4243,[69]6.3643,
save_imatrix: stored collected data after 70 chunks in Llama3-ChatQA-1.5-8B-IMat-GGUF/imatrix.dat
[70]6.3744,[71]6.4125,[72]6.4348,[73]6.4352,[74]6.4698,[75]6.4065,[76]6.2850,[77]6.1718,[78]6.0963,[79]5.9844,
save_imatrix: stored collected data after 80 chunks in Llama3-ChatQA-1.5-8B-IMat-GGUF/imatrix.dat
[80]5.8879,[81]5.7904,[82]5.7283,[83]5.6832,[84]5.7078,[85]5.7533,[86]5.7654,[87]5.7501,[88]5.7441,[89]5.7588,
save_imatrix: stored collected data after 90 chunks in Llama3-ChatQA-1.5-8B-IMat-GGUF/imatrix.dat
[90]5.7898,[91]5.7871,[92]5.7989,[93]5.8200,[94]5.8462,[95]5.8372,[96]5.8650,[97]5.8730,[98]5.8785,[99]5.8935,
save_imatrix: stored collected data after 100 chunks in Llama3-ChatQA-1.5-8B-IMat-GGUF/imatrix.dat
[100]5.8926,[101]5.8860,[102]5.8938,[103]5.9202,[104]5.9412,[105]5.9380,[106]5.9664,[107]5.9930,[108]5.9533,[109]5.9600,
save_imatrix: stored collected data after 110 chunks in Llama3-ChatQA-1.5-8B-IMat-GGUF/imatrix.dat
[110]5.9524,[111]5.9278,[112]5.9181,[113]5.8898,[114]5.8573,[115]5.8296,[116]5.8003,[117]5.7703,[118]5.7423,[119]5.7852,
save_imatrix: stored collected data after 120 chunks in Llama3-ChatQA-1.5-8B-IMat-GGUF/imatrix.dat
[120]5.8020,[121]5.8241,[122]5.8670,[123]5.8976,[124]5.9497,[125]6.0091,[126]6.0623,[127]6.1093,[128]6.1734,[129]6.2466,
save_imatrix: stored collected data after 130 chunks in Llama3-ChatQA-1.5-8B-IMat-GGUF/imatrix.dat
[130]6.2292,[131]6.2497,[132]6.2611,[133]6.2829,[134]6.2720,[135]6.2806,[136]6.3134,[137]6.3258,[138]6.3446,[139]6.3687,
save_imatrix: stored collected data after 140 chunks in Llama3-ChatQA-1.5-8B-IMat-GGUF/imatrix.dat
[140]6.3830,[141]6.3885,[142]6.4090,[143]6.3848,[144]6.4083,[145]6.4348,[146]6.4522,[147]6.4599,[148]6.4739,[149]6.4918,
save_imatrix: stored collected data after 150 chunks in Llama3-ChatQA-1.5-8B-IMat-GGUF/imatrix.dat
[150]6.4795,[151]6.4745,[152]6.4854,[153]6.4931,[154]6.5394,[155]6.5287,[156]6.5323,[157]6.5733,[158]6.6175,[159]6.6809,
save_imatrix: stored collected data after 160 chunks in Llama3-ChatQA-1.5-8B-IMat-GGUF/imatrix.dat
[160]6.7411,[161]6.7571,[162]6.7753,[163]6.7901,[164]6.7885,[165]6.8196,[166]6.8253,[167]6.8271,[168]6.8374,[169]6.8606,
save_imatrix: stored collected data after 170 chunks in Llama3-ChatQA-1.5-8B-IMat-GGUF/imatrix.dat
[170]6.8616,[171]6.8588,[172]6.8711,[173]6.8457,[174]6.8452,[175]6.8364,[176]6.8381,[177]6.8440,[178]6.8483,[179]6.8433,
save_imatrix: stored collected data after 180 chunks in Llama3-ChatQA-1.5-8B-IMat-GGUF/imatrix.dat
[180]6.8314,[181]6.8437,[182]6.8311,[183]6.8107,[184]6.7766,[185]6.7852,[186]6.7738,[187]6.7698,[188]6.7375,[189]6.7122,
save_imatrix: stored collected data after 189 chunks in Llama3-ChatQA-1.5-8B-IMat-GGUF/imatrix.dat

llama_print_timings:        load time =    2209.94 ms
llama_print_timings:      sample time =       0.00 ms /     1 runs   (    0.00 ms per token,      inf tokens per second)
llama_print_timings: prompt eval time =   77902.33 ms / 96768 tokens (    0.81 ms per token,  1242.17 tokens per second)
llama_print_timings:        eval time =       0.00 ms /     1 runs   (    0.00 ms per token,      inf tokens per second)
llama_print_timings:       total time =   80773.33 ms / 96769 tokens

Final estimate: PPL = 6.7122 +/- 0.07586
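
Note: the bracketed values in the stream above are running perplexity estimates, and the final line is the estimate over all 189 chunks; the +/- term is its standard error, derived from the variance of the per-token negative log-likelihood. A minimal sketch of how such a running PPL accumulates from per-token log-probabilities; this is illustrative only, not llama.cpp's actual implementation:

    import math

    def running_ppl(token_logprobs):
        # token_logprobs: natural-log probabilities the model assigned to
        # each actual next token (hypothetical input, for illustration).
        nll_sum = 0.0
        for i, lp in enumerate(token_logprobs, start=1):
            nll_sum += -lp
            yield math.exp(nll_sum / i)  # PPL = exp(mean NLL so far)

    # e.g. list(running_ppl([-1.71, -1.45, -1.35])) -> successively refined estimates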