build: 3787 (6026da52) with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
llama_model_loader: loaded meta data with 34 key-value pairs and 771 tensors from Qwen2.5-32B-Instruct-IMat-GGUF/Qwen2.5-32B-Instruct.Q8_0.gguf.hardlink.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen2
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Qwen2.5 32B Instruct
llama_model_loader: - kv 3: general.finetune str = Instruct
llama_model_loader: - kv 4: general.basename str = Qwen2.5
llama_model_loader: - kv 5: general.size_label str = 32B
llama_model_loader: - kv 6: general.license str = apache-2.0
llama_model_loader: - kv 7: general.license.link str = https://huggingface.co/Qwen/Qwen2.5-3...
llama_model_loader: - kv 8: general.base_model.count u32 = 1
llama_model_loader: - kv 9: general.base_model.0.name str = Qwen2.5 32B
llama_model_loader: - kv 10: general.base_model.0.organization str = Qwen
llama_model_loader: - kv 11: general.base_model.0.repo_url str = https://huggingface.co/Qwen/Qwen2.5-32B
llama_model_loader: - kv 12: general.tags arr[str,2] = ["chat", "text-generation"]
llama_model_loader: - kv 13: general.languages arr[str,1] = ["en"]
llama_model_loader: - kv 14: qwen2.block_count u32 = 64
llama_model_loader: - kv 15: qwen2.context_length u32 = 32768
llama_model_loader: - kv 16: qwen2.embedding_length u32 = 5120
llama_model_loader: - kv 17: qwen2.feed_forward_length u32 = 27648
llama_model_loader: - kv 18: qwen2.attention.head_count u32 = 40
llama_model_loader: - kv 19: qwen2.attention.head_count_kv u32 = 8
llama_model_loader: - kv 20: qwen2.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 21: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 22: general.file_type u32 = 7
llama_model_loader: - kv 23: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 24: tokenizer.ggml.pre str = qwen2
llama_model_loader: - kv 25: tokenizer.ggml.tokens arr[str,152064] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 26: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 27: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 28: tokenizer.ggml.eos_token_id u32 = 151645
llama_model_loader: - kv 29: tokenizer.ggml.padding_token_id u32 = 151643
llama_model_loader: - kv 30: tokenizer.ggml.bos_token_id u32 = 151643
llama_model_loader: - kv 31: tokenizer.ggml.add_bos_token bool = false
llama_model_loader: - kv 32: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>...
llama_model_loader: - kv 33: general.quantization_version u32 = 2
llama_model_loader: - type f32: 321 tensors
llama_model_loader: - type q8_0: 450 tensors
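
The tensor-type split lines up with the qwen2 layout if, per the usual llama.cpp convention for Q8_0 files, 1-D tensors (norms and the q/k/v attention biases) stay f32 while every 2-D weight matrix is quantized. A quick sanity check in Python; the per-layer inventory below is an assumption inferred from the qwen2 graph, not read from the file:

    # Assumed per-layer tensor inventory for qwen2 (64 transformer blocks):
    #   7 weight matrices (attn q/k/v/output, ffn gate/up/down) -> Q8_0
    #   2 RMS norms + 3 attention biases (q/k/v)                -> f32
    n_layer = 64
    n_q8_0 = 7 * n_layer + 2          # + token_embd and output head
    n_f32  = (2 + 3) * n_layer + 1    # + the final output_norm
    assert (n_q8_0, n_f32) == (450, 321)
    assert n_q8_0 + n_f32 == 771      # total tensor count reported above
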
llm_load_vocab: special tokens cache size = 22
llm_load_vocab: token to piece cache size = 0.9310 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = qwen2
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 152064
llm_load_print_meta: n_merges = 151387
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 32768
llm_load_print_meta: n_embd = 5120
llm_load_print_meta: n_layer = 64
llm_load_print_meta: n_head = 40
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 5
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-06
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 27648
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 2
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 32768
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = Q8_0
llm_load_print_meta: model params = 32.76 B
llm_load_print_meta: model size = 32.42 GiB (8.50 BPW)
llm_load_print_meta: general.name = Qwen2.5 32B Instruct
llm_load_print_meta: BOS token = 151643 '<|endoftext|>'
llm_load_print_meta: EOS token = 151645 '<|im_end|>'
llm_load_print_meta: PAD token = 151643 '<|endoftext|>'
llm_load_print_meta: LF token = 148848 'ÄĬ'
llm_load_print_meta: EOT token = 151645 '<|im_end|>'
llm_load_print_meta: max token length = 256
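
Several of the derived attention numbers above follow directly from the raw metadata; a minimal cross-check, with plain variables mirroring the log fields:

    n_embd, n_head, n_head_kv = 5120, 40, 8
    head_dim = n_embd // n_head              # 128 = n_embd_head_k = n_embd_head_v
    assert head_dim == 128
    assert n_head // n_head_kv == 5          # n_gqa: 5 query heads per KV head
    assert n_head_kv * head_dim == 1024      # n_embd_k_gqa = n_embd_v_gqa
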
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes
llm_load_tensors: ggml ctx size = 0.68 MiB
llm_load_tensors: offloading 24 repeating layers to GPU
llm_load_tensors: offloaded 24/65 layers to GPU
llm_load_tensors: CPU buffer size = 33202.08 MiB
llm_load_tensors: CUDA0 buffer size = 11859.09 MiB
..................................................................................................
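
The 8.50 BPW figure is exactly Q8_0's nominal rate: each block of 32 weights is stored as 32 int8 values plus one f16 scale, i.e. 34 bytes per 32 weights, and the handful of f32 norm/bias tensors are too small to move the average. A back-of-the-envelope check:

    params, size_gib = 32.76e9, 32.42
    bpw = size_gib * 2**30 * 8 / params       # ~8.50 bits per weight
    q8_0_bpw = (32 * 1 + 2) * 8 / 32          # 32 x int8 + 1 x f16 scale = 8.5
    assert abs(bpw - q8_0_bpw) < 0.01
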
llama_new_context_with_model: n_ctx = 512
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA_Host KV buffer size = 80.00 MiB
llama_kv_cache_init: CUDA0 KV buffer size = 48.00 MiB
llama_new_context_with_model: KV self size = 128.00 MiB, K (f16): 64.00 MiB, V (f16): 64.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 0.58 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 1095.91 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 11.01 MiB
llama_new_context_with_model: graph nodes = 2246
llama_new_context_with_model: graph splits = 564
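
The 128 MiB KV cache matches what an f16 cache of this shape predicts, and the CUDA_Host/CUDA0 split mirrors the 40/24 CPU/GPU layer split from the offload step above. A sketch, assuming 2 bytes per f16 element:

    n_ctx, n_layer = 512, 64
    n_embd_k_gqa = n_embd_v_gqa = 1024
    per_layer = n_ctx * (n_embd_k_gqa + n_embd_v_gqa) * 2   # f16 K + V = 2 MiB
    total_mib = n_layer * per_layer / 2**20
    assert total_mib == 128.0
    assert total_mib * 24 / 64 == 48.0    # 24 offloaded layers -> CUDA0
    assert total_mib * 40 / 64 == 80.0    # 40 CPU-resident layers -> CUDA_Host
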
system_info: n_threads = 25 (n_threads_batch = 25) / 32 | AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | AVX512_BF16 = 1 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 |
compute_imatrix: tokenizing the input ..
compute_imatrix: tokenization took 133.025 ms
compute_imatrix: computing over 128 chunks with batch_size 512
compute_imatrix: 3.00 seconds per pass - ETA 6.38 minutes
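
The ETA is simply chunks times seconds per pass; the printed 6.38 uses the unrounded per-pass time:

    eta_min = 128 * 3.00 / 60
    print(f"{eta_min:.2f} min")   # 6.40, vs the ~6.38 printed above
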
[1]4.1560,[2]3.0665,[3]2.9517,[4]3.2548,[5]3.1691,[6]2.9100,[7]3.0862,[8]3.1081,[9]3.4824,[10]3.4568,[11]3.4220,[12]3.7581,[13]4.2098,[14]4.4407,[15]4.8416,[16]5.1278,[17]5.3078,[18]5.6685,[19]5.4837,[20]5.6021,[21]5.6072,[22]5.6204,[23]5.5037,[24]5.6515,[25]5.8225,[26]5.7396,[27]5.5998,[28]5.3374,[29]5.2203,[30]5.2234,[31]5.1007,[32]4.9306,[33]4.8367,[34]4.7753,[35]4.7505,[36]4.7283,[37]4.7369,[38]4.7813,[39]4.7649,[40]4.8958,[41]4.9472,[42]4.9370,[43]4.8543,[44]4.9253,[45]4.9489,[46]5.0082,[47]4.9562,[48]5.0249,[49]5.1128,[50]5.1871,[51]5.1247,[52]5.2021,[53]5.3294,[54]5.4197,[55]5.4826,[56]5.5589,[57]5.6245,[58]5.6749,[59]5.7148,[60]5.7453,[61]5.7429,[62]5.7251,[63]5.7685,[64]5.8387,[65]5.8086,[66]5.8180,[67]5.8319,[68]5.7826,[69]5.7522,[70]5.7460,[71]5.7287,[72]5.7225,[73]5.7391,[74]5.6992,[75]5.6603,[76]5.6317,[77]5.6276,[78]5.6201,[79]5.6068,[80]5.5538,[81]5.5779,[82]5.5754,[83]5.5485,[84]5.5668,[85]5.5815,[86]5.5579,[87]5.5423,[88]5.5421,[89]5.5634,[90]5.5886,[91]5.5912,[92]5.5672,[93]5.5436,[94]5.5110,[95]5.4881,[96]5.4697,[97]5.4427,[98]5.4168,[99]5.4007,[100]5.4151,[101]5.4444,[102]5.5191,[103]5.5919,[104]5.6543,[105]5.7461,[106]5.8121,[107]5.8403,[108]5.8338,[109]5.8432,[110]5.8266,[111]5.7756,[112]5.7168,[113]5.6634,[114]5.7083,[115]5.7224,[116]5.7371,[117]5.7607,[118]5.7935,[119]5.7975,[120]5.8035,[121]5.8247,[122]5.8005,[123]5.8294,[124]5.8193,[125]5.8140,[126]5.8119,[127]5.7913,[128]5.7854,
Final estimate: PPL = 5.7854 +/- 0.07667
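
The bracketed values above are running perplexity estimates: after chunk k, PPL is exp of the mean negative log-likelihood over all tokens seen so far, which is why the trace settles onto the final figure; the +/- term is the standard error of that mean, propagated through the exponential. A minimal sketch of the running estimate (chunk_nlls is hypothetical input, one mean NLL per 512-token chunk):

    import math

    def running_ppl(chunk_nlls):
        """Yield exp(mean NLL) after each chunk, like the [k]x.xxxx trace above."""
        total = 0.0
        for k, nll in enumerate(chunk_nlls, start=1):
            total += nll
            yield math.exp(total / k)
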
llama_perf_context_print: load time = 6000.40 ms
llama_perf_context_print: prompt eval time = 361055.92 ms / 65536 tokens ( 5.51 ms per token, 181.51 tokens per second)
llama_perf_context_print: eval time = 0.00 ms / 1 runs ( 0.00 ms per token, inf tokens per second)
llama_perf_context_print: total time = 365230.35 ms / 65537 tokens
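
The per-token figures are the totals re-expressed, and the "inf tokens per second" on the eval line is the same arithmetic applied to a zero-duration pass, since compute_imatrix only does prompt processing:

    ms_total, n_tokens = 361055.92, 65536
    print(ms_total / n_tokens)              # ~5.51 ms per token
    print(n_tokens / (ms_total / 1e3))      # ~181.5 tokens per second
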