How do you infer?

#1
by segmond

I don't think llama.cpp supports this model. Do you have a branch I could use to run it?

Hi, here is the WIP branch: https://github.com/ggerganov/llama.cpp/pull/7531
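
For reference, a rough sketch of how one might build and run it from that PR branch. The checkpoint path, output file names, and prompt below are placeholders, and the convert/quantize/main binary names assume the state of the repo around that PR; adjust to whatever the branch actually uses:

# fetch and build the WIP branch from PR 7531
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git fetch origin pull/7531/head:jamba-wip
git checkout jamba-wip
make

# convert the HF checkpoint to GGUF and quantize to Q3_K_S
# (input path and output file names are placeholders)
python convert-hf-to-gguf.py /path/to/AI21-Jamba-1.5-Mini --outfile jamba-1.5-mini-f16.gguf
./quantize jamba-1.5-mini-f16.gguf jamba-1.5-mini-Q3_K_S.gguf Q3_K_S

# run inference on CPU with 16 threads, as in the log below
./main -m jamba-1.5-mini-Q3_K_S.gguf -t 16 -p "your prompt here"

The Q3_K_S quantization matches the "model ftype = Q3_K - Small" shown in the load log below.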

llm_load_print_meta: model ftype      = Q3_K - Small
llm_load_print_meta: model params     = 51.57 B
llm_load_print_meta: model size       = 20.76 GiB (3.46 BPW) 
llm_load_print_meta: general.name     = ai21labs_AI21 Jamba 1.5 Mini
llm_load_print_meta: BOS token        = 1 '<|startoftext|>'
llm_load_print_meta: EOS token        = 2 '<|endoftext|>'
llm_load_print_meta: UNK token        = 3 '<|unk|>'
llm_load_print_meta: PAD token        = 0 '<|pad|>'
llm_load_print_meta: LF token         = 1554 '<0x0A>'
llm_load_print_meta: EOT token        = 2 '<|endoftext|>'
llm_load_print_meta: max token length = 96
llm_load_tensors: ggml ctx size =    0.22 MiB
llm_load_tensors:        CPU buffer size = 21255.05 MiB
..............................................................
llama_new_context_with_model: n_ctx      = 262144
llama_new_context_with_model: n_batch    = 2048
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_past_init:        CPU past cache size =  4112.63 MiB
llama_new_context_with_model: SSM state size =    16.62 MiB, R (f32):    2.62 MiB, S (f32):   14.00 MiB
llama_new_context_with_model: KV cache size  =  4096.00 MiB, K (f16): 2048.00 MiB, V (f16): 2048.00 MiB
llama_new_context_with_model:        CPU  output buffer size =     0.25 MiB
llama_new_context_with_model:        CPU compute buffer size = 16920.60 MiB
llama_new_context_with_model: graph nodes  = 2066
llama_new_context_with_model: graph splits = 1

system_info: n_threads = 16 (n_threads_batch = 16) / 32 | AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | 
sampling: 
    repeat_last_n = 64, repeat_penalty = 1.000, frequency_penalty = 0.000, presence_penalty = 0.000
    top_k = 40, tfs_z = 1.000, top_p = 0.950, min_p = 0.050, typical_p = 1.000, temp = 0.800
    mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000
sampling order: 
CFG -> Penalties -> top_k -> tfs_z -> typical_p -> top_p -> min_p -> temperature 
generate: n_ctx = 262144, n_batch = 2048, n_predict = -1, n_keep = 1


 here's an e-mail with a password," he said. "Now, let's check the email."

They logged into the email account. Sure enough, there was an email with a password.," he said. "Let's see if this works."

He entered the password into the lock. The lock clicked open.," he said, a look of awe on his face. "It worked!"

," he said, a look of awe on his face. "

llama_print_timings:        load time =    3936.91 ms
llama_print_timings:      sample time =       5.25 ms /   106 runs   (    0.05 ms per token, 20190.48 tokens per second)
llama_print_timings: prompt eval time =     439.11 ms /     6 tokens (   73.19 ms per token,    13.66 tokens per second)
llama_print_timings:        eval time =   20514.34 ms /   105 runs   (  195.37 ms per token,     5.12 tokens per second)
llama_print_timings:       total time =   20991.83 ms /   111 tokens
