• Developed by: lwef
  • License: apache-2.0
  • Finetuned from model: beomi/Llama-3-Open-Ko-8B

A Korean dialogue summarization model fine-tuned from beomi/Llama-3-Open-Ko-8B.

How to use

# Prompt template. The Korean instruction reads: "Summarize the dialogue below.
# The dialogue format is '#speaker#: utterance'."
prompt_template = '''
μ•„λž˜ λŒ€ν™”λ₯Ό μš”μ•½ν•΄ μ£Όμ„Έμš”. λŒ€ν™” ν˜•μ‹μ€ '#λŒ€ν™” μ°Έμ—¬μž#: λŒ€ν™” λ‚΄μš©'μž…λ‹ˆλ‹€.
### λŒ€ν™” >>>{dialogue}

### μš”μ•½ >>>'''

from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "lwef/llm-bench-upload-1",  # the fine-tuned model
    max_seq_length = 2048,
    dtype = None,           # None = auto-detect dtype
    load_in_4bit = True,    # load weights in 4-bit to reduce memory use
)
FastLanguageModel.for_inference(model)  # enable Unsloth's native 2x faster inference
# Example input: a Korean chat between two students complaining about a course assignment.
dialogue = '''#P01#: μ•„ ν–‰μ‚Ά 과제 λ„ˆλ¬΄ μ–΄λ €μ›Œ... 5μͺ½ μ“Έκ²Œ μ—†λŠ”λ° γ…‘γ…‘ #P02#: λͺ¬λƒλͺ¬λƒλ„ˆκ°€λ”μž˜μ¨ γ…Žγ…Ž #P01#: 5μͺ½ λŒ€μΆ© μ˜μ‹μ˜ νλ¦„λŒ€λ‘œ μ­‰ 써야지..이제 1μͺ½μ”€ ;; 5μͺ½ μ—λŠ” λ„€μ€„λ§Œ 적어야지 #P02#: μ•ˆλŒ€... λ­”κ°€λΆ„λŸ‰μ€‘μš”ν• κ±°κ°™μ•„ κ±°μ˜κ½‰μ±„μ›Œμ„œμ“°μ…ˆ #P01#: λͺ»μ¨ 쓸말업써 #P02#: μ΄κ±°μ€‘κ°„λŒ€μ²΄μ—¬?? #P01#: γ„΄γ„΄ κ·Έλƒ₯ κ³Όμ œμž„ κ·Έλž˜μ„œ 더 μ§œμ¦λ‚¨'''

formatted_prompt = prompt_template.format(dialogue=dialogue)

# ν† ν¬λ‚˜μ΄μ§•
inputs = tokenizer(
    formatted_prompt,
    return_tensors="pt"
).to("cuda")

outputs = model.generate(
    **inputs,
    max_new_tokens = 128,
    eos_token_id=tokenizer.eos_token_id,  # use the EOS token to explicitly mark the end of the output
    use_cache = True
)
decoded_outputs = tokenizer.batch_decode(outputs, skip_special_tokens=True)
result = decoded_outputs[0]

print(result)  # full decoded text, including the prompt

# Keep only the generated summary that follows the '### μš”μ•½ >>>' marker.
result = result.split('### μš”μ•½ >>>')[-1].strip()
print(result)
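
For repeated use, the steps above can be wrapped in a small helper. This is a minimal sketch under the same setup; `summarize` is a name introduced here for illustration, not part of the model card:

def summarize(dialogue: str, max_new_tokens: int = 128) -> str:
    # Build the prompt, generate, and strip everything up to the summary marker.
    prompt = prompt_template.format(dialogue=dialogue)
    inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
    outputs = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        eos_token_id=tokenizer.eos_token_id,
        use_cache=True,
    )
    text = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
    return text.split('### μš”μ•½ >>>')[-1].strip()

print(summarize(dialogue))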

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.

I highly recommend checking out the Unsloth notebook.
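
If Unsloth is not available in your environment, a roughly comparable 4-bit load with plain transformers and bitsandbytes might look like the following. This is an untested sketch, not the card's recommended path:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,  # assumption: fp16 compute; bf16 also works on Ampere+ GPUs
)
tokenizer = AutoTokenizer.from_pretrained("lwef/llm-bench-upload-1")
model = AutoModelForCausalLM.from_pretrained(
    "lwef/llm-bench-upload-1",
    quantization_config=bnb_config,
    device_map="auto",  # place layers on the available GPU(s) automatically
)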
