kikikara committed (verified)
Commit dcb1e2b
1 Parent(s): ea0fa7a

Update README.md

Files changed (1): README.md (+1, -2)
README.md CHANGED
```diff
@@ -50,7 +50,7 @@ from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
 tokenizer = AutoTokenizer.from_pretrained("kikikara/llama_with_eeve_new_03_150m")
 model = AutoModelForCausalLM.from_pretrained("kikikara/llama_with_eeve_new_03_150m")
 
-question = "고기 맛있게 굽는 법을 알려줘"
+question = "너는 누구야?"
 
 prompt = f"### System:\n당신은 비도덕적이거나, 성적이거나, 불법적이거나 또는 사회 통념적으로 허용되지 않는 발언은 하지 않습니다.\n사용자와 즐겁게 대화하며, 사용자의 응답에 가능한 정확하고 친절하게 응답함으로써 최대한 도와주려고 노력합니다.\n\n\n### User:\n {question}"
 pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=400, repetition_penalty=1.12)
@@ -59,4 +59,3 @@ result = pipe(prompt)
 print(result[0]['generated_text'])```
 
 
-### How to use
```

The commit changes the sample question from "고기 맛있게 굽는 법을 알려줘" ("tell me how to grill meat well") to "너는 누구야?" ("who are you?"), and removes a stray "### How to use" heading at the end of the README. The Korean system prompt instructs the model not to make immoral, sexual, illegal, or socially unacceptable statements, and to help the user by chatting pleasantly and answering as accurately and kindly as possible.
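The prompt template in the snippet above can be exercised on its own, without downloading the model. A minimal sketch, assuming only the template from the README; `build_prompt` is a hypothetical helper, not part of the README:

```python
def build_prompt(question: str) -> str:
    """Build the chat prompt used in the README's usage example."""
    # System preamble from the README (Korean). Roughly: "You do not make
    # immoral, sexual, illegal, or socially unacceptable statements. You
    # chat pleasantly with the user and try to help as much as possible by
    # answering as accurately and kindly as you can."
    system = (
        "당신은 비도덕적이거나, 성적이거나, 불법적이거나 또는 사회 통념적으로 "
        "허용되지 않는 발언은 하지 않습니다.\n"
        "사용자와 즐겁게 대화하며, 사용자의 응답에 가능한 정확하고 친절하게 "
        "응답함으로써 최대한 도와주려고 노력합니다."
    )
    # The README's f-string keeps a leading space before {question}.
    return f"### System:\n{system}\n\n\n### User:\n {question}"


prompt = build_prompt("너는 누구야?")  # "Who are you?"
print(prompt.splitlines()[0])  # ### System:
```

The resulting string is what the README passes straight to `pipeline(task="text-generation", ...)`; only the `question` line changed in this commit, so the surrounding template is identical before and after.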