kobart-news
- This model is a KoBART model fine-tuned on the 문서요약 텍스트/신문기사 (Document Summarization Text / Newspaper Articles) dataset using Ainize Teachable-NLP.
Usage
Python Code
from transformers import PreTrainedTokenizerFast, BartForConditionalGeneration
# Load Model and Tokenizer
tokenizer = PreTrainedTokenizerFast.from_pretrained("ainize/kobart-news")
model = BartForConditionalGeneration.from_pretrained("ainize/kobart-news")
# Encode Input Text
input_text = '국내 전반적인 경기침체로 상가 건물주의 수익도 전국적인 감소세를 보이고 있는 것으로 나타났다. 수익형 부동산 연구개발기업 상가정보연구소는 한국감정원 통계를 분석한 결과 전국 중대형 상가 순영업소득(부동산에서 발생하는 임대수입, 기타수입에서 제반 경비를 공제한 순소득)이 1분기 ㎡당 3만4200원에서 3분기 2만5800원으로 감소했다고 17일 밝혔다. 수도권, 세종시, 지방광역시에서 순영업소득이 가장 많이 감소한 지역은 3분기 1만3100원을 기록한 울산으로, 1분기 1만9100원 대비 31.4% 감소했다. 이어 대구(-27.7%), 서울(-26.9%), 광주(-24.9%), 부산(-23.5%), 세종(-23.4%), 대전(-21%), 경기(-19.2%), 인천(-18.5%) 순으로 감소했다. 지방 도시의 경우도 비슷했다. 경남의 3분기 순영업소득은 1만2800원으로 1분기 1만7400원 대비 26.4% 감소했으며 제주(-25.1%), 경북(-24.1%), 충남(-20.9%), 강원(-20.9%), 전남(-20.1%), 전북(-17%), 충북(-15.3%) 등도 감소세를 보였다. 조현택 상가정보연구소 연구원은 "올해 내수 경기의 침체된 분위기가 유지되며 상가, 오피스 등을 비롯한 수익형 부동산 시장의 분위기도 경직된 모습을 보였고 오피스텔, 지식산업센터 등의 수익형 부동산 공급도 증가해 공실의 위험도 늘었다"며 "실제 올 3분기 전국 중대형 상가 공실률은 11.5%를 기록하며 1분기 11.3% 대비 0.2% 포인트 증가했다"고 말했다. 그는 "최근 소셜커머스(SNS를 통한 전자상거래), 음식 배달 중개 애플리케이션, 중고 물품 거래 애플리케이션 등의 사용 증가로 오프라인 매장에 영향을 미쳤다"며 "향후 지역, 콘텐츠에 따른 상권 양극화 현상은 심화될 것으로 보인다"고 덧붙였다.'
input_ids = tokenizer.encode(input_text, return_tensors="pt")
# Generate Summary Text Ids
summary_text_ids = model.generate(
    input_ids=input_ids,
    bos_token_id=model.config.bos_token_id,
    eos_token_id=model.config.eos_token_id,
    length_penalty=2.0,
    max_length=142,
    min_length=56,
    num_beams=4,
)
# Decode Summary Text
print(tokenizer.decode(summary_text_ids[0], skip_special_tokens=True))
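If you only need a quick result, the same checkpoint can also be driven through the transformers summarization pipeline. The sketch below is a minimal alternative, assuming AutoTokenizer can resolve this checkpoint's tokenizer files; the generation arguments simply mirror the example above.
from transformers import pipeline

# Minimal sketch: a summarization pipeline around the same checkpoint
summarizer = pipeline("summarization", model="ainize/kobart-news")

# Reuse the Korean article from above; extra keyword arguments are forwarded to generate()
result = summarizer(input_text, max_length=142, min_length=56, num_beams=4)
print(result[0]["summary_text"])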
API and Demo
You can experience this model through ainize-api and ainize-demo.
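If you prefer to call the hosted version instead of loading the model locally, a request against the Ainize-hosted service might look roughly like the sketch below. The URL and the field name are placeholders, not the actual ainize-api contract; check the linked API page for the real endpoint and parameters.
import requests

# Placeholder endpoint and field name -- replace with the values shown on the ainize-api page
API_URL = "https://<ainize-api-endpoint>/summarize"

response = requests.post(API_URL, data={"text": input_text})
print(response.status_code, response.text)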