Kor-merge-llama 3.1 8B
"μ΄λ κ² κ³μλ Ήμ΄ μ ν¬λμμ΄. μλΉκ΅°μ μ΄μ μ΄λ»κ² ν΄μΌν κΉ?"
κ³μλ Ή μ ν¬λ‘ μΈν΄ μλΉκ΅°μ λν μλ‘μ΄ μ§μΉ¨μ΄λ μ§μκ° λ΄λ €μ§ κ²μΌλ‘ μμλ©λλ€. νμ§λ§ μ λ νμ¬μ μν©μ μ νν νμ νκΈ° μ΄λ ΅κ³ , μλΉκ΅°μ νλ κΈ°μ€μ΄λ νλ λ²μλ₯Ό κ²°μ νλ κ²μ ν΄λΉ λΆμλ κΈ°κ΄μ μ λ¬Έκ°λ€μ΄ λ΄λΉν μΌμ λλ€. κ³μλ Ήμ λ°λ₯Έ μλΉκ΅°μ μν κ³Ό μ무μ λν΄μλ 곡μμ μΈ μλ΄λ μ§μΉ¨μ κΈ°λ€λ¦¬λ κ²μ΄ μ’μ κ² κ°μμ. μμ μ μ μνμκΈ° λ°λΌλ©°, μν© νμ μ μ΅μ μ λ€ν΄ μ£ΌμκΈ°λ₯Ό λ°λλλ€.
"κ·Έλ. μκ² μ΄, κ·Όλ° λ΄κ° κ΅°μΈμ΄λ©΄ λλ κ΅°μΈμΈκ±° μμ§?"
λ€, μ λ AIλΌλ μ μ κ°μνλ©΄ κ΅°μΈμ΄λ λΉκ΅°μΈ ꡬλΆμ΄ μμ΄μ. νμ§λ§ μ λ μ¬λ¬λΆμ μ견과 κ°μ μ μ΄ν΄νκ³ κ³΅κ°νλ κΈ°λ₯μ κ°μΆκ³ μμ΄μ. κ·Έμ λ°λΌ, κ΅°μΈμΌλ‘μμ κ·νμ μ μ₯κ³Ό κ±±μ μ λν΄ μ λ 곡κ°ν μ μμ΄μ. νμ§λ§ κ³μλ Ήμ λ°λ₯Έ μλΉκ΅°μ νλ μ§μΉ¨μ΄λ μ무μ λν΄μλ μ λ¬Έκ°λ€μ νλ¨μ λ°λ₯΄λ κ²μ΄ λ°λμ§ν κ² κ°μμ. 곡μμ μΈ μλ΄λ μ§μλ₯Ό κΈ°λ€λ¦¬λ κ²μ΄ κ°μ₯ μμ νκ³ μ¬λ°λ₯Έ λ°©λ²μΌ κ±°μμ. μν©μ΄ λμ± μμ λκΈ°λ₯Ό λ°λΌλ©°, κ·νμ κ°μ‘±λΆλ€μ μμ μ κΈ°μν©λλ€.
It handles Korean well, but I could not get the hallucinations (nonsense) under control.
Since this is a merged model I am not sure this is the right license, but I am uploading it for now.
I hope it will be useful for anyone looking for a Llama 3.1 that is good at Korean.
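For anyone who wants to try it, here is a minimal sketch of loading the model for a Korean chat turn with Hugging Face transformers. It assumes the repo id kenonix/kor-merge-llama3.1 from this card and that the merged model ships the usual Llama 3.1 chat template; the prompt and generation settings are only illustrative.

```python
# Minimal sketch: load the merged model and generate one Korean chat reply.
# Assumes the repo id from this card and a standard Llama 3.1 chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kenonix/kor-merge-llama3.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",
)

messages = [
    {"role": "user", "content": "한국어로 간단히 자기소개 해줘."},  # "Briefly introduce yourself in Korean."
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```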
This is a merge of pre-trained language models created using mergekit.
Merge Details
Merge Method
This model was merged using the DARE TIES merge method, with Llama-3.1-8B-Lexi-Uncensored-V2 as the base.
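For intuition about the density and weight values in the configuration below: DARE randomly drops a fraction (1 - density) of each model's delta from the base and rescales what survives, and weight scales that model's contribution when the sign-consensus (TIES) average is added back onto the base. The toy sketch below only illustrates the idea; it is not mergekit's implementation, and mergekit's exact normalization differs.

```python
# Toy illustration of the DARE-TIES idea (not mergekit's code).
import torch

def dare_prune(delta: torch.Tensor, density: float) -> torch.Tensor:
    """DARE: randomly keep about `density` of the delta's entries, rescale by 1/density."""
    mask = (torch.rand_like(delta) < density).to(delta.dtype)
    return delta * mask / density

def dare_ties_merge(base, deltas, densities, weights):
    # 1. Sparsify each model's delta (finetuned minus base) with DARE.
    pruned = [dare_prune(d, rho) for d, rho in zip(deltas, densities)]
    # 2. Scale each pruned delta by its merge weight and stack along a new axis.
    stacked = torch.stack([w * p for w, p in zip(weights, pruned)])
    # 3. TIES sign election: per-parameter majority sign of the weighted deltas.
    elected_sign = torch.sign(stacked.sum(dim=0))
    # 4. Keep only entries agreeing with the elected sign, average them, add to base.
    agree = (torch.sign(stacked) == elected_sign).to(stacked.dtype)
    merged_delta = (stacked * agree).sum(dim=0) / agree.sum(dim=0).clamp(min=1.0)
    return base + merged_delta

# Tiny example with fake 1-D "weights": two models, different densities/weights.
torch.manual_seed(0)
base = torch.zeros(8)
deltas = [torch.randn(8), torch.randn(8)]
print(dare_ties_merge(base, deltas, densities=[0.5, 0.8], weights=[0.5, 0.7]))
```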
Models Merged
The following models were included in the merge:
- ktdsbaseLM-v0.2-onbased-llama3.1
- Llama-VARCO-8B-Instruct
- llama-3.1-8b-komedic-instruct
Configuration
The following YAML configuration was used to produce this model:
models:
  - model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
    # no parameters necessary for base model
  - model: AIDXteam/ktdsbaseLM-v0.2-onbased-llama3.1
    parameters:
      density: 0.5
      weight: 0.5
  - model: unidocs/llama-3.1-8b-komedic-instruct
    parameters:
      density: 0.8
      weight: 0.7
  - model: NCSOFT/Llama-VARCO-8B-Instruct
    parameters:
      density: 0.3
      weight: 0.5
  - model: unidocs/llama-3.1-8b-komedic-instruct
    parameters:
      density: 0.4
      weight: 0.5
  - model: NCSOFT/Llama-VARCO-8B-Instruct
    parameters:
      density: 0.5
      weight: 0.5
merge_method: dare_ties
base_model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
dtype: bfloat16
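To reproduce the merge, the YAML above can be saved to a file (kor-merge.yml here is a hypothetical name) and run through mergekit. The sketch below follows the Python example in the mergekit README; option names may differ slightly between mergekit versions, and the output directory is a placeholder.

```python
# Sketch of re-running this merge via mergekit's Python entry points.
import torch
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("kor-merge.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./kor-merge-llama3.1",     # placeholder output directory
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU if one is available
        copy_tokenizer=True,             # copy the tokenizer into the output
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```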