---
library_name: transformers
tags:
- llm-jp
- japanese
- instruction-tuning
---

# Model Card for yuhkis/llm-jp-3-13b-finetune

## Model Details

### Model Description

This is a LoRA-tuned version of llm-jp/llm-jp-3-13b, fine-tuned on the ichikara-instruction dataset.

- **Developed by:** Yuhki Shiraishi
- **Model type:** Instruction-tuned Japanese Language Model
- **Language:** Japanese
- **License:** CC-BY-NC-SA
- **Finetuned from model:** llm-jp/llm-jp-3-13b

## Uses

### Output Generation and Format

#### Implementation Details

To generate model outputs and save them in the required JSONL submission format:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel
import torch
from tqdm import tqdm
import json
import os

# Hugging Face access token (assumed to be set in the environment)
HF_TOKEN = os.environ.get("HF_TOKEN")

# Load model and tokenizer with 4-bit (NF4) quantization
model_id = "yuhkis/llm-jp-3-13b-finetune"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
    token=HF_TOKEN
)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True, token=HF_TOKEN)

# Generate outputs.
# `datasets` is assumed to be a list of {"task_id": ..., "input": ...} dicts,
# e.g. loaded from the evaluation task file:
# datasets = [json.loads(line) for line in open("tasks.jsonl", encoding="utf-8")]
results = []
for data in tqdm(datasets):
    input_text = data["input"]
    # Prompt template: "### 指示" (instruction) followed by "### 回答" (answer)
    prompt = f"""### 指示
{input_text}
### 回答
"""
    
    tokenized_input = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt").to(model.device)
    attention_mask = torch.ones_like(tokenized_input)
    
    with torch.no_grad():
        outputs = model.generate(
            tokenized_input,
            attention_mask=attention_mask,
            max_new_tokens=100,
            do_sample=False,
            repetition_penalty=1.2,
            pad_token_id=tokenizer.eos_token_id
        )[0]
    output = tokenizer.decode(outputs[tokenized_input.size(1):], skip_special_tokens=True)
    
    results.append({"task_id": data["task_id"], "output": output})

# Save results to JSONL file
with open("results.jsonl", 'w', encoding='utf-8') as f:
    for result in results:
        json.dump(result, f, ensure_ascii=False)
        f.write('\n')
```

#### Output Format Specification

Required fields in the JSONL output:
- `task_id`: Task identifier (integer)
- `output`: Generated response (string)

Example output format:
```json
{"task_id": 0, "output": "応答テキスト"}
```

Note: While additional fields (e.g., `input`, `eval_aspect`) may be included, only `task_id` and `output` are required for submission.
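
As a sanity check before submission, the generated file can be validated against this specification. A minimal sketch, assuming the `results.jsonl` file produced by the example above (the check is illustrative, not part of any official tooling):

```python
import json

# Verify that every line has the required fields with the expected types:
# "task_id" (integer) and "output" (string).
with open("results.jsonl", encoding="utf-8") as f:
    for line_no, line in enumerate(f, start=1):
        record = json.loads(line)
        assert isinstance(record.get("task_id"), int), f"line {line_no}: missing or invalid task_id"
        assert isinstance(record.get("output"), str), f"line {line_no}: missing or invalid output"
print("results.jsonl passes the format check")
```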

### Out-of-Scope Use

This model should not be used for:
- Commercial applications due to license restrictions
- Critical decision-making without human oversight
- Applications requiring strict reliability guarantees

## Bias, Risks, and Limitations

- The model inherits biases from its training data
- Output quality may vary depending on input complexity
- The model should not be used for making critical decisions without human oversight

### Recommendations

Users should be aware of the model's limitations and verify its outputs before relying on them in downstream applications.

## Training Details

### Training Data

- Dataset: ichikara-instruction (Japanese instruction-tuning dataset)

### Training Procedure 

- **Training regime:** bf16 mixed precision
- **Library:** 🤗 Transformers
- **Optimization:** LoRA (Low-Rank Adaptation)
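
For reference, a minimal sketch of how such a LoRA setup can be expressed with 🤗 PEFT. The rank, alpha, dropout, and target modules below are illustrative assumptions; the actual hyperparameters used for this model are not documented in this card:

```python
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Base model in bf16, matching the mixed-precision training regime
base_model = AutoModelForCausalLM.from_pretrained(
    "llm-jp/llm-jp-3-13b",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Hypothetical LoRA configuration (r, alpha, dropout, target_modules are assumptions)
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the low-rank adapter weights are trainable
```

Only the adapter weights are updated during fine-tuning, which keeps the memory footprint far below that of full 13B-parameter training.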

## Technical Specifications

### Model Architecture

- Base model: LLM-jp-3-13b
- Adaptation method: LoRA
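
If the repository distributes the LoRA adapter rather than merged weights, the adapter can also be attached explicitly to the base model via PEFT. A sketch under that assumption:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then apply the LoRA adapter from this repository.
# Assumes the repo contains a PEFT adapter (adapter_config.json + adapter weights).
base_model = AutoModelForCausalLM.from_pretrained(
    "llm-jp/llm-jp-3-13b",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, "yuhkis/llm-jp-3-13b-finetune")
tokenizer = AutoTokenizer.from_pretrained("llm-jp/llm-jp-3-13b")
model.eval()
```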

## Citation

**BibTeX:**
```bibtex
@misc{shiraishi2024llm,
    title={LLM-jp-3-13b-finetune: Instruction-tuned Japanese Language Model},
    author={Yuhki Shiraishi},
    year={2024},
    publisher={Hugging Face},
    howpublished={\url{https://huggingface.co/yuhkis/llm-jp-3-13b-finetune}}
}
```

**Base Model Citation:**
```bibtex
@misc{llm-jp2024,
    title={LLM-jp-3: Large Language Model for Japanese},
    author={LLM-jp Project Team},
    year={2024},
    publisher={Hugging Face},
    howpublished={\url{https://huggingface.co/llm-jp/llm-jp-3-13b}}
}
```

**Training Data Citation:**
```
Satoshi Sekine, Maya Ando, Michiko Goto, Hisami Suzuki, Daisuke Kawahara, Naoya Inoue, Kentaro Inui.
ichikara-instruction: Constructing Japanese Instruction Data for LLMs.
The 30th Annual Meeting of the Association for Natural Language Processing (NLP2024)
```

## Model Card Contact

**Primary Contact:**
- Name: Yuhki Shiraishi
- GitHub: [@yuhkis](https://github.com/yuhkis)

For questions regarding this model, please open an issue in the GitHub repository or start a discussion on the model's Hugging Face page.

Please include "LLM-jp-3-13b-finetune" in the subject line of any correspondence.