Update README.md
README.md
---
base_model: meta-llama/Llama-3.1-8B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- lb
- en
---

# Lux-Llama

This repository contains a fine-tuned version of the Llama-3.1-8B-Instruct model, adapted for Luxembourgish. Fine-tuning used LoRA (Low-Rank Adaptation) on a dataset crafted to elicit Chain-of-Thought (CoT) reasoning in Luxembourgish, and was carried out on [Meluxina](https://www.luxprovide.lu/meluxina), a high-performance computing (HPC) platform operated by LuxProvide.

## Model Overview
- **Base Model:** Llama-3.1-8B-Instruct
- **Fine-Tuning Method:** LoRA (Low-Rank Adaptation)
- **Dataset:** Luxembourgish Chain-of-Thought (CoT) dataset
- **Compute Platform:** Meluxina by LuxProvide
- **Fine-Tuning Framework:** [Unsloth](https://github.com/unslothai/unsloth)
- **Status:** Early release. The model and dataset are still being improved, and feedback is welcome.

## About Meluxina
[Meluxina](https://www.luxprovide.lu/meluxina) is Luxembourg's national supercomputer, launched in June 2021 by LuxProvide. It is built on the EVIDEN BullSequana XH2000 platform and provides:
- **18 PetaFlops** of computing power.
- **20 PetaBytes** of storage capacity.
- A **scalable architecture** integrating simulation, modeling, data analytics, and AI.

In the Top500 ranking, Meluxina placed 36th globally and was recognized as the greenest supercomputer in the EU. Named after Luxembourg's legend of the mermaid Melusina, it symbolizes digital innovation and employs water-cooling technology for energy efficiency.

## Features
- **Language:** Luxembourgish
- **Specialization:** Step-by-step (Chain-of-Thought) reasoning for complex problem-solving.
- **Efficiency:** LoRA fine-tuning updates only a small set of adapter weights, keeping computational overhead low while preserving base-model performance.

## Installation
To use the fine-tuned model, install the following dependencies:

```python
%%capture
!pip install unsloth
# Optionally, upgrade to the latest nightly build of Unsloth:
!pip uninstall unsloth -y && pip install --upgrade --no-cache-dir --no-deps git+https://github.com/unslothai/unsloth.git
```
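
Outside a notebook, drop the `%%capture` and `!` prefixes and run the same `pip` commands directly in a shell.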

You can then load the model as follows:

```python
from unsloth import FastLanguageModel
import torch
from transformers import TextStreamer

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "aiplanet/Lux-Llama",
    max_seq_length = 8192,
    dtype = None,          # auto-detect dtype
    load_in_4bit = True,   # 4-bit quantization to reduce memory use
)
FastLanguageModel.for_inference(model)  # Enable native 2x faster inference

alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
{}
### Input:
{}
### Output:
{}"""

inputs = tokenizer(
    [
        alpaca_prompt.format(
            "Proposéiert mir en neit Rezept mat Eeër a Brout",  # instruction ("Suggest me a new recipe with eggs and bread")
            "",  # input
            "",  # output - leave this blank for generation!
        )
    ], return_tensors = "pt").to("cuda")

text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 2048)
```

Output:

```python
# Sécher! Hei ass e leckert Rezept dat Eeër a Brout kombinéiert: Brout Eeër Frittata Mat dësem Rezept kënnt Dir e leckere Brout Eeër Frittata maachen, perfekt fir e früh Moien Frühstück oder e leckeren Snack.

# Zutaten:
# - 4 grouss Eeër
# - 1/2 Coupe geschnidden Brout
# - 1/2 Coupe gerappte Cheddar Kéis
# - 1/2 Coupe gerappte Parmesan Kéis
# - 1/4 Coupe gerappte Mozzarella Kéis
# - 1/4 Coupe gehackte frësche Petersilie
# - Salz a Peffer fir ze schmaachen
# - 2 Esslöffel Olivenueleg

# Instruktioune:
# 1. Den Ofen op 375 ° F (190 ° C) virhëtzen.
# 2. An enger grousser Schossel, d'Eeër, d'Brout, d'Cheddar Kéis, d'Parmesan Kéis, d'Mozzarella Kéis, d'Petersilie, Salz a Peffer mëschen.
# 3. Huelt eng 9-Zoll (23 cm) Liewensmëttel Schossel a fëllt se mat der Eeër Mëschung.
# 4. Dréckt d'Schossel mat Olivenueleg.
# 5. Bake fir ongeféier 35-40 Minutten, oder bis d'Eeër voll gekacht sinn a d'Brout e liicht brong ass.
# 6. Huelt de Frittata aus dem Ofen a léisst et e puer Minutten ofkillen ier Dir et servéiert.
# Genéisst Är lecker Brout Eeër Frittata!
```
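
If you prefer the full completion as a single string rather than streamed tokens, a minimal variation on the snippet above (reusing the same `model`, `tokenizer`, and `inputs`) is:

```python
# Generate without a streamer, then decode the whole sequence at once.
# Note: the decoded text includes the prompt as well as the generated response.
outputs = model.generate(**inputs, max_new_tokens = 2048)
print(tokenizer.batch_decode(outputs, skip_special_tokens = True)[0])
```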

## Fine-Tuning Process
1. **Framework:** The fine-tuning was conducted using [Unsloth](https://github.com/unslothai/unsloth), a library for efficient LoRA-based fine-tuning.
2. **Steps:**
   - Initialized the Llama-3.1-8B-Instruct model.
   - Applied LoRA adapters for efficient training.
   - Fine-tuned on the Luxembourgish CoT dataset on the Meluxina HPC cluster (see the sketch after this list).
3. **Hardware:** NVIDIA A100 GPUs provided by Meluxina.
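
For readers who want to reproduce a similar setup, the sketch below shows a typical Unsloth LoRA recipe. The LoRA rank, target modules, dataset path, and training hyperparameters are illustrative assumptions, not the values used for Lux-Llama:

```python
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

# Load the base model in 4-bit to fit on a single GPU.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "meta-llama/Llama-3.1-8B-Instruct",
    max_seq_length = 8192,
    load_in_4bit = True,
)

# Attach LoRA adapters; rank and target modules are common Unsloth defaults,
# not necessarily those used for Lux-Llama.
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
    lora_alpha = 16,
    lora_dropout = 0,
    bias = "none",
    use_gradient_checkpointing = "unsloth",
)

# Hypothetical dataset file: one "text" field per example, already formatted
# with the alpaca-style prompt shown above.
dataset = load_dataset("json", data_files = "lux_cot.jsonl", split = "train")

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    dataset_text_field = "text",
    max_seq_length = 8192,
    args = TrainingArguments(
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 4,
        num_train_epochs = 1,
        learning_rate = 2e-4,
        output_dir = "outputs",
    ),
)
trainer.train()
```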

## Dataset Description
- *In progress.*

## Benchmarking
- *In progress.*

## Acknowledgments
This work leverages computational resources and support from [Meluxina](https://www.luxprovide.lu/meluxina) by LuxProvide.

<img src="https://www.luxprovide.lu/wp-content/themes/luxprovide2023/public/images/logo/logo_notagline_color_blue.4b07cb.svg" alt="LuxProvide Logo" width="50%">

<img src="https://docs.lxp.lu/FAQ/images/MeluXina_Logo.png" alt="Meluxina Logo" width="50%">