## Model Card for gemma-2-2b-jpn-it-translate-gguf

gemma-2-2b-jpn-it-translate-gguf is an SLM (Small Language Model) specialized for Japanese-English and English-Japanese translation tasks. Despite having only 2 billion parameters (2B), it provides translation quality approaching that of conventional 7-billion-parameter (7B) models for some kinds of text. With a relatively small file size of about 2 GB, it enables fast execution.

### Sample Colab Script

If you have a Google account, you can try it out by clicking the Open in Colab button at the link below.

[Colab sample](https://github.com/webbigdata-jp/python_sample/blob/main/gemma_2_2b_jpn_it_translate_gguf_Free_Colab_sample.ipynb)

### Sample for Windows

Below is a sample of running the model in client/server mode.

Start the server:

```
.\llama.cpp\build\bin\Release\llama-server -m .\gemma-2-2b-jpn-it-translate-Q4_K_L.gguf -c 2048 --override-kv tokenizer.ggml.add_bos_token=bool:false
```
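Once the server is up, any HTTP client can send requests to it. Below is a minimal Python sketch that posts to llama-server's `/completion` endpoint; the server URL assumes llama-server's default port 8080, and the prompt wording and generation settings are illustrative assumptions, not the model's canonical template — check the model card for the exact prompt format the model was trained on.

```python
import json
import urllib.request

# llama-server listens on port 8080 by default; adjust if you passed --port.
SERVER_URL = "http://127.0.0.1:8080/completion"


def build_payload(text: str, target_lang: str = "English") -> dict:
    """Build a /completion request body. The prompt template is an assumption."""
    prompt = f"Translate the following text into {target_lang}.\n\n{text}\n"
    return {
        "prompt": prompt,
        "n_predict": 512,    # cap the number of generated tokens
        "temperature": 0.0,  # deterministic decoding suits translation
    }


def translate(text: str, target_lang: str = "English") -> str:
    """POST the payload to the running llama-server and return the completion."""
    req = urllib.request.Request(
        SERVER_URL,
        data=json.dumps(build_payload(text, target_lang)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["content"]
```

Call `translate("こんにちは、世界!")` while the server from the command above is running to get the English translation back as a string.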