prithivMLmods committed
Commit b2181a9 · verified · 1 Parent(s): fa0f619

Update FT Base

Files changed (1)
  1. README.md +2 -6
README.md CHANGED
@@ -1,13 +1,11 @@
-
 ---
-
 license: creativeml-openrail-m
 datasets:
 - prithivMLmods/Math-IIO-68K-Mini
 language:
 - en
 base_model:
-- Qwen/Qwen2.5-7B-Instruct
+- prithivMLmods/Math-IIO-7B-Instruct
 pipeline_tag: text-generation
 library_name: transformers
 tags:
@@ -18,7 +16,6 @@ tags:
 - Math
 - CoT
 - one-shot
-
 ---
 
 [![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)
@@ -76,5 +73,4 @@ The **Math IIO 7B Instruct** is a fine-tuned language model based on the robust
 ### **How to Use:**
 1. Download all model files, ensuring the PyTorch weights and tokenizer configurations are included.
 2. Load the model in your Python environment using frameworks like PyTorch or Hugging Face Transformers.
-3. Use the provided configurations (`config.json` and `generation_config.json`) for optimal inference.
-
+3. Use the provided configurations (`config.json` and `generation_config.json`) for optimal inference.
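
For context on steps 2 and 3 of the "How to Use" list in the diff above, here is a minimal sketch of loading the model with Hugging Face Transformers. It assumes the updated `base_model` ID `prithivMLmods/Math-IIO-7B-Instruct` is available on the Hub and that `torch` and `accelerate` are installed; the prompt and generation settings are illustrative only, not taken from the commit.

```python
# Minimal sketch: load the fine-tuned checkpoint with Hugging Face Transformers.
# The model ID matches the updated `base_model` field; the prompt below is illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "prithivMLmods/Math-IIO-7B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the dtype recorded in config.json
    device_map="auto",    # requires `accelerate`; places layers on available devices
)

# generation_config.json shipped with the repo is applied automatically by generate().
messages = [{"role": "user", "content": "Solve step by step: 12 * 7 + 5 = ?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The chat template is used because this is an instruct-tuned model; plain-text prompting would bypass the formatting the fine-tune expects.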