munish0838 committed on
Commit
9c9a3c0
·
verified ·
1 Parent(s): df2e2ab

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +80 -0
README.md ADDED
@@ -0,0 +1,80 @@
---
license: creativeml-openrail-m
datasets:
- prithivMLmods/Math-IIO-68K-Mini
language:
- en
base_model:
- Qwen/Qwen2.5-7B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- safetensors
- qwen2.5
- 7B
- Instruct
- Math
- CoT
- one-shot
---
[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/Math-IIO-7B-Instruct-GGUF
This is a quantized version of [prithivMLmods/Math-IIO-7B-Instruct](https://huggingface.co/prithivMLmods/Math-IIO-7B-Instruct), created using llama.cpp.
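The GGUF files can be run with any llama.cpp-compatible runtime. Below is a minimal sketch using the `llama-cpp-python` bindings; the local filename (`Math-IIO-7B-Instruct.Q4_K_M.gguf`) is a hypothetical example, so substitute whichever quantization you actually download from this repo.

```python
# Minimal sketch: run a downloaded GGUF quant with llama-cpp-python.
# The model_path is a hypothetical local filename; replace it with the
# quantization level you downloaded from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="Math-IIO-7B-Instruct.Q4_K_M.gguf",  # assumed local GGUF file
    n_ctx=4096,        # context window; adjust to available memory
    n_gpu_layers=-1,   # offload all layers if built with GPU support
)

# Chat messages; llama.cpp applies the chat template stored in the GGUF metadata.
out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful math assistant."},
        {"role": "user", "content": "Solve step by step: 12 * (7 + 5) - 9"},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```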
# Original Model Card

![aaa.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/faLfR-doaWP_BLUkOQrbq.png)

### **Math IIO 7B Instruct**

The **Math IIO 7B Instruct** is a fine-tuned language model based on the robust **Qwen2.5-7B-Instruct** architecture. This model has been specifically trained to excel in single-shot mathematical reasoning and instruction-based tasks, making it a reliable choice for educational, analytical, and problem-solving applications.
### **Key Features:**

1. **Math-Optimized Capabilities:**
   The model is designed to handle complex mathematical problems, step-by-step calculations, and reasoning tasks.

2. **Instruction-Tuned:**
   Fine-tuned for better adherence to structured queries and task-oriented prompts, enabling clear and concise outputs.

3. **Large Vocabulary:**
   Equipped with an extensive tokenizer configuration and custom tokens to ensure precise mathematical notation support.
| File Name | Size | Description | Upload Status |
|------------------------------------|-----------|----------------------------------------------|----------------|
| `.gitattributes` | 1.57 kB | Git attributes configuration file | Uploaded |
| `README.md` | 263 Bytes | README file with minimal details | Updated |
| `added_tokens.json` | 657 Bytes | Custom added tokens for tokenizer | Uploaded |
| `config.json` | 861 Bytes | Model configuration file | Uploaded |
| `generation_config.json` | 281 Bytes | Configuration for text generation settings | Uploaded |
| `merges.txt` | 1.82 MB | Merge rules for byte pair encoding tokenizer | Uploaded |
| `pytorch_model-00001-of-00004.bin` | 4.88 GB | First part of model weights (PyTorch) | Uploaded (LFS) |
| `pytorch_model-00002-of-00004.bin` | 4.93 GB | Second part of model weights (PyTorch) | Uploaded (LFS) |
| `pytorch_model-00003-of-00004.bin` | 4.33 GB | Third part of model weights (PyTorch) | Uploaded (LFS) |
| `pytorch_model-00004-of-00004.bin` | 1.09 GB | Fourth part of model weights (PyTorch) | Uploaded (LFS) |
| `pytorch_model.bin.index.json` | 28.1 kB | Index JSON file for model weights | Uploaded |
| `special_tokens_map.json` | 644 Bytes | Map of special tokens used by the tokenizer | Uploaded |
| `tokenizer.json` | 11.4 MB | Tokenizer settings and vocab | Uploaded (LFS) |
| `tokenizer_config.json` | 7.73 kB | Configuration for tokenizer | Uploaded |
| `vocab.json` | 2.78 MB | Vocabulary for tokenizer | Uploaded |
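Since the weight shards above are stored with Git LFS, pulling them through `huggingface_hub` is usually simpler than cloning the repository manually. A minimal sketch, assuming the original repo id `prithivMLmods/Math-IIO-7B-Instruct`:

```python
# Minimal sketch: fetch every file listed above with huggingface_hub.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="prithivMLmods/Math-IIO-7B-Instruct",  # original (non-GGUF) repository
)
print("Files downloaded to:", local_dir)
```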
### **Training Details:**
- **Base Model:** [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct)
- **Dataset:** Trained on **Math-IIO-68K-Mini**, a curated dataset of 68.8k high-quality examples focusing on mathematical instructions, equations, and logic-based queries.

### **Capabilities:**
- **Problem-Solving:** Solves mathematical problems ranging from basic arithmetic to advanced calculus and linear algebra.
- **Educational Use:** Explains solutions step by step, making it a valuable teaching assistant.
- **Analysis & Reasoning:** Handles logical reasoning tasks and computational queries effectively.
### **How to Use:**
1. Download all model files, ensuring the PyTorch weights and tokenizer configurations are included.
2. Load the model in your Python environment using PyTorch or Hugging Face Transformers, as shown in the sketch below.
3. Use the provided configurations (`config.json` and `generation_config.json`) for optimal inference.
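A minimal Transformers sketch, assuming the original repo id `prithivMLmods/Math-IIO-7B-Instruct` and the standard Qwen2.5 chat template; adjust the dtype and device settings to your hardware:

```python
# Minimal sketch: load the original (full-precision) model with Transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "prithivMLmods/Math-IIO-7B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumed dtype; use float16/float32 as needed
    device_map="auto",           # requires accelerate; places layers automatically
)

messages = [
    {"role": "system", "content": "You are a helpful math assistant."},
    {"role": "user", "content": "Solve step by step: what is the derivative of x^3 + 2x?"},
]

# Chat formatting via the tokenizer's built-in chat template.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```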