prithivMLmods committed
Commit 81ab1a8 · verified · 1 parent: 490eee9

Update README.md

Files changed (1): README.md (+76 −1)
README.md CHANGED

library_name: transformers
tags:
- qwq
- reasoning
---
# **QwQ-Math-IO-500M [Qwen Base]**

QwQ-Math-IO-500M is a fine-tuned variant of Qwen2.5-0.5B, optimized for mathematical problem-solving, input-output reasoning, and text generation tasks. The model contains 494 million parameters and is stored in FP16 for efficient inference. It builds on the Qwen2.5 architecture, with additional fine-tuning aimed at structured reasoning, complex mathematical operations, and multilingual support.

---

## **Key Features**

1. **Base Model**: Derived from [Qwen/Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B).
2. **Fine-Tuned on Instruction and Math Data**: Further trained on specialized datasets for better instruction-following and mathematical reasoning.
3. **Specialization**:
   - Advanced mathematical problem-solving and reasoning.
   - Input-output tasks with structured outputs such as JSON and tables (see the sketch after this list).
   - Long-form content generation.
   - Multilingual capabilities (over 29 languages).
4. **Long-Context Support**: Handles extended input contexts (Qwen2.5-0.5B is specified for up to 32K input tokens) with generation of up to 8K tokens.

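As a quick illustration of the structured-output use, here is a minimal sketch that prompts for JSON and parses the result. The prompt wording and the `max_new_tokens` value are illustrative choices, not part of the model card, and since this is a base (non-chat) model the output format is not guaranteed, hence the defensive `json.loads`.

```python
import json

from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("prithivMLmods/QwQ-Math-IO-500M")
model = AutoModelForCausalLM.from_pretrained("prithivMLmods/QwQ-Math-IO-500M")

# Completion-style prompt that ends right where the JSON should begin.
prompt = 'Solve 2x + 5 = 15 and answer as JSON with keys "steps" and "answer".\nJSON: '
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)

# Decode only the newly generated tokens, not the prompt.
completion = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)

try:
    print(json.loads(completion.strip()))
except json.JSONDecodeError:
    print("Raw completion:", completion)
```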
---

## **Datasets Used**

The model was fine-tuned on high-quality datasets curated for Chain-of-Thought (CoT) reasoning, mathematical problem-solving, and long-context tasks. Notable datasets include:

1. **[amphora/QwQ-LongCoT-130K](https://huggingface.co/datasets/amphora/QwQ-LongCoT-130K)**: 133k samples focused on complex CoT reasoning.
2. **[qingy2024/QwQ-LongCoT-Verified-130K](https://huggingface.co/datasets/qingy2024/QwQ-LongCoT-Verified-130K)**: 467k verified samples emphasizing detailed step-by-step reasoning.
3. **[gghfez/QwQ-LongCoT-130K-cleaned](https://huggingface.co/datasets/gghfez/QwQ-LongCoT-130K-cleaned)**: 125k cleaned samples for high-accuracy reasoning tasks.

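To inspect one of these corpora, a minimal sketch with the `datasets` library (the `train` split name is an assumption, and column names differ between the three datasets, so the sketch prints the schema rather than assuming one):

```python
from datasets import load_dataset

# Stream a single example rather than downloading the whole dataset.
ds = load_dataset("amphora/QwQ-LongCoT-130K", split="train", streaming=True)
example = next(iter(ds))

for key, value in example.items():
    print(key, "->", str(value)[:120])  # preview each field
```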
---

## **Running the Model**

To run the model with the Transformers library (`device_map="auto"` requires the `accelerate` package):

```python
# pip install transformers torch accelerate

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("prithivMLmods/QwQ-Math-IO-500M")
model = AutoModelForCausalLM.from_pretrained(
    "prithivMLmods/QwQ-Math-IO-500M",
    torch_dtype=torch.float16,
    device_map="auto",
)

input_text = "Solve the equation: 2x + 5 = 15."
# Move the inputs to whichever device the model was placed on,
# instead of hard-coding "cuda".
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

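`generate` defaults to greedy decoding, which is usually the right choice for math answers; open-ended generation often benefits from sampling. A sketch of both, reusing `model` and `inputs` from the snippet above (the parameter values are illustrative, not tuned recommendations):

```python
# Deterministic decoding: reproducible, usually preferable for math.
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# Sampled decoding: more varied text for open-ended generation.
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    repetition_penalty=1.1,
)
```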
---

## **Limitations**

1. **Bias and Fairness**: Despite fine-tuning efforts, biases from the training data may persist. Users should critically assess model outputs.
2. **Contextual Understanding**: While optimized for long contexts, the model may occasionally misinterpret highly ambiguous prompts.
3. **Mathematical Accuracy**: Although fine-tuned for math tasks, answers to complex or highly specialized problems should be verified.
4. **Real-Time Knowledge**: The model's knowledge is limited to its training data and does not include real-time or post-training updates.
5. **Safety Considerations**: Safety alignment has been performed, but users should monitor outputs to avoid inappropriate content.
6. **Resource Requirements**: Efficient inference benefits from a GPU with sufficient memory, though at 494M parameters the model is comparatively light (see the estimate below).

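As a rough, back-of-the-envelope estimate (weights only; activations and the KV cache add to this at inference time), the FP16 checkpoint needs about 1 GB of memory:

```python
# FP16 weight-memory estimate for a 494M-parameter model.
params = 494_000_000   # parameter count from the model card
bytes_per_param = 2    # FP16 = 2 bytes per parameter
print(f"~{params * bytes_per_param / 1024**3:.2f} GiB of weights")  # ~0.92 GiB
```

So the model fits comfortably on entry-level GPUs and can also run on CPU at reduced speed.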
---

## **Intended Use Cases**

1. **Mathematical Assistance**: Solving equations, performing calculations, and explaining mathematical concepts.
2. **Conversational AI**: Enhanced dialogue capabilities with nuanced understanding and context retention.
3. **Educational Assistance**: Generating detailed explanations, tutorials, and step-by-step guides.
4. **Content Creation**: Assisting in writing blogs, articles, and creative content.
5. **Multilingual Applications**: Supporting content generation and translation across multiple languages.
6. **Data Generation**: Producing structured outputs such as JSON and tables for various applications.
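
For the multilingual use cases, prompting works the same way as in the Running the Model snippet; a minimal sketch reusing `model` and `tokenizer` from that section (the French prompt is an illustrative example):

```python
prompt = "Résous l'équation : 3x - 7 = 11."  # a math prompt in French
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```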