prithivMLmods committed
Commit db0c74f · verified · 1 Parent(s): aba0a29

Update README.md

Files changed (1): README.md (+1, -1)
README.md CHANGED
@@ -11,7 +11,7 @@ tags:
 ---
 # **QwQ-R1-Distill-7B-CoT**
 
-QwQ-R1-Distill-7B-CoT is based on the LLaMA model, which was distilled by DeepSeek-R1-Distill-Qwen-7B. It has been fine-tuned on the long chain-of-thought reasoning model and specialized datasets, focusing on chain-of-thought (CoT) reasoning for problem-solving. This model is optimized for tasks requiring logical reasoning, detailed explanations, and multi-step problem-solving, making it ideal for applications such as instruction-following, text generation, and complex reasoning tasks.
+QwQ-R1-Distill-7B-CoT is based on the *Qwen [ KT ] model*, which was distilled by DeepSeek-R1-Distill-Qwen-7B. It has been fine-tuned on the long chain-of-thought reasoning model and specialized datasets, focusing on chain-of-thought (CoT) reasoning for problem-solving. This model is optimized for tasks requiring logical reasoning, detailed explanations, and multi-step problem-solving, making it ideal for applications such as instruction-following, text generation, and complex reasoning tasks.
 
 # **Quickstart with Transformers**
 
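The commit touches only the model description; the body of the "Quickstart with Transformers" section is not part of this diff. For context, below is a minimal sketch of what such a Transformers quickstart typically looks like, assuming the repository id `prithivMLmods/QwQ-R1-Distill-7B-CoT` (inferred from the commit author and model name, not stated in this diff).

```python
# Minimal sketch: load the model with Hugging Face Transformers and run one
# CoT-style prompt. The repository id below is an assumption inferred from the
# commit author and model name, not confirmed by this diff.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "prithivMLmods/QwQ-R1-Distill-7B-CoT"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick a dtype appropriate for the hardware
    device_map="auto",    # place the 7B model across available devices
)

messages = [
    {"role": "user", "content": "Solve step by step: if 3x + 5 = 20, what is x?"}
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```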