prithivMLmods
committed on
Update README.md
README.md CHANGED
@@ -14,7 +14,7 @@ tags:
 - text-generation-inference
 - coco
 ---
-# **COCO-7B-Instruct [chain of continuesness]**
+# **COCO-7B-Instruct 1M [chain of continuesness]**
 
 COCO-7B-Instruct `[ chain of continuesness ]` is based on a 7B-parameter architecture, optimized for instruction-following tasks and advanced reasoning capabilities. Fine-tuned on a diverse set of datasets and leveraging chain-of-thought (CoT) reasoning, it excels in understanding contexts, solving mathematical problems, and generating detailed, structured responses. Its lightweight architecture ensures efficiency while maintaining performance, making it suitable for applications requiring logical reasoning, concise explanations, and multi-step problem-solving.
 
@@ -33,7 +33,7 @@ Below is a code snippet demonstrating how to load the tokenizer and model for co
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
-model_name = "prithivMLmods/COCO-7B-Instruct"
+model_name = "prithivMLmods/COCO-7B-Instruct-1M"
 
 model = AutoModelForCausalLM.from_pretrained(
     model_name,
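The snippet in the diff breaks off after `model_name,`. A fuller loading-and-generation sketch built on the standard `transformers` chat-template API is below; the helper names `build_messages` and `generate_reply`, the system prompt, and the generation settings are illustrative assumptions, not part of the model card:

```python
from typing import Dict, List

MODEL_NAME = "prithivMLmods/COCO-7B-Instruct-1M"  # repo id as updated in this commit


def build_messages(system_prompt: str, user_prompt: str) -> List[Dict[str, str]]:
    # Chat-format messages consumed by tokenizer.apply_chat_template.
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]


def generate_reply(user_prompt: str, max_new_tokens: int = 256) -> str:
    # Heavy imports kept inside the function so the module can be
    # imported without transformers/torch installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_NAME,
        torch_dtype="auto",  # use the checkpoint's native dtype
        device_map="auto",   # place layers on available devices
    )
    messages = build_messages("You are a helpful assistant.", user_prompt)
    text = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer([text], return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Strip the prompt tokens, keeping only the newly generated reply.
    reply_ids = output_ids[0][inputs.input_ids.shape[-1]:]
    return tokenizer.decode(reply_ids, skip_special_tokens=True)
```

Calling `generate_reply("Explain CoT reasoning in one paragraph.")` would download the checkpoint on first use; `torch_dtype="auto"` and `device_map="auto"` are the usual defaults for a 7B model on a single GPU.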