---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen2.5-7B-Instruct-1M
pipeline_tag: text-generation
library_name: transformers
tags:
- qwen
- cot
- chain_of_thought
- qwen2.5
- text-generation-inference
- coco
model-index:
- name: COCO-7B-Instruct-1M
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: wis-k/instruction-following-eval
split: train
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 47.43
name: averaged accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FCOCO-7B-Instruct-1M
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: SaylorTwift/bbh
split: test
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 34.68
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FCOCO-7B-Instruct-1M
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: lighteval/MATH-Hard
split: test
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 30.29
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FCOCO-7B-Instruct-1M
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
split: train
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 7.72
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FCOCO-7B-Instruct-1M
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 13.51
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FCOCO-7B-Instruct-1M
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 35.4
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FCOCO-7B-Instruct-1M
name: Open LLM Leaderboard
---
# **COCO-7B-Instruct-1M [chain of continuousness]**
COCO-7B-Instruct-1M `[ chain of continuousness ]` is built on Qwen2.5-7B-Instruct-1M, a 7B-parameter architecture optimized for instruction-following tasks and advanced reasoning. Fine-tuned on a diverse set of datasets and leveraging chain-of-thought (CoT) reasoning, it excels at understanding context, solving mathematical problems, and generating detailed, structured responses. Its lightweight architecture keeps inference efficient while maintaining performance, making it suitable for applications that require logical reasoning, concise explanations, and multi-step problem solving.
Key improvements include:
1. **Enhanced Instruction Following**: This model is designed to precisely follow complex instructions and generate coherent, concise outputs, even for nuanced or multi-layered prompts.
2. **Optimized Reasoning Capabilities**: Improved reasoning for mathematical problem-solving, logical deduction, and critical thinking, supported by CoT methodologies.
3. **Lightweight Efficiency**: With only 7B parameters, it requires fewer computational resources than larger models while maintaining competitive performance.
4. **Context and Structure**: Exceptional at handling structured data inputs like tables and JSON, generating well-organized outputs ideal for practical applications.
5. **Extended Content Generation**: Generates up to 4K tokens in a single response, allowing for efficient long-form content generation.
6. **Multilingual Proficiency**: Supports 15 languages, including English, Spanish, French, German, Italian, Portuguese, and others, enabling global accessibility.
# **Quickstart with transformers**
Below is a code snippet demonstrating how to load the tokenizer and model for content generation:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/COCO-7B-Instruct-1M"

# Load the model and tokenizer; device_map="auto" places weights on the available GPU(s)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "What are the key features of large language models?"
messages = [
    {"role": "system", "content": "You are COCO, a helpful and concise assistant."},
    {"role": "user", "content": prompt}
]

# Build the chat-formatted prompt string expected by the model
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate, then strip the prompt tokens so only the new response remains
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=256
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
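For long-form outputs (the card cites up to 4K tokens per response), it can help to stream tokens as they are produced rather than wait for the full completion. Below is a minimal sketch using `transformers.TextStreamer`, reusing the `model` and `tokenizer` from the snippet above; the prompt and `max_new_tokens` value are illustrative choices, not settings from this card.
```python
from transformers import TextStreamer

# Reuses `model` and `tokenizer` loaded in the quickstart above.
messages = [
    {"role": "system", "content": "You are COCO, a helpful and concise assistant."},
    {"role": "user", "content": "Write a step-by-step guide to chain-of-thought prompting."}
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Print tokens to stdout as they are generated instead of waiting for the full output
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(
    **model_inputs,
    max_new_tokens=2048,  # illustrative value; raise toward the ~4K output limit for longer pieces
    streamer=streamer
)
```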
# **Intended Use**
1. **Instruction-Following**:
Ideal for tasks requiring detailed responses, precise reasoning, and clear communication in an instruction-following format.
2. **Reasoning and Problem-Solving**:
Capable of handling logical reasoning, context understanding, and multi-step problem-solving tasks with accuracy.
3. **Code Generation**:
Suitable for coding tasks such as writing, debugging, and optimizing code in popular programming languages.
4. **Data Analysis**:
Specialized in processing structured data (tables, JSON) and generating structured outputs for workflows; a minimal JSON-extraction sketch follows this list.
5. **Multilingual Accessibility**:
Enables global use cases, such as content creation, translation, and multilingual chatbots.
6. **Content Generation**:
Designed to generate informative and long-form content such as reports, articles, or guides in a clear, concise format.
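As an illustration of the data-analysis use case above, the sketch below asks the model for a JSON-only reply and parses it. The schema, prompt, and parsing are illustrative assumptions rather than an official recipe, and it reuses `model` and `tokenizer` from the quickstart.
```python
import json

# Reuses `model` and `tokenizer` from the quickstart; asks for a JSON-only reply.
messages = [
    {"role": "system", "content": "You are COCO. Reply with valid JSON only, no extra text."},
    {"role": "user", "content": (
        "Return a JSON object with keys name, year, field extracted from: "
        "'Ada Lovelace published her notes on the Analytical Engine in 1843.'"
    )}
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(**model_inputs, max_new_tokens=128)
reply = tokenizer.batch_decode(
    [out[len(inp):] for inp, out in zip(model_inputs.input_ids, generated_ids)],
    skip_special_tokens=True
)[0]

# The model is not guaranteed to emit valid JSON, so guard the parse.
try:
    record = json.loads(reply)
except json.JSONDecodeError:
    record = None
print(record)
```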
# **Limitations**
1. **Hardware Efficiency**:
Although it needs fewer computational resources than larger models, it still benefits from high-memory GPUs for faster inference.
2. **Limited Multilingual Depth**:
Though proficient in 15 languages, performance may vary for less-resourced languages.
3. **Creative Writing Challenges**:
May produce inconsistent results in highly subjective or creative tasks, such as storytelling.
4. **Prompt Dependency**:
Like most models, its performance depends heavily on the quality of the input prompt.
5. **Long-Context Constraints**:
Generated output is limited to roughly 4K tokens per response, shorter than what some larger models offer.
6. **Training Cutoff Awareness**:
Has no real-time knowledge of events beyond its training data, which may limit responses about recent developments.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/prithivMLmods__COCO-7B-Instruct-1M-details)!
Summarized results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/contents/viewer/default/train?q=prithivMLmods%2FCOCO-7B-Instruct-1M&sort[column]=Average%20%E2%AC%86%EF%B8%8F&sort[direction]=desc)!
| Metric |Value (%)|
|-------------------|--------:|
|**Average** | 28.17|
|IFEval (0-Shot) | 47.43|
|BBH (3-Shot) | 34.68|
|MATH Lvl 5 (4-Shot)| 30.29|
|GPQA (0-shot) | 7.72|
|MuSR (0-shot) | 13.51|
|MMLU-PRO (5-shot) | 35.40|
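The scores above come from the Open LLM Leaderboard's own evaluation runs. A rough sketch of how one might re-run comparable tasks locally with EleutherAI's `lm-evaluation-harness` follows; the leaderboard pins its own harness version and settings, so local numbers will not match exactly, and the `leaderboard_*` task names assume a recent harness release.
```python
# pip install lm-eval   (EleutherAI lm-evaluation-harness)
from lm_eval import simple_evaluate

# Illustrative local re-run of leaderboard-style tasks; not the official pipeline.
results = simple_evaluate(
    model="hf",
    model_args="pretrained=prithivMLmods/COCO-7B-Instruct-1M,dtype=auto",
    tasks=[
        "leaderboard_ifeval", "leaderboard_bbh", "leaderboard_math_hard",
        "leaderboard_gpqa", "leaderboard_musr", "leaderboard_mmlu_pro",
    ],
    batch_size="auto",
)
print(results["results"])
```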