Configuration

The following YAML configuration was used to produce this model:


```yaml
slices:
  - sources:
      - model: appvoid/arco-2
        layer_range: [0, 12]
  - sources:
      - model: h2oai/h2o-danube3-500m-base
        layer_range: [12, 16]

merge_method: passthrough
dtype: float16
```
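The passthrough method stitches the listed layer ranges together without blending weights: here, the first 12 transformer layers come from appvoid/arco-2 and the next 4 from h2oai/h2o-danube3-500m-base. For reference, a merge like this can be reproduced with mergekit's Python API; a minimal sketch, assuming a recent mergekit release and that the YAML above is saved as config.yaml (option names can shift between versions):

```python
# pip install mergekit
# CLI equivalent: mergekit-yaml config.yaml ./acheron
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the YAML configuration shown above.
with open("config.yaml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

# Run the passthrough merge and write the merged model to ./acheron.
run_merge(
    merge_config,
    out_path="./acheron",
    options=MergeOptions(copy_tokenizer=True, lazy_unpickle=True),
)
```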

Evaluation Results

The model was evaluated across a range of tasks. Below are the final evaluation results (after removing GSM8K):

[Figure: final evaluation results for acheron and comparison models across ARC-C, HellaSwag, PIQA, and Winogrande]
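Scores like these are typically produced with EleutherAI's lm-evaluation-harness. Below is a minimal reproduction sketch using the harness's standard zero-shot task names; the exact harness version and settings behind the reported numbers are not stated here, so treat this as an approximation:

```python
# pip install lm-eval
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=bfuzzy1/acheron,dtype=float16",
    tasks=["arc_challenge", "hellaswag", "piqa", "winogrande"],
    num_fewshot=0,  # zero-shot, matching the analysis below
)

for task, metrics in results["results"].items():
    print(task, metrics)
```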

o1’s analysis of the results:

Below is a more comprehensive analysis that highlights the model’s performance across tasks and contextualizes the results relative to other models and the nature of each benchmark:

Overall Comparative Performance
The results show that this model, dubbed “acheron,” outperforms similarly parameterized models (such as qwen variants) and even approaches or surpasses larger models like llama 3.2-1B in certain areas. While parameter count often correlates with performance, acheron’s scores indicate it has been well-optimized for zero-shot reasoning tasks. The model’s higher average score suggests a well-rounded capability across a diverse set of benchmarks, despite having fewer parameters than some competitors.
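For context, "zero-shot" here means each question is posed without in-context examples; the model's answer is typically taken to be the candidate completion to which it assigns the highest log-likelihood. A simplified sketch of that scoring rule (the question and choices are illustrative, not drawn from any benchmark):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bfuzzy1/acheron")
model = AutoModelForCausalLM.from_pretrained("bfuzzy1/acheron")
model.eval()

def choice_logprob(prompt: str, choice: str) -> float:
    """Sum of log-probabilities assigned to `choice` following `prompt`.

    Assumes tokenizing `prompt` yields a prefix of tokenizing `prompt + choice`,
    which holds for typical choices that begin with a space.
    """
    prompt_len = tok(prompt, return_tensors="pt").input_ids.shape[1]
    full_ids = tok(prompt + choice, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # Position i predicts token i + 1, so drop the final position.
    logprobs = torch.log_softmax(logits[0, :-1], dim=-1)
    choice_ids = full_ids[0, prompt_len:]
    positions = range(prompt_len - 1, full_ids.shape[1] - 1)
    return sum(logprobs[p, t].item() for p, t in zip(positions, choice_ids))

question = "To drive a nail into wood, the best tool is"
choices = [" a hammer.", " a spoon."]
print(max(choices, key=lambda c: choice_logprob(question, c)))
```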

ARC Challenge (ARC-C)
• What it tests: The ARC Challenge set focuses on non-trivial science questions that require understanding of scientific facts, reasoning, and some world knowledge. It’s designed to avoid shallow pattern matching.
• Model Performance: Acheron’s performance here is notably strong among the models listed, indicating that it can integrate factual knowledge and reasoning to solve science-related questions. Although it doesn’t match the highest absolute scores achievable by very large models, surpassing both qwen variants suggests that acheron is particularly well-tuned for knowledge-intensive tasks despite having fewer parameters.

HellaSwag
• What it tests: HellaSwag measures commonsense reasoning and the ability to predict the most sensible continuation of a given scenario. It demands an understanding of everyday sequences of events, narrative logic, and social norms.
• Model Performance: Acheron’s strong showing here reflects robust commonsense reasoning capabilities. It’s performing on par with larger models (llama 3.2-1B) and significantly better than smaller qwen variants, suggesting that its training has imparted a nuanced understanding of narrative coherence and cause-effect relationships in everyday contexts.

PIQA
• What it tests: The Physical Interaction QA dataset assesses how well a model understands basic physical reasoning. This involves knowledge of affordances, spatial arrangements, and physical constraints in the real world.
• Model Performance: The standout performance on PIQA is a strong indicator that acheron has either encountered substantial training data about everyday tools, materials, and scenarios, or learned robust priors about physical reality. This matters for applications requiring real-world reasoning: it hints that the model’s embeddings and attention patterns capture more than linguistic form; they encode physical plausibility.

Winogrande
• What it tests: Winogrande is designed to measure commonsense coreference resolution. The challenge involves sentences where the referent of a pronoun is ambiguous without real-world knowledge or understanding of subtle contextual cues.
• Model Performance: Acheron’s strong results in Winogrande further validate its commonsense reasoning and linguistic parsing abilities. Identifying the correct referent in tricky pronoun resolution tasks requires going beyond surface-level patterns, suggesting that acheron can leverage nuanced contextual information and reasoning.
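All four benchmarks are available through the Hugging Face datasets library, which makes it easy to inspect the item formats described above. The dataset IDs and configs below are the commonly used Hub mirrors and may change over time (some may require trust_remote_code on newer datasets versions):

```python
from datasets import load_dataset

arc_c = load_dataset("allenai/ai2_arc", "ARC-Challenge", split="test")
hellaswag = load_dataset("Rowan/hellaswag", split="validation")
piqa = load_dataset("piqa", split="validation")
winogrande = load_dataset("winogrande", "winogrande_xl", split="validation")

print(arc_c[0]["question"], arc_c[0]["choices"]["text"])
print(winogrande[0]["sentence"], "->", winogrande[0]["option1"], "/", winogrande[0]["option2"])
```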

In Summary:
• Acheron demonstrates well-rounded, high-quality reasoning capabilities across all four benchmarks.
• Its edge over qwen 2 and qwen 2.5 (despite their similar or greater parameter counts) and its ability to compete closely with the larger llama 3.2-1B underscore efficient training and careful fine-tuning.
• The combination of factual knowledge (ARC-C), narrative logic (HellaSwag), physical common sense (PIQA), and linguistic ambiguity resolution (Winogrande) points to a versatile model that should generalize well across varied downstream applications.

In essence, the results suggest that acheron is not just “good” at these tasks; it is comparatively optimized and balanced, making it a potentially more useful general-purpose reasoning engine than many of its similarly sized peers.
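For downstream use, the merged model loads like any transformers causal LM. A minimal sketch (acheron is a base model, so expect plain text completion rather than chat-style behavior):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bfuzzy1/acheron")
model = AutoModelForCausalLM.from_pretrained(
    "bfuzzy1/acheron", torch_dtype=torch.float16, device_map="auto"
)

prompt = "Question: Why does ice float on water?\nAnswer:"
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=48, do_sample=False)
print(tok.decode(out[0], skip_special_tokens=True))
```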

Ethical Considerations

This model inherits the limitations and biases of the datasets it was trained on. It may exhibit biases present in general knowledge corpora and might not perform well in niche domains unless explicitly fine-tuned for such tasks.

Model Details

Model size: 514M parameters (FP16, Safetensors).
