---
license: cc-by-nc-4.0
tags:
- moe
- merge
pipeline_tag: text-generation
model-index:
- name: Proctora
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 67.83
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Karko/Proctora
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 86.68
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Karko/Proctora
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 65.49
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Karko/Proctora
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 59.55
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Karko/Proctora
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 79.79
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Karko/Proctora
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 71.95
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Karko/Proctora
      name: Open LLM Leaderboard
---
![img_text](./assets/tmpd2xdo_x4.png)


Proctora is a Mixture of Experts (MoE) model composed of:
- OpenPipe/mistral-ft-optimized-1227 as the base model
- SanjiWatsuki/Kunoichi-7B as a first expert dedicated to RP tasks
- samir-fama/SamirGPT-v1 as a second expert for factual answers

Being based on the Mixtral architecture, it has a native context length of 32K, which is great.

On the Open LLM Leaderboard it achieves a score of 71.88, which is interesting to some extent but does not really reflect the intended capabilities of the model.

This model was originally produced through experimentation with mergekit. Proctora was then selected from my collection of LLMs to be the "grader" in an AI-RPG evaluation suite that I am currently building. Indeed, it produced the intended grades according to the given rubrics more often than other "higher-performing" models on the leaderboard.
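For reference, a mergekit MoE configuration along these lines could reproduce the composition described above. This is an illustrative sketch only: the exact configuration and gate prompts used for Proctora were not published, so the `positive_prompts` below are assumptions.

```yaml
base_model: OpenPipe/mistral-ft-optimized-1227
gate_mode: hidden
dtype: bfloat16
experts:
  - source_model: SanjiWatsuki/Kunoichi-7B
    positive_prompts:  # assumed routing prompts for the RP expert
      - "roleplay"
      - "story"
      - "character"
  - source_model: samir-fama/SamirGPT-v1
    positive_prompts:  # assumed routing prompts for the factual expert
      - "explain"
      - "answer the question"
      - "facts"
```

A config like this would be assembled with mergekit's MoE entry point, e.g. `mergekit-moe config.yml ./proctora-out`.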

However, I also tested it in various RP scenarios using text-generation-webui (putting the character card and/or other world information in the system prompt), and I was quite impressed by the quality of its logic relative to other popular RP models. For example, it took special power limitations into account better than other models, and it managed curse activations and weaknesses better than models roughly twice its size. Also, when acting as the player (with the user as game master), Proctora was not only able to stay in character but also, at times, to make clever decisions to achieve its objectives.
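Outside text-generation-webui, the same character-card setup can be reproduced with plain transformers by placing the card in the system message. A minimal sketch, assuming the checkpoint's chat template accepts a system role (if it does not, prepend the card to the first user message); the card text and sampling settings here are illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Karko/Proctora"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Character card / world info goes in the system message,
# mirroring the text-generation-webui setup described above.
messages = [
    {"role": "system", "content": "You are Lyra, a cursed elven ranger. "
                                  "Your fire magic fails when it rains."},
    {"role": "user", "content": "A storm breaks as bandits charge. What do you do?"},
]

inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```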

As it includes the excellent SanjiWatsuki/Kunoichi-7B as an expert, the model is uncensored. Use with caution.


[Support Me Here!](https://ko-fi.com/karkomagor)

[My Blog](https://aitravelnotes.blogspot.com/)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Karko__Proctora)

|             Metric              |Value|
|---------------------------------|----:|
|Avg.                             |71.88|
|AI2 Reasoning Challenge (25-Shot)|67.83|
|HellaSwag (10-Shot)              |86.68|
|MMLU (5-Shot)                    |65.49|
|TruthfulQA (0-shot)              |59.55|
|Winogrande (5-shot)              |79.79|
|GSM8k (5-shot)                   |71.95|