Quantization: Q2_K (using llama.cpp)

  • llm_load_print_meta: model type = 70B
  • llm_load_print_meta: model ftype = Q2_K - Medium
  • llm_load_print_meta: model params = 70.55 B
  • llm_load_print_meta: model size = 24.56 GiB (2.99 BPW)
  • llama_model_loader: - type f32: 162 tensors
  • llama_model_loader: - type q2_K: 321 tensors
  • llama_model_loader: - type q3_K: 160 tensors
  • llama_model_loader: - type q5_K: 80 tensors
  • llama_model_loader: - type q6_K: 1 tensors
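The reported 2.99 BPW (bits per weight) follows directly from the model size and parameter count in the loader output. A quick sanity check, assuming GiB means 2^30 bytes as llama.cpp reports it:

```python
# Sanity-check the reported bits-per-weight from llm_load_print_meta.
size_gib = 24.56   # model size (GiB) from the loader output above
params = 70.55e9   # parameter count from the loader output above

bits = size_gib * 2**30 * 8
bpw = bits / params
print(f"{bpw:.2f} BPW")  # matches the reported 2.99 BPW
```

Note that the average lands just under 3 bits because Q2_K mixes tensor types: most tensors are q2_K, but attention-critical tensors are kept at q3_K/q5_K/q6_K, as the tensor breakdown above shows.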

MMLU Result: 74.89%

Category STEM: 66.09% (18 subjects)

  • high_school_chemistry: 64.04%
  • high_school_mathematics: 46.67%
  • abstract_algebra: 48.00%
  • computer_security: 84.00%
  • college_computer_science: 61.62%
  • college_chemistry: 53.00%
  • conceptual_physics: 74.89%
  • high_school_statistics: 68.06%
  • college_mathematics: 44.00%
  • college_biology: 88.19%
  • college_physics: 52.94%
  • elementary_mathematics: 64.81%
  • high_school_biology: 88.71%
  • high_school_physics: 57.62%
  • machine_learning: 56.25%
  • astronomy: 88.16%
  • electrical_engineering: 69.66%
  • high_school_computer_science: 79.00%
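The category score appears to be the unweighted mean of the listed subject accuracies. A check against the STEM numbers above:

```python
# Unweighted mean of the 18 STEM subject accuracies listed above.
stem = [64.04, 46.67, 48.00, 84.00, 61.62, 53.00, 74.89, 68.06,
        44.00, 88.19, 52.94, 64.81, 88.71, 57.62, 56.25, 88.16,
        69.66, 79.00]
avg = sum(stem) / len(stem)
print(f"STEM: {avg:.2f}% ({len(stem)} subjects)")  # STEM: 66.09% (18 subjects)
```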

Category humanities: 79.28% (13 subjects)

  • world_religions: 84.80%
  • high_school_us_history: 89.71%
  • moral_disputes: 77.75%
  • high_school_world_history: 88.61%
  • formal_logic: 62.70%
  • international_law: 85.12%
  • jurisprudence: 76.85%
  • professional_law: 59.58%
  • logical_fallacies: 83.44%
  • philosophy: 74.28%
  • moral_scenarios: 78.66%
  • prehistory: 84.26%
  • high_school_european_history: 84.85%

Category social sciences: 82.11% (12 subjects)

  • high_school_geography: 86.36%
  • high_school_psychology: 91.19%
  • sociology: 87.56%
  • high_school_microeconomics: 86.55%
  • professional_psychology: 76.80%
  • security_studies: 77.55%
  • us_foreign_policy: 91.00%
  • public_relations: 70.91%
  • high_school_government_and_politics: 93.78%
  • econometrics: 61.40%
  • human_sexuality: 81.68%
  • high_school_macroeconomics: 80.51%

Category other (business, health, misc.): 75.95% (14 subjects)

  • virology: 53.61%
  • college_medicine: 72.25%
  • global_facts: 62.00%
  • miscellaneous: 87.36%
  • medical_genetics: 84.00%
  • human_aging: 78.48%
  • nutrition: 83.33%
  • marketing: 88.89%
  • anatomy: 71.85%
  • professional_medicine: 88.24%
  • professional_accounting: 56.03%
  • management: 82.52%
  • clinical_knowledge: 80.75%
  • business_ethics: 74.00%

Overall correct rate: 74.89% (57 subjects evaluated)
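The overall rate is consistent with averaging the four category means weighted by their subject counts:

```python
# Recover the overall MMLU score from the per-category averages above,
# weighting each category by its number of subjects.
categories = {  # name: (average %, number of subjects)
    "STEM": (66.09, 18),
    "humanities": (79.28, 13),
    "social sciences": (82.11, 12),
    "other": (75.95, 14),
}
total_subjects = sum(n for _, n in categories.values())
overall = sum(avg * n for avg, n in categories.values()) / total_subjects
print(f"{overall:.2f}% over {total_subjects} subjects")  # 74.89% over 57 subjects
```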

Perplexity: 6.6865 +/- 0.04336 (measured on wikitext-2-raw/wiki.test.raw)
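For reference, the quantization and perplexity run can be reproduced with the llama.cpp tools; the binary names below are from recent llama.cpp builds (older builds ship them as `./quantize` and `./perplexity`), and the input file names are placeholders for your local paths:

```shell
# Quantize an F16 GGUF down to Q2_K (input path is a placeholder).
./llama-quantize Llama-3.3-70B-Instruct-F16.gguf \
    Llama-3.3-70B-Instruct-Q2_K.gguf Q2_K

# Measure perplexity on the same wikitext-2 test split used above.
./llama-perplexity -m Llama-3.3-70B-Instruct-Q2_K.gguf \
    -f wikitext-2-raw/wiki.test.raw
```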

Model: j30231/Llama-3.3-70B-Instruct_Q2_K.gguf