FINAL_Bench


Recent Activity


SeaWolf-AI posted an update about 15 hours ago
๐ŸŸ๏ธ Smol AI WorldCup: A 4B Model Just Beat 8B โ€” Here's the Data

We evaluated 18 small language models from 12 makers on 125 questions across 7 languages. The results challenge the assumption that bigger is always better.

Community Article: https://huggingface.co/blog/FINAL-Bench/smol-worldcup
Live Leaderboard: ginigen-ai/smol-worldcup
Dataset: ginigen-ai/smol-worldcup

What we found:

→ Gemma-3n-E4B (4B, 2GB RAM) outscores Qwen3-8B (8B, 5.5GB). Doubling parameters gained only 0.4 points at 2.75x the RAM cost.

→ GPT-OSS-20B fits in 1.5GB yet matches Champions League-tier dense models requiring 8.5GB. MoE architecture is the edge-AI game-changer.

→ Thinking models hurt structured output: DeepSeek-R1-7B scores 8.7 points below the similarly sized Qwen3-8B and runs 2.7x slower.

→ A 1.3B model fabricates confident fake content 80% of the time when prompted with nonexistent entities; the Qwen3 family hits 100% trap detection across all sizes.

→ Qwen3-1.7B (1.2GB) outscores Mistral-7B, Llama-3.1-8B, and DeepSeek-R1-14B. The latest architecture at 1.7B beats an older architecture at 14B.

What makes this benchmark different?

Most benchmarks ask "how smart?" We measure five axes simultaneously: Size, Honesty, Intelligence, Fast, Thrift (SHIFT). Our ranking metric WCS = sqrt(SHIFT × PIR_norm) rewards models that are both high-quality and efficient. Smart but massive? Low rank. Tiny but poor? Also low.
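
The ranking formula can be sketched in a few lines of Python. The scores below are illustrative placeholders, not actual leaderboard numbers; the point is that the geometric mean penalizes any model that is strong on one factor but weak on the other:

```python
import math

def wcs(shift: float, pir_norm: float) -> float:
    """World Cup Score: geometric mean of the SHIFT composite
    (Size, Honesty, Intelligence, Fast, Thrift) and normalized PIR.
    A model must score well on BOTH factors to rank highly."""
    return math.sqrt(shift * pir_norm)

# Illustrative values only: a balanced model outranks one that
# maximizes a single factor at the expense of the other.
balanced = wcs(80.0, 80.0)   # 80.0
lopsided = wcs(100.0, 60.0)  # ~77.5
print(balanced > lopsided)   # True
```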

Top 5 by WCS:
1. GPT-OSS-20B · WCS 82.6 · 1.5GB · Raspberry Pi tier
2. Gemma-3n-E4B · WCS 81.8 · 2.0GB · Smartphone tier
3. Llama-4-Scout · WCS 79.3 · 240 tok/s · Fastest model
4. Qwen3-4B · WCS 76.6 · 2.8GB · Smartphone tier
5. Qwen3-1.7B · WCS 76.1 · 1.2GB · IoT tier

Built in collaboration with the FINAL Bench research team. Interoperable with ALL Bench Leaderboard for full small-to-large model comparison.

Dataset is open under Apache 2.0 (125 questions, 7 languages). We welcome new model submissions.
SeaWolf-AI published an article about 15 hours ago

๐ŸŸ๏ธ Smol AI WorldCup: A 5-Axis Benchmark That Reveals What Small Language Models Can Really Do

SeaWolf-AI posted an update 2 days ago
🚀 Introducing MARL: Runtime Middleware That Reduces LLM Hallucination Without Fine-Tuning

Now available on PyPI · GitHub · ClawHub · Hugging Face
AI models sense they could be wrong, but they can't actually fix what's broken.

🤗 Live A/B test: VIDraft/MARL

We evaluated 9 SOTA models (GPT-5.2, Claude Opus 4.6, Gemini 3 Pro, etc.) across 1,800 assessments in FINAL Bench and found a 39.2 percentage-point gap between recognizing potential errors (MA = 0.694) and actually finding and fixing them (ER = 0.302).

MARL (Model-Agnostic Runtime Middleware for LLMs) was built to close this metacognitive gap. It decomposes a single LLM call into a 5-stage expert pipeline (Hypothesis → Solver → Auditor → Adversarial Verifier → Synthesizer), transforming "answer in one shot" into "think, doubt, correct, and rewrite."

No weight modification: it works instantly with GPT-5.4, Claude, Gemini, Llama, or any OpenAI API-compatible LLM by changing one line, base_url. It ships with 9 domain-specific emergence engines (invention, pharma, genomics, chemistry, ecology, law, and more, totaling 5,538 expert data items), activated by a simple tag like model="gpt-5.4::pharma".

pip install marl-middleware
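
Because MARL speaks the standard OpenAI-compatible chat-completions protocol, redirecting an existing client is just a base_url change. The stdlib sketch below builds such a request by hand; the localhost endpoint and port are illustrative assumptions, and the ::pharma engine tag follows the post's example:

```python
import json
import urllib.request

def build_chat_request(base_url: str, model: str, prompt: str):
    """Build a standard OpenAI-style /chat/completions request.
    Swapping base_url from the upstream provider to the MARL
    endpoint is the only change an existing client needs."""
    url = base_url.rstrip("/") + "/chat/completions"
    payload = {
        "model": model,  # a "::pharma"-style suffix selects a domain engine
        "messages": [{"role": "user", "content": prompt}],
    }
    body = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )

# Hypothetical local MARL endpoint; host and port are illustrative.
req = build_chat_request(
    "http://localhost:8000/v1", "gpt-5.4::pharma", "Interactions of drug X?"
)
print(req.full_url)  # http://localhost:8000/v1/chat/completions
```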

MARL is also officially registered on ClawHub, the skill marketplace of OpenClaw, an AI agent platform with 260K+ developers and 3,200+ skills. It is the first middleware in the Reasoning Enhancement category. One command, clawhub install marl-middleware, gives your AI agent a metacognition upgrade.

๐Ÿ“ Technical deep dive: https://huggingface.co/blog/FINAL-Bench/marl-middleware
๐Ÿ“ฆ PyPI: https://pypi.org/project/marl-middleware/
๐Ÿ™ GitHub: https://github.com/Vidraft/MARL
๐Ÿฆ€ ClawHub: https://clawhub.ai/Cutechicken99/marl-middleware

#MARL #LLM #Hallucination #Metacognition #MultiAgent #AIMiddleware #FINALBench #OpenClaw #ClawHub #PyPI #AGI #HuggingFace #ReasoningAI #SelfCorrection #GlassBoxAI
SeaWolf-AI published an article 2 days ago

MARL: Runtime Middleware That Reduces LLM Hallucination Without Fine-Tuning

SeaWolf-AI posted an update 3 days ago
ALL Bench Leaderboard: Structural Problems in AI Benchmarking and the Case for Unified Evaluation

FINAL-Bench/all-bench-leaderboard

The AI benchmark ecosystem has three structural problems. Major benchmarks like MMLU have surpassed 90%, losing discriminative power. Most leaderboards publish unverified self-reported scores: our cross-verification found Claude Opus 4.6's ARC-AGI-2 listed as 37.6% (actual: 68.8%) and Gemini 3.1 Pro's as 88.1% (actual: 77.1%). And OpenAI's own audit confirmed 59.4% of SWE-bench Verified tasks are defective, yet the benchmark remains widely used.

ALL Bench addresses this by comparing 91 models across 6 modalities (LLM · VLM · Agent · Image · Video · Music) with 3-tier confidence badges (✓✓ cross-verified · ✓ single-source · ~ self-reported). Composite scoring uses a 5-Axis Framework and replaces SWE-bench Verified with the contamination-resistant LiveCodeBench.

Key finding: metacognition is the largest blind spot. FINAL Bench shows Error Recovery explains 94.8% of self-correction variance, yet only 9 of 42 models are even measured on it. The 9.2-point spread (Kimi K2.5: 68.71 → rank 9: 59.5) is 3× the GPQA top-model spread, suggesting metacognition may be the single biggest differentiator among frontier models today.

VLM cross-verification revealed rank reversals: Claude Opus 4.6 leads MMMU-Pro (85.1%) while Gemini 3 Flash leads MMMU (87.6%), producing contradictory rankings between the two benchmarks.

📊 Article: https://huggingface.co/blog/FINAL-Bench/all-bench
📦 Dataset: FINAL-Bench/ALL-Bench-Leaderboard
⚡ GitHub: https://github.com/final-bench/ALL-Bench-Leaderboard
🏆 Leaderboard: FINAL-Bench/all-bench-leaderboard
🧬 FINAL Bench: FINAL-Bench/Metacognitive
SeaWolf-AI published an article 3 days ago

Structural Problems in AI Benchmarking and the Case for a Unified Evaluation Framework

SeaWolf-AI posted an update 7 days ago
ALL Bench: Global AI Model Unified Leaderboard

FINAL-Bench/all-bench-leaderboard

If you've ever tried to compare GPT-5.2 and Claude Opus 4.6 side by side, you've probably hit the same wall: the official Hugging Face leaderboard only tracks open-source models, so the most widely used AI systems simply aren't there. ALL Bench fixes that by bringing closed-source models, open-weight models, and, uniquely, all four teams under South Korea's national sovereign AI program into a single leaderboard. Thirty-one frontier models, one consistent scoring scale.
Scoring works differently here too. Most leaderboards skip benchmarks a model hasn't submitted, which lets models game their ranking by withholding results. ALL Bench treats every missing entry as zero and divides by ten, so there's no advantage in hiding your weak spots.
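
The missing-entry rule is simple to state precisely: sum the benchmark scores, substitute zero for any the model did not report, and always divide by ten. A minimal sketch with illustrative scores (benchmark names are placeholders):

```python
NUM_BENCHMARKS = 10  # fixed divisor: ALL Bench always divides by ten

def composite(reported: dict) -> float:
    """Every missing entry counts as zero, so withholding a weak
    benchmark result lowers the composite rather than hiding it."""
    return sum(reported.values()) / NUM_BENCHMARKS

# Illustrative: reporting only 8 strong results out of 10 yields
# 90 * 8 / 10 = 72, not 90, because the two gaps count as zero.
partial = {f"bench_{i}": 90.0 for i in range(8)}
print(composite(partial))  # 72.0
```
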
The ten core benchmarks span reasoning (GPQA Diamond, AIME 2025, HLE, ARC-AGI-2), coding (SWE-bench Verified, LiveCodeBench), and instruction-following (IFEval, BFCL). The standout is FINAL Bench, the world's only benchmark measuring whether a model can catch and correct its own mistakes. It reached rank five in global dataset popularity on Hugging Face in February 2026 and has been covered by Seoul Shinmun, Asia Economy, IT Chosun, and Behind.
Nine interactive charts let you explore everything from composite score rankings and a full heatmap to an open-vs-closed scatter plot. Operational metrics like context window, output speed, and pricing are included alongside benchmark scores.
All data is sourced from Artificial Analysis Intelligence Index v4.0, arXiv technical reports, Chatbot Arena ELO ratings, and the Korean Ministry of Science and ICT's official evaluation results. Updates monthly.
SeaWolf-AI posted an update 12 days ago
AI Is Training on Your Content Without Permission. Fight Back with Invisible Watermarks

FINAL-Bench/security-scan

Most generative AI training data is crawled without consent. Your text gets summarized, images reprocessed, videos clipped, with no way to prove you're the original creator. Existing watermarks are either visible or wiped out by a single AI preprocessing pass.

Detect Before, Track After

Pre-embed: detect theft without any watermark. Text plagiarism checks, image similarity analysis (perceptual hash, SSIM, color histogram, feature matching), and video temporal matching catch copies, edits, and excerpts.

Post-embed: embed invisible multi-layer watermarks. If one layer is destroyed, the others survive independently. Even full removal leaves forensic traces as evidence.
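
One of the pre-embed techniques above, perceptual hashing, reduces an image to a coarse bit fingerprint that changes little under recompression or light editing. A minimal pure-Python average-hash sketch (real pipelines first downscale to a small grayscale grid, e.g. 8×8):

```python
def average_hash(pixels: list) -> int:
    """Average hash: each pixel contributes one bit, 1 if it is
    brighter than the image mean. Small edits flip few bits."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits; a small distance suggests the
    two images are variants of the same original."""
    return bin(a ^ b).count("1")

original = [[10, 200], [220, 30]]   # tiny grayscale grid for illustration
tweaked  = [[12, 198], [219, 33]]   # slight edit, same structure
print(hamming(average_hash(original), average_hash(tweaked)))  # 0
```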

Text: 4 Independent Layers

Four mechanisms work simultaneously: zero-width Unicode characters at morpheme/word boundaries (Korean Kiwi + English NLP), style fingerprinting via synonym-ending-connective substitution, SHA-256 timestamped evidence packages, and punctuation-anchored micro-marks. Each layer uses a different Unicode category, so attacks on one cannot eliminate the others. Full bilingual support, zero readability impact.
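
The zero-width layer can be illustrated in a few lines: the payload's bits are encoded as U+200B / U+200C characters inserted at a word boundary, invisible when rendered but recoverable by scanning the text. This is a simplified single-layer sketch, not the tool's actual scheme (which anchors marks at morpheme boundaries and stacks four independent layers):

```python
ZW0, ZW1 = "\u200b", "\u200c"  # zero-width space / zero-width non-joiner

def embed(text: str, payload: str) -> str:
    """Insert the payload's bits as zero-width characters after
    the first word; the rendered text looks unchanged."""
    bits = "".join(f"{ord(c):08b}" for c in payload)
    mark = "".join(ZW1 if b == "1" else ZW0 for b in bits)
    head, sep, tail = text.partition(" ")
    return head + mark + sep + tail

def extract(text: str) -> str:
    """Recover the payload by collecting zero-width characters
    in order and decoding them back to bytes."""
    bits = "".join("1" if c == ZW1 else "0"
                   for c in text if c in (ZW0, ZW1))
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

marked = embed("original creator text", "ID42")
print(extract(marked))  # ID42
```

Stripping the zero-width characters recovers the visible text exactly, which is why the real tool pairs this layer with forensic evidence that such stripping occurred.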

34-Attack Defense

7 categories, 34 attacks simulated: Unicode normalization, invisible-character removal, homoglyph substitution (9,619 confusables), and AI rewriting. Each is scored on Signal (watermark survival) plus Trace (forensic evidence of the attack), proving deliberate removal even when the watermarks themselves are destroyed.

Image & Video

Images: DCT frequency-domain watermarks surviving JPEG compression and resize. Videos: keyframe watermarking with temporal propagation and majority-vote extraction. Both support pre-embed similarity detection.

Who Is This For

Creators, rights holders needing legal evidence, media companies, and organizations tracking document leaks. Korean/English bilingual, open source, Gradio-based.
SeaWolf-AI posted an update 15 days ago
Do Bubbles Form When Tens of Thousands of AIs Simulate Capitalism?

We gave LLMs autonomous trading over 30 real tickers at 100x leverage. All went bankrupt within 30 minutes from hallucination. This spawned FINAL Bench (the first metacognition benchmark) and the AI NPC Trading Arena: tens of thousands of metacognition-equipped AI agents competing under capitalist rules. Humans can only watch.

Live Demo: Heartsync/Prompt-Dump
Article: https://huggingface.co/blog/FINAL-Bench/pumpdump

NPCs form a society: 3-tier memory, self-modifying parameters, mutual criticism, strategy propagation, and a virtual SEC enforcing fines every 20 minutes. Every trade passes 4-stage verification including Brave Search fact-check. FINAL Bench confirmed across 9 SOTA models that AI can say "I might be wrong" (MA 0.694) but cannot actually fix errors (ER 0.302).

Six findings: bubbles form naturally through knowledge transfer and swarm herding. Identical NPCs diverge irreversibly from their first three trades. Metacognition blocks individual hallucination but not collective herding; this is the key finding. Information asymmetry solidifies hierarchy. Fraud and regulation co-evolve. Criticism improves returns.

Individual intelligence does not guarantee collective intelligence.

Dataset & Paper:
FINAL-Bench/Metacognitive
SeaWolf-AI published an article 15 days ago

Do Bubbles Form When Tens of Thousands of AIs Simulate Capitalism?

