Model Card for roadz/dv-finetuned-211124

This model is fine-tuned for evaluating LLM outputs in retrieval-augmented generation (RAG) scenarios, focusing on:

  • Hallucination detection
  • Attribution accuracy
  • Summary completeness
  • Response relevancy

Model Details

Model Architecture

  • Base Model: LLaMA-3.1-8B
  • Architecture Type: llama
  • Parameters: 8.03B (FP16)
  • Training Type: Fine-tuned for evaluation

Hardware Requirements

  • Minimum GPU Memory: 16GB
  • Recommended GPU Memory: 24GB
  • Format: SafeTensors

Usage

This model is designed for the De-Val subnet and requires specific pipeline code for evaluation tasks.
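
The De-Val pipeline itself is not reproduced here. As a rough illustration only, the sketch below loads the checkpoint with Hugging Face transformers in FP16 (to fit the 16 GB minimum listed above) and scores a hallucination-detection-style prompt. The prompt format, the yes/no scoring convention, and the use of AutoModelForCausalLM are assumptions, not the official De-Val pipeline.

```python
# Minimal sketch, NOT the official De-Val pipeline.
# Assumptions: the checkpoint loads via AutoModelForCausalLM, and the
# prompt format below is purely illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "roadz/dv-finetuned-211124"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # FP16 weights; fits the 16 GB minimum above
    device_map="auto",
)

# Hypothetical evaluation prompt: ask the model whether an answer is
# supported by the retrieved context (hallucination detection).
prompt = (
    "Context: The Eiffel Tower is located in Paris and was completed in 1889.\n"
    "Answer: The Eiffel Tower was completed in 1920.\n"
    "Question: Is the answer fully supported by the context? Reply yes or no.\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=16)

# Print only the newly generated tokens.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```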

Generation Configuration

  • Max Length: Not specified
  • Temperature: 0.6
  • Top-p: 0.9
  • Top-k: 50
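
For convenience, the sampling settings above can be expressed as a transformers GenerationConfig, as in the sketch below. The max_new_tokens value is a placeholder, since the card does not specify a maximum length.

```python
# Sketch of the sampling settings listed above as a GenerationConfig.
from transformers import GenerationConfig

generation_config = GenerationConfig(
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
    top_k=50,
    max_new_tokens=256,  # placeholder; max length is not specified in the card
)

# Usage: model.generate(**inputs, generation_config=generation_config)
```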

Training

The model was fine-tuned on evaluation tasks including:

  • Hallucination detection scenarios
  • Attribution verification tasks
  • Summary completeness assessment
  • Response relevancy evaluation

Limitations

  • Designed specifically for evaluation tasks
  • Requires De-Val pipeline code
  • Not intended for general text generation

Last Updated

2024-11-21
