deberta-v3-base-zyda-2-transformed-readability

This model is a fine-tuned version of agentlans/deberta-v3-base-zyda-2 on an unknown dataset. It achieves the following results on the evaluation set (Loss and MSE coincide because the loss is mean squared error):

  • Loss: 0.0267
  • MSE: 0.0267

Model description

A single-output regression head on top of agentlans/deberta-v3-base-zyda-2 (about 184M parameters, F32 weights) that scores English text for readability.

Intended uses & limitations

The model outputs a single regression score for a piece of English text; higher scores indicate easier text. The grade_level helper below converts that score into an approximate educational grade level.

Example use:

import torch
import numpy as np
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Pick the GPU if one is available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load the fine-tuned model and its tokenizer
model_name = "agentlans/deberta-v3-base-zyda-2-readability"
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=1).to(device)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Function to perform inference
def predict_score(text):
    inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True).to(device)
    with torch.no_grad():
        logits = model(**inputs).logits
    return logits.item()

# Map the model's score back to an educational grade level
def grade_level(y):
    # Standardization and Box-Cox parameters of the grade-level target
    lambda_ = 0.8766912
    mean = 7.908629
    sd = 3.339119

    # Unstandardize; the sign flip reflects that higher scores mean easier text
    y_unstd = (-y) * sd + mean

    # Invert the Box-Cox transformation
    return np.power(y_unstd * lambda_ + 1, 1 / lambda_)

# Example usage
input_text = "The mitochondria is the powerhouse of the cell."
readability = predict_score(input_text)
grade = grade_level(readability)
print(f"Predicted score: {readability}\nGrade: {grade}")

Example outputs for texts of increasing difficulty:

Text | Readability | Grade
I like to eat apples. | 1.95 | 2.5
The cat is on the mat. | 1.93 | 2.6
The sun is shining brightly today. | 1.85 | 2.9
Birds are singing in the trees. | 1.84 | 2.9
The quick brown fox jumps over the lazy dog. | 1.74 | 3.3
She enjoys reading books in her free time. | 1.69 | 3.5
After a long day at work, he finally relaxed with a cup of tea. | 1.16 | 5.6
As the storm approached, the sky turned a deep shade of gray, casting an eerie shadow over the landscape. | 0.54 | 8.2
Despite the challenges they faced, the team remained resolute in their pursuit of excellence and innovation. | -0.49 | 12.8
In a world increasingly dominated by technology, the delicate balance between human connection and digital interaction has become a focal point of contemporary discourse. | -2.01 | 20.0
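To score many texts at once, a batched variant of predict_score is straightforward. This is a minimal sketch that reuses the model, tokenizer, and grade_level helper loaded above; predict_scores and its batch_size parameter are not part of the original example:

def predict_scores(texts, batch_size=32):
    """Score a list of texts in batches."""
    scores = []
    for i in range(0, len(texts), batch_size):
        batch = texts[i:i + batch_size]
        inputs = tokenizer(batch, return_tensors="pt", padding=True, truncation=True).to(device)
        with torch.no_grad():
            logits = model(**inputs).logits
        scores.extend(logits.squeeze(-1).tolist())  # one scalar per text
    return scores

texts = ["I like to eat apples.", "The cat is on the mat."]
for text, score in zip(texts, predict_scores(texts)):
    print(f"{grade_level(score):.1f}  {text}")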

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the TrainingArguments sketch after this list):

  • learning_rate: 5e-05
  • train_batch_size: 64
  • eval_batch_size: 8
  • seed: 42
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
  • lr_scheduler_type: linear
  • num_epochs: 3.0
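Assuming a single GPU and the standard Hugging Face Trainer (the output_dir below is a hypothetical placeholder), these settings correspond roughly to:

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="deberta-v3-base-zyda-2-readability",  # hypothetical output path
    learning_rate=5e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",  # betas=(0.9, 0.999) and epsilon=1e-08 are this optimizer's defaults
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)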

Training results

Training Loss | Epoch | Step | Validation Loss | MSE
0.0288 | 1.0 | 13589 | 0.0286 | 0.0286
0.0230 | 2.0 | 27178 | 0.0272 | 0.0272
0.0189 | 3.0 | 40767 | 0.0267 | 0.0267

Framework versions

  • Transformers 4.46.3
  • PyTorch 2.5.1+cu124
  • Datasets 3.1.0
  • Tokenizers 0.20.3
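A quick way to check that an environment matches the versions above:

import transformers, torch, datasets, tokenizers

print(transformers.__version__)  # expect 4.46.3
print(torch.__version__)         # expect 2.5.1+cu124
print(datasets.__version__)      # expect 3.1.0
print(tokenizers.__version__)    # expect 0.20.3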