A fine-tuned model based on Microsoft's DeBERTaV3, trained on the GLUE QQP dataset. It detects the linguistic similarity between two questions and classifies whether they are duplicates of each other or distinct.

Model Hyperparameters

epoch=4
per_device_train_batch_size=32
per_device_eval_batch_size=16
lr=2e-5
weight_decay=1e-2
gradient_checkpointing=True
gradient_accumulation_steps=8
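
For reference, a minimal sketch of how these settings map onto transformers TrainingArguments; the output_dir is an illustrative assumption, not taken from the exact training script.

from transformers import TrainingArguments

# Hypothetical reconstruction of the training configuration listed above;
# output_dir is an assumed name, everything else mirrors the hyperparameters.
training_args = TrainingArguments(
    output_dir="deberta-v3-base-qqp",
    num_train_epochs=4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    weight_decay=1e-2,
    gradient_checkpointing=True,
    gradient_accumulation_steps=8,
)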

Model Performance

{"Training Loss": 0.132400,
 "Validation Loss": 0.217410,
 "Validation Accuracy": 0.917969
}
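
The validation accuracy above can be reproduced with a standard metric callback; a minimal sketch, assuming the evaluate library and a Trainer-style (logits, labels) evaluation tuple, not the exact training script.

import numpy as np
import evaluate

# Accuracy metric from the evaluate library
accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    # eval_pred is a (logits, labels) pair as produced by transformers.Trainer
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)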

Model Dependencies

{"Main Model": "microsoft/deberta-v3-base",
 "Dataset": "SetFit/qqp"
}
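
As a quick sanity check of these dependencies, a minimal sketch of loading the base checkpoint and the dataset; it assumes the datasets library is installed and that SetFit/qqp exposes text1/text2/label columns as published on the Hub.

from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Base checkpoint that the fine-tune starts from (binary classification head)
tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-base")
model = AutoModelForSequenceClassification.from_pretrained("microsoft/deberta-v3-base", num_labels=2)

# QQP question pairs from the SetFit mirror of the GLUE task
dataset = load_dataset("SetFit/qqp")
print(dataset["train"][0])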

Model Testing

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "AI-Ahmed/deberta-v3-base-funetuned-cls-qqa"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# QQP is a sentence-pair task: pass the two questions as separate arguments
# so the tokenizer inserts the separator token between them.
question1 = "How is the life of a math student? Could you describe your own experiences?"
question2 = "Which level of preparation is enough for the exam jlpt5?"
tokenized_input = tokenizer(question1, question2, return_tensors="pt")

with torch.no_grad():
    logits = model(**tokenized_input).logits

predicted_class_id = logits.argmax().item()
print(model.config.id2label[predicted_class_id])
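
To read the prediction as a duplicate probability rather than a hard label, apply a softmax to the logits; this small extension reuses the variables from the snippet above.

import torch.nn.functional as F

# Convert logits to class probabilities; label names come from model.config.id2label
probs = F.softmax(logits, dim=-1).squeeze()
for class_id, p in enumerate(probs.tolist()):
    print(f"{model.config.id2label[class_id]}: {p:.3f}")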

Information Citation

@inproceedings{he2021deberta,
  title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION},
  author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen},
  booktitle={International Conference on Learning Representations},
  year={2021},
  url={https://openreview.net/forum?id=XPZIaotutsD}
}