albert-base-v2-squad

This model is a fine-tuned version of albert-base-v2 on the SQuAD 1.1 and adversarial_qa datasets. It achieves the following results on the SQuAD 1.1 evaluation set:

  • Exact Match (EM): 84.68
  • F1: 91.40

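For reference, these scores follow the standard SQuAD v1.1 evaluation (Exact Match and token-level F1). As a minimal sketch of how such numbers are computed, assuming the evaluate library's squad metric and a toy prediction (this is not the exact evaluation script used for this model):

import evaluate

# Load the standard SQuAD v1.1 metric (Exact Match and F1)
squad_metric = evaluate.load("squad")

# Toy prediction/reference pair in the format the metric expects
predictions = [{"id": "1", "prediction_text": "Paris"}]
references = [{"id": "1", "answers": {"text": ["Paris"], "answer_start": [46]}}]

print(squad_metric.compute(predictions=predictions, references=references))
# {'exact_match': 100.0, 'f1': 100.0}
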
How to use

You can run the model with the Transformers question-answering pipeline:

from transformers import pipeline

# Load the pipeline
qa_pipeline = pipeline("question-answering", model="xichenn/albert-base-v2-squad")

# Run inference
result = qa_pipeline(
    question="What is the capital of France?",
    context="France is a country in Europe. Its capital is Paris.",
)

print(result)
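
The pipeline returns a dictionary with score, start, end, and answer keys (here the answer should be "Paris"). For more control, the checkpoint can also be loaded directly and the answer span decoded from the start/end logits. The sketch below shows the usual extractive-QA decoding as a minimal illustration, not the pipeline's full post-processing:

import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xichenn/albert-base-v2-squad")
model = AutoModelForQuestionAnswering.from_pretrained("xichenn/albert-base-v2-squad")

question = "What is the capital of France?"
context = "France is a country in Europe. Its capital is Paris."

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Most likely start and end token positions for the answer span
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())

# Decode the selected span back to text
answer = tokenizer.decode(inputs["input_ids"][0][start : end + 1])
print(answer)  # likely "paris" (the ALBERT tokenizer lowercases text)
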
Model details

  • Model size: 11.1M parameters
  • Tensor type: F32 (Safetensors)