This checkpoint has been trained for the POS (part-of-speech tagging) task using the CoNLL 2002-es dataset.

This checkpoint was created from BERTIN Gaussian 512, a RoBERTa-base model trained from scratch in Spanish. Information on the base model can be found on its own model card, and in greater detail on the main project card.

The training dataset for the base model is mC4, subsampled to a total of about 50 million documents. Sampling is biased towards average perplexity values (using a Gaussian function), discarding more often documents with very high perplexity (poor quality) or very low perplexity (short, repetitive texts).
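The Gaussian-biased subsampling described above can be sketched as follows. This is an illustrative sketch, not the project's actual implementation: the function names, the choice of the empirical mean and standard deviation as the Gaussian's parameters, and the use of NumPy's weighted sampling are all assumptions made here for clarity.

```python
import numpy as np

def gaussian_weights(perplexities, mu, sigma):
    # Weight each document by a Gaussian centred on mu, so documents with
    # mid-range perplexity are kept more often than outliers on either side.
    p = np.asarray(perplexities, dtype=float)
    return np.exp(-0.5 * ((p - mu) / sigma) ** 2)

def subsample(perplexities, n, seed=0):
    # Illustrative sketch: draw n document indices without replacement,
    # using the Gaussian weights (normalised to probabilities) so that
    # very high or very low perplexity documents are discarded more often.
    p = np.asarray(perplexities, dtype=float)
    w = gaussian_weights(p, p.mean(), p.std())
    probs = w / w.sum()
    rng = np.random.default_rng(seed)
    return rng.choice(len(p), size=n, replace=False, p=probs)
```

A document near the centre of the perplexity distribution receives a weight close to 1, while extreme values on either tail receive weights close to 0 and are rarely sampled.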

This is part of the Flax/JAX Community Week, organised by HuggingFace, with TPU usage sponsored by Google.
