---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: A new model offers an explanation for how the Galilean satellites formed around the solar system’s largest world. Konstantin Batygin did not set out to solve one of the solar system’s most puzzling mysteries when he went for a run up a hill in Nice, France. Dr. Batygin, a Caltech researcher
datasets:
- AyoubChLin/autotrain-data-anymodel_bbc
- SetFit/bbc-news
co2_eq_emissions:
emissions: 2.359134715120443
license: apache-2.0
metrics:
- accuracy
pipeline_tag: text-classification
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 48900118383
- CO2 Emissions (in grams): 2.3591
## Validation Metrics
- Loss: 0.116
- Accuracy: 0.978
- Macro F1: 0.978
- Micro F1: 0.978
- Weighted F1: 0.978
- Macro Precision: 0.978
- Micro Precision: 0.978
- Weighted Precision: 0.978
- Macro Recall: 0.978
- Micro Recall: 0.978
- Weighted Recall: 0.978
## Usage
You can use cURL to query this model through the Hugging Face Inference API:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/AyoubChLin/autotrain-anymodel_bbc-48900118383
```
Or use the Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the fine-tuned model and its tokenizer from the Hub
model = AutoModelForSequenceClassification.from_pretrained("AyoubChLin/autotrain-anymodel_bbc-48900118383", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("AyoubChLin/autotrain-anymodel_bbc-48900118383", use_auth_token=True)

inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
# Map the highest-scoring logit to its class label
print(model.config.id2label[outputs.logits.argmax(dim=-1).item()])
```
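Alternatively, here is a minimal sketch using the high-level `pipeline` helper from `transformers` (it assumes your token has read access to the repository; the input string is only an example):
```python
from transformers import pipeline

# Load the fine-tuned classifier through the text-classification pipeline
classifier = pipeline(
    "text-classification",
    model="AyoubChLin/autotrain-anymodel_bbc-48900118383",
    use_auth_token=True,
)

# Returns a list of {"label": ..., "score": ...} dictionaries
print(classifier("I love AutoTrain"))
```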