modelId (stringlengths 4–112) | sha (stringlengths 40–40) | lastModified (stringlengths 24–24) | tags (sequence) | pipeline_tag (stringclasses, 29 values) | private (bool, 1 class) | author (stringlengths 2–38) | config (null) | id (stringlengths 4–112) | downloads (float64, 0–36.8M) | likes (float64, 0–712) | library_name (stringclasses, 17 values) | __index_level_0__ (int64, 0–38.5k) | readme (stringlengths 0–186k) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pravesh/wav2vec2-large-xls-r-300m-hindi-colabrathee-intel | bad2c6336d229e13553bc0e88f2c18832b3ebcad | 2022-05-31T07:04:30.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | pravesh | null | pravesh/wav2vec2-large-xls-r-300m-hindi-colabrathee-intel | 0 | null | transformers | 37,800 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-hindi-colabrathee-intel
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hindi-colabrathee-intel
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
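The card does not include inference code. A minimal usage sketch is below, assuming the repository ships the wav2vec2 processor files alongside the weights; `"sample.wav"` is a placeholder for a 16 kHz mono recording:
```python
from transformers import pipeline

# Hedged usage sketch, not part of the original card; assumes processor files are present.
asr = pipeline(
    "automatic-speech-recognition",
    model="pravesh/wav2vec2-large-xls-r-300m-hindi-colabrathee-intel",
)
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder audio path
```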
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
theojolliffe/bart-cnn-science-v3-e4 | b9fc8cb7e6065dc2b41404a054146472b365feca | 2022-05-31T09:41:01.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | theojolliffe | null | theojolliffe/bart-cnn-science-v3-e4 | 0 | null | transformers | 37,801 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-cnn-science-v3-e4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-cnn-science-v3-e4
This model is a fine-tuned version of [theojolliffe/bart-cnn-science](https://huggingface.co/theojolliffe/bart-cnn-science) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8265
- Rouge1: 53.0296
- Rouge2: 33.4957
- Rougel: 35.8876
- Rougelsum: 50.0786
- Gen Len: 141.5926
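The card omits inference code. Given the checkpoint's BART-CNN lineage, a plausible (hedged) sketch uses the summarization pipeline; the input text is illustrative, and `max_length` mirrors the ~142-token generation length reported above:
```python
from transformers import pipeline

# Hedged sketch: assumes the model is used as a summarizer, per its BART-CNN base.
summarizer = pipeline("summarization", model="theojolliffe/bart-cnn-science-v3-e4")
article = "Replace this placeholder with the scientific text to be summarized."
print(summarizer(article, max_length=142, min_length=30)[0]["summary_text"])
```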
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| No log | 1.0 | 398 | 0.9965 | 52.4108 | 32.1506 | 35.0281 | 50.0368 | 142.0 |
| 1.176 | 2.0 | 796 | 0.8646 | 52.7182 | 32.9681 | 35.1454 | 49.9527 | 141.8333 |
| 0.7201 | 3.0 | 1194 | 0.8354 | 52.5417 | 32.6428 | 35.8703 | 49.8037 | 142.0 |
| 0.5244 | 4.0 | 1592 | 0.8265 | 53.0296 | 33.4957 | 35.8876 | 50.0786 | 141.5926 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
huggingtweets/xvbones | bd45302e4e3132418393f36e2c3834f86656b74a | 2022-05-31T08:53:32.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/xvbones | 0 | null | transformers | 37,802 | ---
language: en
thumbnail: http://www.huggingtweets.com/xvbones/1653987207699/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1136186352268132354/PEn3hUdJ_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">tommy π¬π§</div>
<div style="text-align: center; font-size: 14px;">@xvbones</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.
![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from tommy 🇬🇧.
| Data | tommy 🇬🇧 |
| --- | --- |
| Tweets downloaded | 3161 |
| Retweets | 205 |
| Short tweets | 603 |
| Tweets kept | 2353 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2gnhi9y4/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @xvbones's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1hbqc87v) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1hbqc87v/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/xvbones')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
|
kamalkraj/bert-base-uncased-squad-v1.0-finetuned | 4b54f4fac95f2ec34039e928b046f4c967701944 | 2022-05-31T10:25:59.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | kamalkraj | null | kamalkraj/bert-base-uncased-squad-v1.0-finetuned | 0 | null | transformers | 37,803 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-uncased-squad-v1.0-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-squad-v1.0-finetuned
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
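No usage example is provided. A minimal extractive-QA sketch follows; the question and context are illustrative only:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="kamalkraj/bert-base-uncased-squad-v1.0-finetuned")
answer = qa(
    question="What does SQuAD contain?",  # illustrative input
    context="SQuAD is a reading comprehension dataset of questions posed on Wikipedia articles.",
)
print(answer["answer"], answer["score"])  # extracted span and its confidence
```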
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00012
- train_batch_size: 48
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.10.0+cu113
- Datasets 1.17.0
- Tokenizers 0.12.1
|
theojolliffe/bart-cnn-science-v3-e5 | 88af7d4c16ba03f3f2008f83fefe2087414434a9 | 2022-05-31T10:55:17.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | theojolliffe | null | theojolliffe/bart-cnn-science-v3-e5 | 0 | null | transformers | 37,804 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-cnn-science-v3-e5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-cnn-science-v3-e5
This model is a fine-tuned version of [theojolliffe/bart-cnn-science](https://huggingface.co/theojolliffe/bart-cnn-science) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8090
- Rouge1: 54.0053
- Rouge2: 35.5018
- Rougel: 37.3204
- Rougelsum: 51.5456
- Gen Len: 142.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| No log | 1.0 | 398 | 0.9935 | 51.9669 | 31.8139 | 34.4748 | 49.5311 | 141.7407 |
| 1.1747 | 2.0 | 796 | 0.8565 | 51.7344 | 31.7341 | 34.3917 | 49.2488 | 141.7222 |
| 0.7125 | 3.0 | 1194 | 0.8252 | 52.829 | 33.2332 | 35.8865 | 50.1883 | 141.5556 |
| 0.4991 | 4.0 | 1592 | 0.8222 | 53.582 | 33.4906 | 35.7232 | 50.589 | 142.0 |
| 0.4991 | 5.0 | 1990 | 0.8090 | 54.0053 | 35.5018 | 37.3204 | 51.5456 | 142.0 |
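The card does not show how these ROUGE scores were computed. A hedged sketch with the `evaluate` library (not listed among the framework versions above) is below; note that it returns fractions in [0, 1], while the table reports values scaled by 100:
```python
import evaluate

# Hedged sketch; predictions/references are placeholders, not the actual eval data.
rouge = evaluate.load("rouge")
predictions = ["the model produced this summary"]
references = ["the reference summary goes here"]
print(rouge.compute(predictions=predictions, references=references))
# -> {'rouge1': ..., 'rouge2': ..., 'rougeL': ..., 'rougeLsum': ...}
```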
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
kamalkraj/bert-base-uncased-squad-v2.0-finetuned | df254ec6b3d72b17ea96833774484663bf8100af | 2022-05-31T11:44:58.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"dataset:squad_v2",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | kamalkraj | null | kamalkraj/bert-base-uncased-squad-v2.0-finetuned | 0 | null | transformers | 37,805 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: bert-base-uncased-squad-v2.0-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-squad-v2.0-finetuned
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad_v2 dataset.
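Because SQuAD v2 includes unanswerable questions, inference typically enables the pipeline's impossible-answer handling. A hedged sketch with illustrative inputs:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="kamalkraj/bert-base-uncased-squad-v2.0-finetuned")
result = qa(
    question="Who directed the film?",  # deliberately unanswerable for this context
    context="The book was published in 1851 in London.",
    handle_impossible_answer=True,  # allow an empty answer when no span fits
)
print(result)
```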
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00012
- train_batch_size: 48
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.10.0+cu113
- Datasets 1.17.0
- Tokenizers 0.12.1
|
huggingtweets/binance-dydx-magiceden | 712fa0b65abbc6de05158e53d82db11e445024ea | 2022-05-31T11:34:01.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/binance-dydx-magiceden | 0 | null | transformers | 37,806 | ---
language: en
thumbnail: http://www.huggingtweets.com/binance-dydx-magiceden/1653996837144/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1529814669493682176/BqZU57Cf_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1490589455786573824/M5_HK15F_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1364590285255290882/hjnIm9bV_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI CYBORG π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Magic Eden πͺ & Binance & dYdX</div>
<div style="text-align: center; font-size: 14px;">@binance-dydx-magiceden</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.
![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Magic Eden 🪄 & Binance & dYdX.
| Data | Magic Eden 🪄 | Binance | dYdX |
| --- | --- | --- | --- |
| Tweets downloaded | 3249 | 3250 | 1679 |
| Retweets | 141 | 194 | 463 |
| Short tweets | 908 | 290 | 40 |
| Tweets kept | 2200 | 2766 | 1176 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/28typldl/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @binance-dydx-magiceden's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/196gmkng) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/196gmkng/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/binance-dydx-magiceden')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/magiceden | 8044a2914e378db23a563c4e25bc1a183272234f | 2022-05-31T11:45:39.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/magiceden | 0 | null | transformers | 37,807 | ---
language: en
thumbnail: http://www.huggingtweets.com/magiceden/1653997534626/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1529814669493682176/BqZU57Cf_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Magic Eden πͺ</div>
<div style="text-align: center; font-size: 14px;">@magiceden</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.
![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Magic Eden 🪄.
| Data | Magic Eden 🪄 |
| --- | --- |
| Tweets downloaded | 3249 |
| Retweets | 141 |
| Short tweets | 908 |
| Tweets kept | 2200 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/9t2x97k9/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @magiceden's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/32j65yat) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/32j65yat/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/magiceden')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
|
bmichele/poetry-generation-nextline-mbart-ws-sv-test | fc54fef6c4e43a10115046ae3ccc0accc9b292ac | 2022-05-31T13:47:27.000Z | [
"pytorch"
] | null | false | bmichele | null | bmichele/poetry-generation-nextline-mbart-ws-sv-test | 0 | null | null | 37,808 | Entry not found |
mikehemberger/topex | bfb169a1ef98ed90cd31c1fb978697c002f2ec33 | 2022-05-31T16:51:57.000Z | [
"pytorch",
"vit",
"image-classification",
"transformers"
] | image-classification | false | mikehemberger | null | mikehemberger/topex | 0 | null | transformers | 37,809 | Entry not found |
tclong/wav2vec2-base-vios-v1 | 0da9b8eff7d0f7771edc5a2c28501ecdb2b6ce59 | 2022-06-02T11:33:05.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | tclong | null | tclong/wav2vec2-base-vios-v1 | 0 | null | transformers | 37,810 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-vios-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-vios-v1
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6352
- Wer: 0.5161
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 5.7944 | 3.98 | 1000 | 1.7427 | 1.0387 |
| 0.7833 | 7.97 | 2000 | 0.4026 | 0.4364 |
| 0.4352 | 11.95 | 3000 | 0.3967 | 0.4042 |
| 0.4988 | 15.94 | 4000 | 0.5446 | 0.4632 |
| 0.7822 | 19.92 | 5000 | 0.6563 | 0.5491 |
| 0.8496 | 23.9 | 6000 | 0.5828 | 0.5045 |
| 0.8072 | 27.89 | 7000 | 0.6318 | 0.5109 |
| 0.8336 | 31.87 | 8000 | 0.6352 | 0.5161 |
| 0.8311 | 35.86 | 9000 | 0.6352 | 0.5161 |
| 0.839 | 39.84 | 10000 | 0.6352 | 0.5161 |
| 0.8297 | 43.82 | 11000 | 0.6352 | 0.5161 |
| 0.8288 | 47.81 | 12000 | 0.6352 | 0.5161 |
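The card does not include the WER computation itself. A hedged sketch with the `evaluate` library (not listed in the framework versions above); the transcripts are placeholders:
```python
import evaluate

wer_metric = evaluate.load("wer")
predictions = ["xin chao cac ban"]      # placeholder hypothesis transcript
references = ["xin chao cac ban nhe"]   # placeholder reference transcript
print(wer_metric.compute(predictions=predictions, references=references))  # 1 error / 5 words = 0.2
```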
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
voidful/wav2vec2-xlsr-53-espeak-librispeech-ft | 00841b1da6316d7eca5dc56867b27a838a8b01cb | 2022-06-04T12:02:47.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | voidful | null | voidful/wav2vec2-xlsr-53-espeak-librispeech-ft | 0 | null | transformers | 37,811 | Entry not found |
huggingtweets/gretathunberg | 480fc5a85a9197c673d5763dfa616beea70836f4 | 2022-05-31T21:00:03.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/gretathunberg | 0 | null | transformers | 37,812 | ---
language: en
thumbnail: http://www.huggingtweets.com/gretathunberg/1654030798001/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1459213153301053442/rL5hhpAI_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Greta Thunberg</div>
<div style="text-align: center; font-size: 14px;">@gretathunberg</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.
![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Greta Thunberg.
| Data | Greta Thunberg |
| --- | --- |
| Tweets downloaded | 3247 |
| Retweets | 2424 |
| Short tweets | 28 |
| Tweets kept | 795 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3ulsdxk8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gretathunberg's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/22acoony) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/22acoony/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/gretathunberg')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
|
jppaolim/v39_Best20Epoch | 1f9bf67e857808b702d3ad931de815df61504717 | 2022-05-31T21:42:21.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | jppaolim | null | jppaolim/v39_Best20Epoch | 0 | null | transformers | 37,813 | # My Story model
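The card contains only sample generations (below). A hedged sketch of how comparable samples could be produced, using an illustrative prompt in the style of those samples:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="jppaolim/v39_Best20Epoch")
# Prompt mirrors the "Arthur goes to the beach." openings in the samples below.
print(generator("Arthur goes to the beach.", max_new_tokens=60)[0]["generated_text"])
```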
Arthur goes to the beach. Arthur is feeling very hot and bored. He decides to go to the beach. He goes to the beach. He spends the day swimming. Arthur cannot wait for the next day to go swimming.
Arthur goes to the beach. Arthur wants to go to the beach. He gets a map. He looks at the map. He goes to the beach. He goes to the beach.
Arthur goes to the beach. Arthur has been working hard all summer. He has been working hard every day. One day his boss asks him to come to work. Arthur is happy to see that his hard work is paying off. Arthur is so glad he took the chance to go to the beach.
Arthur goes to the beach. Arthur is walking to the beach. He sees a small boy playing in the sand. The boy tells Arthur to leave. Arthur tells the boy he doesn't want to go to the beach. Arthur leaves the beach.
Arthur goes to the beach. Arthur is a young boy who lived in a very small town. He wanted to feel like a big city kid. He drove to the coast and swam in the ocean. When he got home, his mom told him to pack up and come back. Arthur packed up and didn't go to the beach anymore.
Arthur goes to the beach. Arthur is bored at home. He decides to go to the local beach. He goes down to the water. Arthur waves. He is glad he went for a walk down the beach.
Arthur goes to the beach. Arthur wants to go to the beach. He has been looking forward to this for a week. He gets to the beach and everything feels perfect. He gets to the water and it is very nice. Arthur has the best day ever.
Arthur goes to the beach. Arthur is going to the beach tomorrow. He is going to play in the ocean. He can't find his keys. He is starting to panic. Arthur finally finds his keys in his car.
Arthur goes to the beach. Arthur is going to the beach tomorrow. He has been working hard all week. He is going to the beach with his friends. Arthur and his friends get in the car to go to the beach. Arthur swims all day and goes to sleep.
Arthur goes to the beach. Arthur wants to go to the beach. He goes to the beach. He swims in the ocean. He has fun. Arthur has a good day.
Arthur goes to the beach. Arthur is a young man. He likes to surf. He decides to go to the beach. He spends the whole day at the beach. He goes to the ocean and has fun.
Arthur goes to the beach. Arthur is a young man. He wants to go to the beach. He gets on his car and drives to the beach. He spends the entire day at the beach. Arthur has the best day ever at the beach.
Arthur goes to the beach. Arthur is a young man. He likes to surf and swim. He decides to go to the beach. Arthur swam all day long. He had a great day at the beach.
Arthur goes to the beach. Arthur is going to the beach tomorrow. He has been working all day, but hasn't been swimming. He decides to go for a swim anyway and cool off. He spends the next few days playing in the ocean. Arthur has the time of his life.
Arthur goes to the beach. Arthur is a young boy who lived in a very small town. He wanted to go to the beach but his dad said no. Arthur asked his dad if he could go alone. Arthur's dad told him that they couldn't afford to go together. Arthur was sad that his dad wouldn't go with him to the beach.
|
skr3178/xlm-roberta-base-finetuned-panx-de | 3dd7f4d94a3698ec129db495509af67ddc3a768a | 2022-05-31T22:09:30.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | skr3178 | null | skr3178/xlm-roberta-base-finetuned-panx-de | 0 | null | transformers | 37,814 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8627004891366169
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1363
- F1: 0.8627
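No usage example is given. A minimal token-classification sketch for German NER (PAN-X.de); the sentence is illustrative:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="skr3178/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge subword pieces into whole-entity spans
)
print(ner("Angela Merkel besuchte Siemens in München."))
```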
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2539 | 1.0 | 525 | 0.1697 | 0.8179 |
| 0.1317 | 2.0 | 1050 | 0.1327 | 0.8516 |
| 0.0819 | 3.0 | 1575 | 0.1363 | 0.8627 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
jppaolim/v40_NeoSmall | 4bfb518bfe206f0aad0f40c12cfa1114e94a153a | 2022-05-31T22:23:08.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | jppaolim | null | jppaolim/v40_NeoSmall | 0 | null | transformers | 37,815 | # My Story model
Arthur goes to the beach. Arthur is in the ocean. He is enjoying the water. He cannot wait for the sun to rise. He goes to the beach. It is very hot outside.
Arthur goes to the beach. Arthur is going to the beach. He is going to the beach. He is going to go swimming. He feels a breeze on his shirt. He feels very relaxed.
Arthur goes to the beach. Arthur is walking on the beach. He notices a sign for the beach club. He asks for a cab. He gets a cab to go to the beach. Arthur and his friends go to the beach together.
Arthur goes to the beach. Arthur was excited to go to the beach. He drove his car to the beach. When he got there, he was amazed at the waves. The waves had a huge sandcastle. Arthur went to the beach and enjoyed the beach.
Arthur goes to the beach. Arthur is playing in the sand with his friends. He is having a great time, and they are all laughing. They all seem to be enjoying themselves. Arthur decides he has to leave. Arthur is sad that he will not be able to go to the beach.
Arthur goes to the beach. Arthur wants to go to the beach. He decides to go to the beach. He sees a sign for the beach. He goes to the beach. Arthur is happy to go to the beach.
Arthur goes to the beach. Arthur is at the beach. He is playing with his friends. They go swimming. Arthur is caught in a water. Arthur is taken to the beach.
Arthur goes to the beach. Arthur is in the ocean. He is bored. He decides to go to the beach. He is bored for a few hours. Arthur leaves the beach.
Arthur goes to the beach. Arthur is out swimming. He is going to the beach. He goes to the beach. He goes to the beach. He goes to the beach.
Arthur goes to the beach. Arthur was at the beach with his friends. They went swimming and laid out on the sand. They found a beach they liked. They decided to go to the beach and play. They were so happy that they decided to go back to the beach.
Arthur goes to the beach. Arthur is at the beach with his family. They are going to go to the beach. Arthur is very excited. He is going to go to the beach. Arthur is happy that he went to the beach.
Arthur goes to the beach. Arthur was at the beach with his friends. They were having a great time. They all went to the beach. They had a great time. Arthur is very happy.
Arthur goes to the beach. Arthur is bored. He decides to go to the beach. He goes to the beach. He goes to the beach. He is happy that he went to the beach.
Arthur goes to the beach. Arthur is bored. He decides to go to the beach. He is very bored. He decides to go to the beach. Arthur is happy that he went to the beach.
Arthur goes to the beach. Arthur is on his way to the beach. He is going to the beach. He is going to the beach. He is going to the beach. Arthur is going to the beach.
|
skr3178/xlm-roberta-base-finetuned-panx-de-fr | 20315cc1e9ca834750316a98caa11e8ee6fd50c5 | 2022-05-31T22:37:32.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | skr3178 | null | skr3178/xlm-roberta-base-finetuned-panx-de-fr | 0 | null | transformers | 37,816 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1644
- F1: 0.8617
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2891 | 1.0 | 715 | 0.1780 | 0.8288 |
| 0.1471 | 2.0 | 1430 | 0.1627 | 0.8509 |
| 0.0947 | 3.0 | 2145 | 0.1644 | 0.8617 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
skr3178/xlm-roberta-base-finetuned-panx-fr | 487e56bc1f9c5c516661cf1f09cf9c7001949cc4 | 2022-05-31T22:56:49.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | skr3178 | null | skr3178/xlm-roberta-base-finetuned-panx-fr | 0 | null | transformers | 37,817 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.835464333781965
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2867
- F1: 0.8355
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5817 | 1.0 | 191 | 0.3395 | 0.7854 |
| 0.2617 | 2.0 | 382 | 0.2856 | 0.8278 |
| 0.1708 | 3.0 | 573 | 0.2867 | 0.8355 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
skr3178/xlm-roberta-base-finetuned-panx-it | 6ca93c1d40bbb8c2a7df3d87cd6669f22603f55d | 2022-05-31T23:14:06.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | skr3178 | null | skr3178/xlm-roberta-base-finetuned-panx-it | 0 | null | transformers | 37,818 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.8247845711940912
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2421
- F1: 0.8248
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.809 | 1.0 | 70 | 0.3380 | 0.7183 |
| 0.2939 | 2.0 | 140 | 0.2582 | 0.7977 |
| 0.1813 | 3.0 | 210 | 0.2421 | 0.8248 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
skr3178/xlm-roberta-base-finetuned-panx-en | c34a56a5c7d52d37e1e11a94bd17a1985e41e782 | 2022-05-31T23:31:12.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | skr3178 | null | skr3178/xlm-roberta-base-finetuned-panx-en | 0 | null | transformers | 37,819 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.en
metrics:
- name: F1
type: f1
value: 0.692179700499168
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3921
- F1: 0.6922
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1465 | 1.0 | 50 | 0.5838 | 0.4777 |
| 0.5055 | 2.0 | 100 | 0.4477 | 0.6374 |
| 0.3713 | 3.0 | 150 | 0.3921 | 0.6922 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
skr3178/xlm-roberta-base-finetuned-panx-all | 36e4754f3b373ec510abd585ecb882d73ad203a0 | 2022-05-31T23:55:44.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | skr3178 | null | skr3178/xlm-roberta-base-finetuned-panx-all | 0 | null | transformers | 37,820 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1752
- F1: 0.8557
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3 | 1.0 | 835 | 0.1862 | 0.8114 |
| 0.1552 | 2.0 | 1670 | 0.1758 | 0.8426 |
| 0.1002 | 3.0 | 2505 | 0.1752 | 0.8557 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
logo-data-science/t5-finetuned-eng | 628a80defb1ab1fa18317a91cf912e9c3bf373db | 2022-06-01T09:25:13.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"license:gpl",
"autotrain_compatible"
] | text2text-generation | false | logo-data-science | null | logo-data-science/t5-finetuned-eng | 0 | null | transformers | 37,821 | ---
license: gpl
---
|
roshnir/mBert-finetuned-mlqa-dev-en-hi | d81c0d6cbe589b5d1010a54ba1e822152a7edc4c | 2022-06-01T09:23:11.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | roshnir | null | roshnir/mBert-finetuned-mlqa-dev-en-hi | 0 | null | transformers | 37,822 | Entry not found |
pravesh/wav2vec2-large-xls-r-300m-Hindi-colab-v4 | da059722c9738f4ae43ea65c493a9bff24b72d9a | 2022-06-01T12:23:43.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | pravesh | null | pravesh/wav2vec2-large-xls-r-300m-Hindi-colab-v4 | 0 | null | transformers | 37,823 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-Hindi-colab-v4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-Hindi-colab-v4
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
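These settings map onto `transformers.TrainingArguments` roughly as follows; this is a hedged reconstruction, not the script actually used, and `output_dir` is a placeholder:
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-300m-Hindi-colab-v4",  # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # effective train batch size: 16 * 2 = 32
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=30,
    fp16=True,  # "Native AMP" mixed precision
)
```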
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
susghosh/roberta-large-squad | 9877a5f5304d9addde3202ff65e8f74e2ba94600 | 2022-06-01T20:54:10.000Z | [
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | susghosh | null | susghosh/roberta-large-squad | 0 | null | transformers | 37,824 | Entry not found |
tmills/timex-thyme-colon-pubmedbert | 5da811e37a51e1be102d7942b9d790dfad33122b | 2022-06-01T17:04:40.000Z | [
"pytorch",
"cnlpt",
"transformers",
"license:apache-2.0"
] | null | false | tmills | null | tmills/timex-thyme-colon-pubmedbert | 0 | null | transformers | 37,825 | ---
license: apache-2.0
---
|
tmills/event-thyme-colon-pubmedbert | 5b254eb8700ff818ea3a99075a15bb8a3243fe69 | 2022-06-01T17:08:50.000Z | [
"pytorch",
"cnlpt",
"transformers",
"license:apache-2.0"
] | null | false | tmills | null | tmills/event-thyme-colon-pubmedbert | 0 | null | transformers | 37,826 | ---
license: apache-2.0
---
|
jxm/u-PMLM-A | 17fd0a7912328d72478ade337ae7c24ba3a408c9 | 2022-06-01T17:38:06.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | jxm | null | jxm/u-PMLM-A | 0 | null | transformers | 37,827 | Entry not found |
roshnir/mBert-finetuned-mlqa-dev-en | 98687c5e4f42dec4078fc03132026595873f436e | 2022-06-01T18:53:54.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | roshnir | null | roshnir/mBert-finetuned-mlqa-dev-en | 0 | null | transformers | 37,828 | Entry not found |
huggingtweets/disgustingact84-kickswish | 1441e770706717413f2d1990cb1391af0a5cbb6b | 2022-06-01T19:44:49.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/disgustingact84-kickswish | 0 | null | transformers | 37,829 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1258515252163022848/_O1bOXBQ_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1530279378332041220/1ysZA-S8_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI CYBORG π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Justin Moran & ToxicAct πΊπΈ β½οΈ</div>
<div style="text-align: center; font-size: 14px;">@disgustingact84-kickswish</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.
![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Justin Moran & ToxicAct 🇺🇸 ⚽️.
| Data | Justin Moran | ToxicAct 🇺🇸 ⚽️ |
| --- | --- | --- |
| Tweets downloaded | 3237 | 3247 |
| Retweets | 286 | 260 |
| Short tweets | 81 | 333 |
| Tweets kept | 2870 | 2654 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3vwd4eeo/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @disgustingact84-kickswish's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/24jluur0) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/24jluur0/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/disgustingact84-kickswish')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/disgustingact84-kickswish-managertactical | 26ab1ba6f9ce2703bcc2ea0b809c9a952c4516f3 | 2022-06-01T20:24:09.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/disgustingact84-kickswish-managertactical | 0 | null | transformers | 37,830 | ---
language: en
thumbnail: http://www.huggingtweets.com/disgustingact84-kickswish-managertactical/1654115021712/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1530279378332041220/1ysZA-S8_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1258515252163022848/_O1bOXBQ_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1360389551336865797/6RERF_Gg_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI CYBORG π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">ToxicAct πΊπΈ β½οΈ & Justin Moran & Tactical Manager</div>
<div style="text-align: center; font-size: 14px;">@disgustingact84-kickswish-managertactical</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.
![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from ToxicAct 🇺🇸 ⚽️ & Justin Moran & Tactical Manager.
| Data | ToxicAct 🇺🇸 ⚽️ | Justin Moran | Tactical Manager |
| --- | --- | --- | --- |
| Tweets downloaded | 3247 | 3237 | 3250 |
| Retweets | 260 | 286 | 47 |
| Short tweets | 333 | 81 | 302 |
| Tweets kept | 2654 | 2870 | 2901 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3rtzdst3/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @disgustingact84-kickswish-managertactical's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3lhxffhi) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3lhxffhi/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/disgustingact84-kickswish-managertactical')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
|
lmqg/t5-large-subjqa-books | be5be22dd3525016cac4d72f003a6dae90c58fc8 | 2022-06-02T14:06:26.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/t5-large-subjqa-books | 0 | null | transformers | 37,831 | Entry not found |
huggingtweets/mls_buzz-mlstransfers-transfersmls | 3ed57533333a30e615444f9170027c6204110cbb | 2022-06-01T20:57:13.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/mls_buzz-mlstransfers-transfersmls | 0 | null | transformers | 37,832 | ---
language: en
thumbnail: http://www.huggingtweets.com/mls_buzz-mlstransfers-transfersmls/1654117028998/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1142613360854388738/C49XegQF_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/417716955076763648/_e97ys3b_400x400.jpeg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1229972304689614848/EqOwTdY8_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">MLS Buzz & MLS Transfers & Will Forbes</div>
<div style="text-align: center; font-size: 14px;">@mls_buzz-mlstransfers-transfersmls</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.
![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from MLS Buzz & MLS Transfers & Will Forbes.
| Data | MLS Buzz | MLS Transfers | Will Forbes |
| --- | --- | --- | --- |
| Tweets downloaded | 3250 | 3248 | 3247 |
| Retweets | 32 | 811 | 1136 |
| Short tweets | 167 | 475 | 359 |
| Tweets kept | 3051 | 1962 | 1752 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/29rusxig/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mls_buzz-mlstransfers-transfersmls's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2qzhkike) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2qzhkike/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/mls_buzz-mlstransfers-transfersmls')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
|
lmqg/t5-large-subjqa-tripadvisor | 1da63a185d382b5bdb32afb50b1a27098ff8a27b | 2022-06-03T00:05:03.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/t5-large-subjqa-tripadvisor | 0 | null | transformers | 37,833 | Entry not found |
lmqg/t5-large-subjqa-grocery | d02f3b52d230f770f2186ad3d0c32e9fa64e6d2c | 2022-06-02T18:12:47.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/t5-large-subjqa-grocery | 0 | null | transformers | 37,834 | Entry not found |
lmqg/t5-large-subjqa-movies | 76dcc4a85e049f80d88c74d86ea82f6add3b287c | 2022-06-02T20:08:07.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/t5-large-subjqa-movies | 0 | null | transformers | 37,835 | Entry not found |
lmqg/t5-large-subjqa-electronics | 28752831479f8395accb27cb63806bc35ba25c6f | 2022-06-02T16:10:34.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/t5-large-subjqa-electronics | 0 | null | transformers | 37,836 | Entry not found |
q2-jlbar/swin-tiny-patch4-window7-224-finetuned-eurosat | 87867e84c73819fa3de2d8e872b8c52d8e0a42a2 | 2022-06-06T14:24:15.000Z | [
"pytorch",
"swin",
"image-classification",
"dataset:image_folder",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | q2-jlbar | null | q2-jlbar/swin-tiny-patch4-window7-224-finetuned-eurosat | 0 | null | transformers | 37,837 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- image_folder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: image_folder
type: image_folder
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9618518518518518
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1199
- Accuracy: 0.9619
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged code sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
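As an illustration only, here is how the values above would map onto `transformers.TrainingArguments`; this is a sketch, not the authors' exact training script, and `output_dir` is a placeholder.

```python
from transformers import TrainingArguments

# Illustrative mapping of the hyperparameters listed above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="swin-tiny-patch4-window7-224-finetuned-eurosat",
    learning_rate=5e-5,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    gradient_accumulation_steps=4,  # effective train batch size of 512
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=3,
)
```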
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3627 | 0.99 | 47 | 0.1988 | 0.9389 |
| 0.2202 | 1.99 | 94 | 0.1280 | 0.9604 |
| 0.1948 | 2.99 | 141 | 0.1199 | 0.9619 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
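## How to use

As a minimal sketch, the checkpoint can be queried with the image-classification pipeline; `satellite_image.png` is a placeholder for your own EuroSAT-style input, not a file from this repository.

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="q2-jlbar/swin-tiny-patch4-window7-224-finetuned-eurosat",
)
# Returns the top predicted land-cover classes with their scores.
print(classifier("satellite_image.png"))
```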
|
lmqg/bart-large-subjqa-restaurants | 5f0c532d3706e07a29827eb425c5f8007271644d | 2022-06-05T01:54:16.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/bart-large-subjqa-restaurants | 0 | null | transformers | 37,838 | Entry not found |
lmqg/bart-large-subjqa-electronics | 8aeb84dc1f30435c41f0582609da1d3801aa8fb3 | 2022-06-02T16:39:00.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/bart-large-subjqa-electronics | 0 | null | transformers | 37,839 | Entry not found |
lmqg/bart-base-subjqa-electronics | 9648a6730585914b8480768e379037030b43988a | 2022-06-02T16:19:36.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/bart-base-subjqa-electronics | 0 | null | transformers | 37,840 | Entry not found |
lmqg/t5-base-subjqa-restaurants | 40bd8f30699609dd8c1e3f8868ad629dbb4ecb03 | 2022-06-02T21:13:18.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/t5-base-subjqa-restaurants | 0 | null | transformers | 37,841 | Entry not found |
lmqg/bart-large-subjqa-grocery | 950812ff158c7678699f30cdc605cac27cff4873 | 2022-06-02T18:41:52.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/bart-large-subjqa-grocery | 0 | null | transformers | 37,842 | Entry not found |
lmqg/t5-small-subjqa-restaurants | a7ac86cb11875f81faf710d44ba73ecac2ec89e0 | 2022-06-02T20:47:41.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/t5-small-subjqa-restaurants | 0 | null | transformers | 37,843 | Entry not found |
lmqg/bart-base-subjqa-grocery | e46f0856298216b4a860ef031eafb5a5e5c242fe | 2022-06-02T18:22:27.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/bart-base-subjqa-grocery | 0 | null | transformers | 37,844 | Entry not found |
lmqg/bart-base-subjqa-restaurants | 68369cdab20879dcbaf31c552d277f5c727c3867 | 2022-06-02T22:16:28.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/bart-base-subjqa-restaurants | 0 | null | transformers | 37,845 | Entry not found |
lmqg/bart-base-subjqa-tripadvisor | b93f124e61a059266c8551be5ebbacae3cc7b2f8 | 2022-06-03T00:14:50.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/bart-base-subjqa-tripadvisor | 0 | null | transformers | 37,846 | Entry not found |
lmqg/t5-base-subjqa-tripadvisor | bcba1f71a0c67150a610b7987dadd8d9ea0d8931 | 2022-06-02T23:10:03.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/t5-base-subjqa-tripadvisor | 0 | null | transformers | 37,847 | Entry not found |
lmqg/t5-small-subjqa-books | b5e956cb7f8ca1a4ab692ec6a0359db79238b217 | 2022-06-02T12:50:00.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/t5-small-subjqa-books | 0 | null | transformers | 37,848 | Entry not found |
lmqg/bart-large-subjqa-tripadvisor | fa31c27640d49af60465d9280bcbdd6f395b0f0c | 2022-06-03T00:33:54.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/bart-large-subjqa-tripadvisor | 0 | null | transformers | 37,849 | Entry not found |
lmqg/t5-small-subjqa-grocery | 852b621cdc511fc0e9256e06f465e93c0411c760 | 2022-06-02T16:49:05.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/t5-small-subjqa-grocery | 0 | null | transformers | 37,850 | Entry not found |
lmqg/bart-base-subjqa-books | b6151618d12950764d3d90cafda3305e97a2a517 | 2022-06-02T14:17:43.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/bart-base-subjqa-books | 0 | null | transformers | 37,851 | Entry not found |
lmqg/t5-base-subjqa-grocery | 07f0e720d9f5a7e5a0100d8e5ddf288281a0c6b8 | 2022-06-02T17:14:16.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/t5-base-subjqa-grocery | 0 | null | transformers | 37,852 | Entry not found |
lmqg/t5-base-subjqa-movies | f5941f6f7f3ec1d32f19d27f5224f8b1dd1b64f5 | 2022-06-02T19:13:30.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/t5-base-subjqa-movies | 0 | null | transformers | 37,853 | Entry not found |
lmqg/t5-small-subjqa-electronics | 20c7817a80a9080d99478e8be70ffce442c87634 | 2022-06-02T14:53:45.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/t5-small-subjqa-electronics | 0 | null | transformers | 37,854 | Entry not found |
lmqg/t5-small-subjqa-tripadvisor | 5d04b8b5376b2742063fee0e5527224bad456971 | 2022-06-02T22:45:32.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/t5-small-subjqa-tripadvisor | 0 | null | transformers | 37,855 | Entry not found |
lmqg/bart-large-subjqa-movies | 72df74c24f2e08694dacf3d2e2e6abff7add6c44 | 2022-06-02T20:36:05.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/bart-large-subjqa-movies | 0 | null | transformers | 37,856 | Entry not found |
lmqg/bart-base-subjqa-movies | 1e1363380cc00d481ddbf09b170624f26bb2fd0c | 2022-06-02T20:17:56.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/bart-base-subjqa-movies | 0 | null | transformers | 37,857 | Entry not found |
x574chen/wav2vec2-common_voice-tr-demo | 62f61685312cf5e71be40276255b245fad4e7508 | 2022-06-05T17:14:10.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"tr",
"dataset:common_voice",
"transformers",
"common_voice",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | x574chen | null | x574chen/wav2vec2-common_voice-tr-demo | 0 | null | transformers | 37,858 | ---
language:
- tr
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-common_voice-tr-demo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-common_voice-tr-demo
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3815
- Wer: 0.3493
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 0.92 | 100 | 3.5559 | 1.0 |
| No log | 1.83 | 200 | 3.0161 | 0.9999 |
| No log | 2.75 | 300 | 0.8587 | 0.7443 |
| No log | 3.67 | 400 | 0.5855 | 0.6121 |
| 3.1095 | 4.59 | 500 | 0.4841 | 0.5204 |
| 3.1095 | 5.5 | 600 | 0.4533 | 0.4923 |
| 3.1095 | 6.42 | 700 | 0.4157 | 0.4342 |
| 3.1095 | 7.34 | 800 | 0.4304 | 0.4334 |
| 3.1095 | 8.26 | 900 | 0.4097 | 0.4068 |
| 0.2249 | 9.17 | 1000 | 0.4049 | 0.3881 |
| 0.2249 | 10.09 | 1100 | 0.3993 | 0.3809 |
| 0.2249 | 11.01 | 1200 | 0.3855 | 0.3782 |
| 0.2249 | 11.93 | 1300 | 0.3923 | 0.3713 |
| 0.2249 | 12.84 | 1400 | 0.3833 | 0.3591 |
| 0.1029 | 13.76 | 1500 | 0.3811 | 0.3570 |
| 0.1029 | 14.68 | 1600 | 0.3834 | 0.3499 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.12.0a0+2c916ef
- Datasets 2.2.2
- Tokenizers 0.10.3
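## How to use

A minimal inference sketch, assuming a local Turkish speech recording; `sample_tr.wav` and the resampling step are illustrative additions, not part of the original card.

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("x574chen/wav2vec2-common_voice-tr-demo")
model = Wav2Vec2ForCTC.from_pretrained("x574chen/wav2vec2-common_voice-tr-demo")

# Load a clip and resample to the 16 kHz rate the model expects.
speech, rate = torchaudio.load("sample_tr.wav")
speech = torchaudio.functional.resample(speech, rate, 16_000).squeeze()

inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```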
|
huggingtweets/contextmemlab-jeremyrmanning | 22c78ab17d30fdf47bf53fe197c1d8b895cc9b02 | 2022-06-02T06:59:24.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/contextmemlab-jeremyrmanning | 0 | null | transformers | 37,859 | ---
language: en
thumbnail: http://www.huggingtweets.com/contextmemlab-jeremyrmanning/1654153159177/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1268155013882396672/Ev_5MJ-E_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/733324858621341698/iW5s1aAc_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Jeremy Manning & Context Lab</div>
<div style="text-align: center; font-size: 14px;">@contextmemlab-jeremyrmanning</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.
![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Jeremy Manning & Context Lab.
| Data | Jeremy Manning | Context Lab |
| --- | --- | --- |
| Tweets downloaded | 1635 | 206 |
| Retweets | 1093 | 44 |
| Short tweets | 88 | 1 |
| Tweets kept | 454 | 161 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1383c0di/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @contextmemlab-jeremyrmanning's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1nunflkl) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1nunflkl/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/contextmemlab-jeremyrmanning')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/paxt0n4 | 9fb6b0496a95b9e51d6843d948feca32cd23c738 | 2022-06-02T07:30:57.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/paxt0n4 | 0 | null | transformers | 37,860 | ---
language: en
thumbnail: http://www.huggingtweets.com/paxt0n4/1654155052782/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1359906890340306950/s5cXHS11_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Paxton Fitzpatrick</div>
<div style="text-align: center; font-size: 14px;">@paxt0n4</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.
![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Paxton Fitzpatrick.
| Data | Paxton Fitzpatrick |
| --- | --- |
| Tweets downloaded | 2551 |
| Retweets | 1177 |
| Short tweets | 326 |
| Tweets kept | 1048 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1x9k9uk2/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @paxt0n4's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/34fd5zca) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/34fd5zca/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/paxt0n4')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
|
nglaura/skimformer | 15a959f372085ee51e4afe20ce9812dafe355b21 | 2022-06-02T15:37:12.000Z | [
"pytorch",
"skimformer",
"fill-mask",
"arxiv:2109.01078",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | nglaura | null | nglaura/skimformer | 0 | null | transformers | 37,861 | ---
license: apache-2.0
---
# Skimformer
A collaboration between [reciTAL](https://recital.ai/en/) & [MLIA](https://mlia.lip6.fr/) (ISIR, Sorbonne Université)
## Model description
Skimformer is a two-stage Transformer that replaces self-attention with Skim-Attention, a self-attention module that computes attention solely based on the 2D positions of tokens in the page. The model adopts a two-step approach: first, the skim-attention scores are computed once and only once using layout information alone; then, these attentions are used in every layer of a text-based Transformer encoder. For more details, please refer to our paper:
[Skim-Attention: Learning to Focus via Document Layout](https://arxiv.org/abs/2109.01078)
Laura Nguyen, Thomas Scialom, Jacopo Staiano, Benjamin Piwowarski, [EMNLP 2021](https://2021.emnlp.org/papers)
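## How to use

Skimformer is a custom architecture, so the standard Auto classes only work if the repository ships the corresponding modeling code. The sketch below assumes that is the case (hence `trust_remote_code=True`); verify against the repository before relying on it.

```python
from transformers import AutoModel, AutoTokenizer

# Assumption: custom "skimformer" modeling code is available in the repo.
tokenizer = AutoTokenizer.from_pretrained("nglaura/skimformer")
model = AutoModel.from_pretrained("nglaura/skimformer", trust_remote_code=True)

# Note: skim-attention is computed from 2D token positions, so real inputs
# need layout information (token bounding boxes), not text alone; see the paper.
```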
## Citation
``` latex
@article{nguyen2021skimattention,
title={Skim-Attention: Learning to Focus via Document Layout},
author={Laura Nguyen and Thomas Scialom and Jacopo Staiano and Benjamin Piwowarski},
  journal={arXiv preprint arXiv:2109.01078},
  year={2021}
}
``` |
huggingtweets/eurovision | 188ae8eeadf2f53d34d821abc4b60c5f9d880f21 | 2022-06-02T12:18:15.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/eurovision | 0 | null | transformers | 37,862 | ---
language: en
thumbnail: http://www.huggingtweets.com/eurovision/1654172290217/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1531770686309646338/i1LUNPuy_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Eurovision Song Contest</div>
<div style="text-align: center; font-size: 14px;">@eurovision</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.
![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Eurovision Song Contest.
| Data | Eurovision Song Contest |
| --- | --- |
| Tweets downloaded | 3249 |
| Retweets | 483 |
| Short tweets | 146 |
| Tweets kept | 2620 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/28rylgus/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @eurovision's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/wfx4mn2i) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/wfx4mn2i/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/eurovision')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/esfinn | bf20e894fdd35d19197d36e16c9196921336cda2 | 2022-06-02T12:35:17.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/esfinn | 0 | null | transformers | 37,863 | ---
language: en
thumbnail: http://www.huggingtweets.com/esfinn/1654173312571/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/773905129129046016/EZcRPMpd_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Emily Finn</div>
<div style="text-align: center; font-size: 14px;">@esfinn</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.
![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Emily Finn.
| Data | Emily Finn |
| --- | --- |
| Tweets downloaded | 767 |
| Retweets | 209 |
| Short tweets | 72 |
| Tweets kept | 486 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/22n1p2vw/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @esfinn's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/caz2a2vq) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/caz2a2vq/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/esfinn')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/gaytimes-grindr | af7c10e1efdfccba41b7e857947b478ffda2a045 | 2022-06-02T12:50:16.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/gaytimes-grindr | 0 | null | transformers | 37,864 | ---
language: en
thumbnail: http://www.huggingtweets.com/gaytimes-grindr/1654174210818/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1531896348416483329/bPsiy7hP_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1456594968383041538/Ab0hl5Xl_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Grindr & GAY TIMES</div>
<div style="text-align: center; font-size: 14px;">@gaytimes-grindr</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.
![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Grindr & GAY TIMES.
| Data | Grindr | GAY TIMES |
| --- | --- | --- |
| Tweets downloaded | 3237 | 3250 |
| Retweets | 149 | 239 |
| Short tweets | 749 | 32 |
| Tweets kept | 2339 | 2979 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/10mzbkyr/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gaytimes-grindr's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/k5enwplb) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/k5enwplb/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/gaytimes-grindr')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/eurunuela | 6423356e88cddf05cc2b5c74f3b08ab95006ffeb | 2022-06-02T12:50:57.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/eurunuela | 0 | null | transformers | 37,865 | ---
language: en
thumbnail: http://www.huggingtweets.com/eurunuela/1654174252782/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1476203864063893505/j7Ep0Muv_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Eneko Uruñuela</div>
<div style="text-align: center; font-size: 14px;">@eurunuela</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.
![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Eneko Uruñuela.
| Data | Eneko Uruñuela |
| --- | --- |
| Tweets downloaded | 1267 |
| Retweets | 241 |
| Short tweets | 42 |
| Tweets kept | 984 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3fhgg7tg/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @eurunuela's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2ndd7uaz) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2ndd7uaz/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/eurunuela')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/claregrall | db9593a8c6a7d951570ae5b5016d393e657a4ba9 | 2022-06-02T13:17:25.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/claregrall | 0 | null | transformers | 37,866 | ---
language: en
thumbnail: http://www.huggingtweets.com/claregrall/1654175841134/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1197255800114339842/9ptyNMcO_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Clare Grall</div>
<div style="text-align: center; font-size: 14px;">@claregrall</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.
![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Clare Grall.
| Data | Clare Grall |
| --- | --- |
| Tweets downloaded | 873 |
| Retweets | 176 |
| Short tweets | 51 |
| Tweets kept | 646 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3fu0nxex/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @claregrall's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2yox9655) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2yox9655/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/claregrall')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/willsavino | cc33ca7fce117c289e77166b7114af72efe2d2d7 | 2022-06-02T13:06:29.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/willsavino | 0 | null | transformers | 37,867 | ---
language: en
thumbnail: http://www.huggingtweets.com/willsavino/1654175184979/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1078115982768525317/wk6NTSE0_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Will Savino</div>
<div style="text-align: center; font-size: 14px;">@willsavino</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.
![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Will Savino.
| Data | Will Savino |
| --- | --- |
| Tweets downloaded | 3229 |
| Retweets | 355 |
| Short tweets | 244 |
| Tweets kept | 2630 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2nhwww0u/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @willsavino's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3k5ueoap) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3k5ueoap/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/willsavino')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
|
wvangils/DistilGPT2-Beatles-Lyrics-finetuned-newlyrics | ba697192b3f5436eeab752b1633c4dd5de2da2fa | 2022-06-17T11:20:50.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | wvangils | null | wvangils/DistilGPT2-Beatles-Lyrics-finetuned-newlyrics | 0 | null | transformers | 37,868 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: DistilGPT2-Beatles-Lyrics-finetuned-newlyrics
results: []
widget:
- text: "Last night in Kiev the"
example_title: "Kiev"
- text: "It hasn't rained in weeks"
example_title: "Rain"
---
# DistilGPT2-Beatles-Lyrics-finetuned-newlyrics
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the [Cmotions - Beatles lyrics](https://huggingface.co/datasets/cmotions/Beatles_lyrics) dataset. It will complete an input prompt with Beatles-like text.
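As a quick, hedged example, the standard text-generation pipeline works with this checkpoint; the prompt reuses one of the widget examples above, and the sampling settings are illustrative rather than values tuned by the authors.

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="wvangils/DistilGPT2-Beatles-Lyrics-finetuned-newlyrics",
)
# Sampling settings are illustrative; adjust max_length for longer lyrics.
print(generator("Last night in Kiev the", max_length=50, do_sample=True))
```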
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.786 | 1.0 | 18 | 2.0410 |
| 2.5587 | 2.0 | 36 | 1.9280 |
| 2.3651 | 3.0 | 54 | 1.8829 |
| 2.2759 | 4.0 | 72 | 1.8473 |
| 2.1241 | 5.0 | 90 | 1.8237 |
| 2.1018 | 6.0 | 108 | 1.8535 |
| 1.8537 | 7.0 | 126 | 1.8497 |
| 1.7859 | 8.0 | 144 | 1.8618 |
| 1.69 | 9.0 | 162 | 1.8657 |
| 1.6481 | 10.0 | 180 | 1.8711 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
huggingtweets/quora-reddit | 782bcfb2ef51280c4030a05b8eeb933f4672bd6f | 2022-06-03T12:09:44.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/quora-reddit | 0 | null | transformers | 37,869 | ---
language: en
thumbnail: http://www.huggingtweets.com/quora-reddit/1654258179125/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1532031893318737920/N4nwSAZv_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1333471260483801089/OtTAJXEZ_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Quora & Reddit</div>
<div style="text-align: center; font-size: 14px;">@quora-reddit</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.
![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Quora & Reddit.
| Data | Quora | Reddit |
| --- | --- | --- |
| Tweets downloaded | 3244 | 3248 |
| Retweets | 181 | 331 |
| Short tweets | 22 | 392 |
| Tweets kept | 3041 | 2525 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/12sw605d/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @quora-reddit's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2g51clcs) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2g51clcs/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/quora-reddit')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
|
tclong/wav2vec2-base-vios-v2 | fc29afd0af8c16648e852c591a9e5c0d7c0c059c | 2022-06-06T17:45:04.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:vivos_dataset",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | tclong | null | tclong/wav2vec2-base-vios-v2 | 0 | null | transformers | 37,870 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- vivos_dataset
model-index:
- name: wav2vec2-base-vios-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-vios-v2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the vivos_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6056
- Wer: 0.2442
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 7.8344 | 0.69 | 500 | 3.5012 | 1.0 |
| 3.4505 | 1.37 | 1000 | 3.4081 | 1.0 |
| 2.1426 | 2.06 | 1500 | 0.8761 | 0.6241 |
| 0.8801 | 2.74 | 2000 | 0.5476 | 0.4241 |
| 0.6453 | 3.43 | 2500 | 0.4384 | 0.3495 |
| 0.5449 | 4.12 | 3000 | 0.4055 | 0.3160 |
| 0.4862 | 4.8 | 3500 | 0.3815 | 0.3002 |
| 0.4435 | 5.49 | 4000 | 0.3525 | 0.2776 |
| 0.4205 | 6.17 | 4500 | 0.3660 | 0.2725 |
| 0.3974 | 6.86 | 5000 | 0.3386 | 0.2565 |
| 0.3758 | 7.54 | 5500 | 0.3492 | 0.2607 |
| 0.3595 | 8.23 | 6000 | 0.3391 | 0.2441 |
| 0.3438 | 8.92 | 6500 | 0.3255 | 0.2354 |
| 0.3308 | 9.6 | 7000 | 0.3379 | 0.2422 |
| 0.3265 | 10.29 | 7500 | 0.3375 | 0.2349 |
| 0.311 | 10.97 | 8000 | 0.3356 | 0.2306 |
| 0.3071 | 11.66 | 8500 | 0.3286 | 0.2249 |
| 0.2941 | 12.35 | 9000 | 0.3176 | 0.2211 |
| 0.296 | 13.03 | 9500 | 0.3268 | 0.2257 |
| 0.2852 | 13.72 | 10000 | 0.3265 | 0.2196 |
| 0.3102 | 14.4 | 10500 | 0.3390 | 0.2209 |
| 0.2974 | 15.09 | 11000 | 0.3493 | 0.2199 |
| 0.3433 | 15.78 | 11500 | 0.3687 | 0.2199 |
| 0.3526 | 16.46 | 12000 | 0.3698 | 0.2170 |
| 0.36 | 17.15 | 12500 | 0.4110 | 0.2227 |
| 0.4322 | 17.83 | 13000 | 0.4830 | 0.2290 |
| 0.4973 | 18.52 | 13500 | 0.5280 | 0.2356 |
| 0.5701 | 19.2 | 14000 | 0.5990 | 0.2370 |
| 0.6014 | 19.89 | 14500 | 0.6056 | 0.2442 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
finiteautomata/pepe | 08f0926173c7a344390e8c3363f8d9b59b99d8d0 | 2022-06-02T13:53:07.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | finiteautomata | null | finiteautomata/pepe | 0 | null | transformers | 37,871 | Entry not found |
huggingtweets/vborghesani | 2a9f827021a64002f177bd4f267b66d4f2a77035 | 2022-06-02T14:00:46.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/vborghesani | 0 | null | transformers | 37,872 | ---
language: en
thumbnail: http://www.huggingtweets.com/vborghesani/1654178225151/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1279408626877304833/28JtkdiE_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Valentina Borghesani</div>
<div style="text-align: center; font-size: 14px;">@vborghesani</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.
![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Valentina Borghesani.
| Data | Valentina Borghesani |
| --- | --- |
| Tweets downloaded | 1024 |
| Retweets | 140 |
| Short tweets | 23 |
| Tweets kept | 861 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/21epnhoj/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @vborghesani's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1vf22msq) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1vf22msq/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/vborghesani')
generator("My dream is", num_return_sequences=5)
```
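The sampling behaviour can be tuned with the usual generation arguments; the values below are illustrative, not settings used by the project:
```python
# Illustrative sampling settings for more varied completions
generator(
    "My dream is",
    num_return_sequences=5,
    max_length=60,
    do_sample=True,
    top_p=0.95,
)
```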
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/ppinheirochagas | b7c22f596eca05a22e9ce0c4a282b3637325caf1 | 2022-06-02T17:24:17.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/ppinheirochagas | 0 | null | transformers | 37,873 | ---
language: en
thumbnail: http://www.huggingtweets.com/ppinheirochagas/1654190652962/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1510853995690033153/-mRCiWB0_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Pedro Pinheiro-Chagas</div>
<div style="text-align: center; font-size: 14px;">@ppinheirochagas</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.
![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Pedro Pinheiro-Chagas.
| Data | Pedro Pinheiro-Chagas |
| --- | --- |
| Tweets downloaded | 1001 |
| Retweets | 658 |
| Short tweets | 95 |
| Tweets kept | 248 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1f73x4s5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ppinheirochagas's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/10v1i51v) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/10v1i51v/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ppinheirochagas')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/rauschermri | 7feda5e6c7717e00411660d8fa24f13d726e404c | 2022-06-02T18:12:11.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/rauschermri | 0 | null | transformers | 37,874 | ---
language: en
thumbnail: http://www.huggingtweets.com/rauschermri/1654193526819/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1504854177993744386/k8Tb-5zg_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Alexander Rauscher</div>
<div style="text-align: center; font-size: 14px;">@rauschermri</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.
![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Alexander Rauscher.
| Data | Alexander Rauscher |
| --- | --- |
| Tweets downloaded | 3245 |
| Retweets | 651 |
| Short tweets | 253 |
| Tweets kept | 2341 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/clzasreo/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @rauschermri's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/e0w0wjmj) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/e0w0wjmj/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/rauschermri')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
|
victorlee071200/distilbert-base-uncased-finetuned-squad | 12bea89b51aee18d0315235fc9d5f2ab694ce6bd | 2022-06-02T20:55:20.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | victorlee071200 | null | victorlee071200/distilbert-base-uncased-finetuned-squad | 0 | null | transformers | 37,875 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1458
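The card is a stub, but the checkpoint should load into the standard question-answering pipeline; a minimal sketch with made-up inputs:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="victorlee071200/distilbert-base-uncased-finetuned-squad")

# Any SQuAD-style context/question pair works; these are for illustration
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```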
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2145 | 1.0 | 5533 | 1.1624 |
| 0.9531 | 2.0 | 11066 | 1.1257 |
| 0.7566 | 3.0 | 16599 | 1.1458 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
roshnir/mBert-finetuned-mlqa-dev-ar-hi | baa8582a3f202c5b8387fad95b0f32b74cd016a7 | 2022-06-02T20:27:21.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | roshnir | null | roshnir/mBert-finetuned-mlqa-dev-ar-hi | 0 | null | transformers | 37,876 | Entry not found |
huggingtweets/mrikasper | 055f8a83161adba5971719e61b6c7aa066ba3fab | 2022-06-02T21:40:46.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/mrikasper | 0 | null | transformers | 37,877 | ---
language: en
thumbnail: http://www.huggingtweets.com/mrikasper/1654206041092/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/914206875419332608/26FrQMV2_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Lars Kasper</div>
<div style="text-align: center; font-size: 14px;">@mrikasper</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.
![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Lars Kasper.
| Data | Lars Kasper |
| --- | --- |
| Tweets downloaded | 475 |
| Retweets | 113 |
| Short tweets | 10 |
| Tweets kept | 352 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3lbnyiin/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mrikasper's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1y754vcz) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1y754vcz/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/mrikasper')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/the_dealersh1p | 69dea5bd8e15b623ded08201a67ccf95b97ff718 | 2022-06-03T06:04:47.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/the_dealersh1p | 0 | null | transformers | 37,878 | ---
language: en
thumbnail: http://www.huggingtweets.com/the_dealersh1p/1654236282143/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1211158441504456704/dCNSnY4k_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">γ γγdanγγ γ</div>
<div style="text-align: center; font-size: 14px;">@the_dealersh1p</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.
![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from 〝dan〟.
| Data | 〝dan〟 |
| --- | --- |
| Tweets downloaded | 2645 |
| Retweets | 1314 |
| Short tweets | 234 |
| Tweets kept | 1097 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2ps3bmmz/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @the_dealersh1p's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/lh2yijs1) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/lh2yijs1/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/the_dealersh1p')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/marazack26 | 3bde19e77d4eb2bb6794c76b00d2b1fb1239e61d | 2022-06-02T22:56:49.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/marazack26 | 0 | null | transformers | 37,879 | ---
language: en
thumbnail: http://www.huggingtweets.com/marazack26/1654210546142/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1239803946643927041/AHuDYsfL_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Mohammed Abd Al-Razack / Ω
ΨΩ
Ψ― ΨΉΨ¨Ψ― Ψ§ΩΨ±Ψ²Ψ§Ω</div>
<div style="text-align: center; font-size: 14px;">@marazack26</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.
![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Mohammed Abd Al-Razack / محمد عبد الرزاق.
| Data | Mohammed Abd Al-Razack / محمد عبد الرزاق |
| --- | --- |
| Tweets downloaded | 3060 |
| Retweets | 1619 |
| Short tweets | 167 |
| Tweets kept | 1274 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/264mzr04/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @marazack26's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3p7448r6) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3p7448r6/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/marazack26')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/joanacspinto | d2a93f6890c1ee8fc629654399d8cfa68c81098b | 2022-06-02T23:02:58.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/joanacspinto | 0 | null | transformers | 37,880 | ---
language: en
thumbnail: http://www.huggingtweets.com/joanacspinto/1654210973627/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1274320386881183747/NJTJm38e_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Dr Joana Pinto</div>
<div style="text-align: center; font-size: 14px;">@joanacspinto</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.
![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Dr Joana Pinto.
| Data | Dr Joana Pinto |
| --- | --- |
| Tweets downloaded | 177 |
| Retweets | 50 |
| Short tweets | 7 |
| Tweets kept | 120 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/11osdq6n/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @joanacspinto's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/18w57bkp) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/18w57bkp/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/joanacspinto')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
|
meetyildiz/TurQA-xlm-roberta-base-finetuned-toqad | f7a716dd4d6fca32b8be1f40ee2a6392ee0f2b6d | 2022-06-02T23:17:15.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | meetyildiz | null | meetyildiz/TurQA-xlm-roberta-base-finetuned-toqad | 0 | null | transformers | 37,881 | Entry not found |
victorlee071200/distilroberta-base-finetuned-squad | 2bb429422b5af8cafcbbcfddd13e9cde793b27a8 | 2022-06-09T04:57:17.000Z | [
"pytorch",
"roberta",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | victorlee071200 | null | victorlee071200/distilroberta-base-finetuned-squad | 0 | null | transformers | 37,882 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilroberta-base-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-squad
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0014
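For readers who prefer not to use the pipeline wrapper, extractive QA can also be run manually; a sketch (question and context are made up for illustration):
```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

model_id = "victorlee071200/distilroberta-base-finetuned-squad"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForQuestionAnswering.from_pretrained(model_id)

question = "What does precipitation form from?"  # illustrative SQuAD-style pair
context = "Precipitation forms as smaller droplets coalesce via collision with other raindrops or ice crystals within a cloud."

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Decode the span between the most likely start and end tokens
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))
```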
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.0927 | 1.0 | 5536 | 1.0290 |
| 0.87 | 2.0 | 11072 | 0.9683 |
| 0.7335 | 3.0 | 16608 | 1.0014 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
meetyildiz/TurQA-convbert-base-turkish-cased-finetuned-toqad-aug | fa0eb23c0579d7c62abddef1ad4d61e23808a560 | 2022-06-02T23:32:46.000Z | [
"pytorch",
"convbert",
"feature-extraction",
"transformers"
] | feature-extraction | false | meetyildiz | null | meetyildiz/TurQA-convbert-base-turkish-cased-finetuned-toqad-aug | 0 | null | transformers | 37,883 | Entry not found |
meetyildiz/TurQA-bert-base-turkish-cased-finetuned-toqad-aug | fc3a4258083cc90b852099fdda8118b088a3a991 | 2022-06-02T23:39:02.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | meetyildiz | null | meetyildiz/TurQA-bert-base-turkish-cased-finetuned-toqad-aug | 0 | null | transformers | 37,884 | Entry not found |
meetyildiz/TurQA-electra-base-turkish-cased-discriminator-finetuned-toqad-aug | 150ba04e04cdb5d4dd7691108149a3300aeec183 | 2022-06-02T23:45:22.000Z | [
"pytorch",
"electra",
"feature-extraction",
"transformers"
] | feature-extraction | false | meetyildiz | null | meetyildiz/TurQA-electra-base-turkish-cased-discriminator-finetuned-toqad-aug | 0 | null | transformers | 37,885 | Entry not found |
meetyildiz/TurQA-xlm-roberta-base-finetuned-toqad-aug | 70d357ccc95c8973f030436b26626131440199fd | 2022-06-02T23:52:05.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | meetyildiz | null | meetyildiz/TurQA-xlm-roberta-base-finetuned-toqad-aug | 0 | null | transformers | 37,886 | Entry not found |
meetyildiz/TurQA-bert-base-turkish-128k-cased-finetuned-toqad-aug | f8a663906bb2f6214702a37e4b8fd9f351c201e8 | 2022-06-03T00:07:17.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | meetyildiz | null | meetyildiz/TurQA-bert-base-turkish-128k-cased-finetuned-toqad-aug | 0 | null | transformers | 37,887 | Entry not found |
johnny9604/pbl_electra | 46c53265b2bedb70fd0a6bc4582586b853ad9cfa | 2022-06-03T07:56:44.000Z | [
"pytorch",
"electra",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | johnny9604 | null | johnny9604/pbl_electra | 0 | null | transformers | 37,888 | Entry not found |
sriiikar/wav2vec2-hindi-bhoj-3 | c8df3bfe3bb687852e6c444f5cb35a0ab36a8df5 | 2022-06-03T07:11:46.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | sriiikar | null | sriiikar/wav2vec2-hindi-bhoj-3 | 0 | null | transformers | 37,889 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-hindi-bhoj-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-hindi-bhoj-3
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.7033
- Wer: 1.1477
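No usage snippet is provided; assuming the repository ships a processor alongside the model, greedy CTC decoding would look roughly like this (the audio path is a placeholder):
```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("sriiikar/wav2vec2-hindi-bhoj-3")
model = Wav2Vec2ForCTC.from_pretrained("sriiikar/wav2vec2-hindi-bhoj-3")

# "sample.wav" is a placeholder; wav2vec2 expects 16 kHz mono audio
speech, _ = librosa.load("sample.wav", sr=16000)
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: take the argmax at each frame, then collapse
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```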
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 8.6136 | 6.45 | 400 | 3.6017 | 1.0 |
| 2.6692 | 12.9 | 800 | 4.5408 | 1.0872 |
| 0.5639 | 19.35 | 1200 | 5.2302 | 1.2282 |
| 0.2296 | 25.8 | 1600 | 5.3323 | 1.0872 |
| 0.1496 | 32.26 | 2000 | 5.7219 | 1.1342 |
| 0.1098 | 38.7 | 2400 | 5.7033 | 1.1477 |
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.2.3.dev0
- Tokenizers 0.12.1
|
wogkr810/mnm | d82275332a586d98f9b729f266c8871f757f36d1 | 2022-06-05T05:03:24.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | wogkr810 | null | wogkr810/mnm | 0 | null | transformers | 37,890 | # Model
---
## Fine-tuned using the Reader SOTA model from the [KLUE MRC competition](https://github.com/boostcampaitech3/level2-mrc-level2-nlp-09).
- Dataset: preprocessing and augmentation applied to a dataset built on Hugging Face
- [Huggingface: MRC Reader SOTA model](https://huggingface.co/Nonegom/roberta_finetune_twice)
- [Github Issue: description of the MRC Reader SOTA model](https://github.com/boostcampaitech3/level2-mrc-level2-nlp-09/issues/38)
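The card shows no inference code; since the underlying checkpoint is a RoBERTa reader for Korean MRC, the standard question-answering pipeline should apply (inputs below are made up for illustration):
```python
from transformers import pipeline

reader = pipeline("question-answering", model="wogkr810/mnm")

# Illustrative KLUE-MRC-style inputs
print(reader(question="대한민국의 수도는 어디인가?", context="대한민국의 수도는 서울이다."))
```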
|
huggingtweets/mundodeportivo | c2f817a2872f38e094190f0f14b0413e01662823 | 2022-06-03T09:09:48.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/mundodeportivo | 0 | null | transformers | 37,891 | ---
language: en
thumbnail: http://www.huggingtweets.com/mundodeportivo/1654247301367/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1277369340275437570/R-AXlYNT_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Mundo Deportivo</div>
<div style="text-align: center; font-size: 14px;">@mundodeportivo</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.
![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Mundo Deportivo.
| Data | Mundo Deportivo |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 195 |
| Short tweets | 26 |
| Tweets kept | 3029 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/17m7lnrt/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mundodeportivo's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2mndpk3u) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2mndpk3u/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/mundodeportivo')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
|
kimcando/ko-paraKQC-demo2 | 1fd0fdeb8c0094b40a312488cdaa5d53befdcc22 | 2022-06-03T09:28:17.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | kimcando | null | kimcando/ko-paraKQC-demo2 | 0 | null | sentence-transformers | 37,892 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# kimcando/ko-paraKQC-demo2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('kimcando/ko-paraKQC-demo2')
embeddings = model.encode(sentences)
print(embeddings)
```
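Because the model targets sentence similarity, a common follow-up is scoring a pair of sentences with cosine similarity; a sketch using the library's `util` helpers (the Korean sentences are illustrative):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('kimcando/ko-paraKQC-demo2')

# Illustrative paraphrase pair
emb = model.encode(["오늘 날씨 어때?", "오늘 날씨 알려줘"], convert_to_tensor=True)
print(util.cos_sim(emb[0], emb[1]))
```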
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('kimcando/ko-paraKQC-demo2')
model = AutoModel.from_pretrained('kimcando/ko-paraKQC-demo2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=kimcando/ko-paraKQC-demo2)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 475 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 4,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 190,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
kleinay/qanom-seq2seq-model-order-invariant | 6cd693fb61c167062b61ff2bf5dd125619b2e706 | 2022-06-03T09:50:24.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | kleinay | null | kleinay/qanom-seq2seq-model-order-invariant | 0 | null | transformers | 37,893 | Entry not found |
jppaolim/v47_Move2PT | 648f9d0ed4c3d7d8fdc7a98dab1123adbd3f92fc | 2022-06-03T12:23:52.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | jppaolim | null | jppaolim/v47_Move2PT | 0 | null | transformers | 37,894 | # My Story model
{'top_p': 0.9, 'top_k': 50, 'temperature': 1, 'repetition_penalty': 1}
Arthur goes to the beach. Arthur wanted to go to the beach with his friends. Arthur wasn't a big fan of the beach. He asked his friend Steve to go to the beach with him. Steve brought a box of chips with Arthur's and him. Arthur and Steve had a great time at the beach.
Arthur goes to the beach. Arthur loved to go to the beach. One day, his family decided to go to the beach. Arthur decided he would go with his family and swim. Arthur loved swimming so much that he asked his mother if he could go. His mother told him that she already had a place to stay.
Arthur goes to the beach. Arthur was at the beach with friends. Arthur wanted to go to the beach. Arthur's friends thought that the beach was a bad place to go. However, Arthur decided to take his friends to the beach. The boys went to the beach.
Arthur goes to the beach. Arthur was on a boat with his family. Arthur wanted to swim but he was afraid it would be stormy. The weatherman thought he may need to take a dip in the sand. Arthur reluctantly took a dip in the shallow. Arthur was glad to get back on the boat after all.
Arthur goes to the beach. Arthur is out fishing. While fishing with his dad at the beach he hits a pothole. Arthur feels a painful pain. His dad picks him up and takes him to the doctor. Arthur is diagnosed with a broken leg.
{'top_p': 0.9, 'top_k': 50, 'temperature': 1, 'repetition_penalty': 1.05}
Arthur goes to the beach. Arthur had never gone to the beach before. He went to the beach. While there he swam a few laps. He then went home and forgot about all of his work. Arthur felt better after that.
Arthur goes to the beach. Arthur is out in the ocean with his friend. Arthur is walking along the shore when he sees something bad. Arthur starts to swim as fast as he can. He begins to feel bad and runs into the water. Arthur goes and rests on the shore in pain.
Arthur goes to the beach. Arthur was having a great day on the beach. He was having so much fun! He wanted to go to the beach. Arthur got in his car and drove to the beach. Arthur was happy he had spent the day on the beach.
Arthur goes to the beach. Arthur wanted to see the ocean. He didn't have a boat. Arthur decided to take a boat to the ocean. Arthur went on the water with his boat. Arthur and his friends went on the water.
Arthur goes to the beach. Arthur had never been to the beach before. He had never been to one before so he went with his friends. Arthur and his friends met up at a local hotel. Arthur got a towel and started walking toward the beach. Arthur was happy that he had finally visited the beach.
{'top_p': 0.9, 'top_k': 40, 'temperature': 0.8, 'repetition_penalty': 1.1}
Arthur goes to the beach. Arthur is out on a relaxing day at the beach with his family. Arthur is enjoying the relaxing sun. Arthur spots an injured whale. Arthur takes the whale to the shore. Arthur and his family go home and relax.
Arthur goes to the beach. Arthur is going to the beach. He wants to go swimming. Arthur gets a towel. Arthur goes swimming. Arthur swims the whole day.
Arthur goes to the beach. Arthur wanted to go swimming with his friends. They decided to go to the beach. Arthur's friends told him to go swimming. Arthur went swimming but fell and skinned his face! He had a terrible day at the beach!
Arthur goes to the beach. Arthur had never been on a beach before. He decided to take a chance and decided to go to the beach. Arthur went to the beach and saw many beautiful waves. He had a great time at the beach. Arthur decided that he would try another beach day.
Arthur goes to the beach. Arthur was swimming with his friends. He wanted to go swimming but had no money. He decided to get some food from the beach. Arthur brought his friend along with him. Arthur spent all of his time in the ocean.
{'top_p': 0.9, 'top_k': 40, 'temperature': 0.6, 'repetition_penalty': 1.15}
Arthur goes to the beach. Arthur wanted to go on a vacation. He decided to go to the beach. At first he was very nervous. However, he enjoyed the water and the ocean. Arthur was glad he went on such a relaxing vacation.
Arthur goes to the beach. Arthur was at the beach with his friends. He decided he wanted a fun day on the water. Arthur decided to go for a swim in the ocean. Arthur swam in the ocean for an hour. Arthur had a great time at the beach with his friends.
Arthur goes to the beach. Arthur was out on a boat with his friends. They were on their way to the beach. Arthur saw that there was a huge jellyfish in the water. He swam to save it and catch it. His friends loved to eat jellyfish!
Arthur goes to the beach. Arthur was going on a long trip to the beach. He had never been before. He decided to go with his friends. Arthur and his friends went to the beach. Arthur was happy to have a new adventure.
Arthur goes to the beach. Arthur was going on a vacation. He went out to the water. Arthur was swimming and relaxing in the ocean. Arthur felt really cool. Arthur decided he would go back to the beach.
{'top_p': 0.9, 'top_k': 40, 'temperature': 0.4, 'repetition_penalty': 1.2}
Arthur goes to the beach. Arthur is going to go to the beach. He has a lot of fun at the beach. He gets in his car and drives to the beach. He spends a few hours at the beach. Arthur is happy he went to the beach.
Arthur goes to the beach. Arthur was going on a vacation with his family. He decided he wanted to go to the beach. Arthur and his family packed up their bags and headed out. Arthur took a long walk down the beach. Afterwards, Arthur and his family went for a swim in the ocean.
Arthur goes to the beach. Arthur is a young boy who loves going on vacation. He decides he wants to go on a trip to the ocean. Arthur takes his friend Tom with him to the beach. Tom and Arthur spend the whole day swimming in the water. Arthur is happy that he went on a vacation to the ocean.
Arthur goes to the beach. Arthur is out on a boat. He has been fishing for hours. Arthur feels tired and cannot sleep. He decides to go to the beach. Arthur enjoys his day at the beach.
Arthur goes to the beach. Arthur was on a trip with his family. He decided to go to the beach. Arthur was very excited. Arthur went out and bought a surfboard. Arthur was happy he went to the beach.
|
roshnir/xlmr-base-trained-squadv2 | aafdfd9407133b0685d111db39ed3bcaa38651d9 | 2022-06-03T13:20:24.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | roshnir | null | roshnir/xlmr-base-trained-squadv2 | 0 | null | transformers | 37,895 | Entry not found |
huggingtweets/washirerpadvice | 0a492396b809704404760775ae978e910b1143e4 | 2022-06-03T13:29:32.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/washirerpadvice | 0 | null | transformers | 37,896 | ---
language: en
thumbnail: http://www.huggingtweets.com/washirerpadvice/1654262967962/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1381256890542387204/zaT8DfFD_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Washire RP Tips</div>
<div style="text-align: center; font-size: 14px;">@washirerpadvice</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.
![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Washire RP Tips.
| Data | Washire RP Tips |
| --- | --- |
| Tweets downloaded | 243 |
| Retweets | 4 |
| Short tweets | 5 |
| Tweets kept | 234 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/gq82nlvl/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @washirerpadvice's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/325ay6n9) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/325ay6n9/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/washirerpadvice')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
|
kaouther/distilbert-base-uncased-finetuned-squad | c4bbeb925cad3f85b53cac57ca7f303800a6dece | 2022-06-03T15:29:20.000Z | [
"pytorch",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | kaouther | null | kaouther/distilbert-base-uncased-finetuned-squad | 0 | null | transformers | 37,897 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1703
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2166 | 1.0 | 5533 | 1.1583 |
| 0.9572 | 2.0 | 11066 | 1.1387 |
| 0.7377 | 3.0 | 16599 | 1.1703 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
huggingtweets/calamitiddy | 42f185a6caef9ceca6e074de16526963cf3882f3 | 2022-06-03T14:07:14.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/calamitiddy | 0 | null | transformers | 37,898 | ---
language: en
thumbnail: http://www.huggingtweets.com/calamitiddy/1654265229643/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1532055379688841216/qJTjpsoB_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">lauren rhiannon (nail cleanup duty)</div>
<div style="text-align: center; font-size: 14px;">@calamitiddy</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.
![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from lauren rhiannon (nail cleanup duty).
| Data | lauren rhiannon (nail cleanup duty) |
| --- | --- |
| Tweets downloaded | 3204 |
| Retweets | 1654 |
| Short tweets | 164 |
| Tweets kept | 1386 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3bl5mrs4/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @calamitiddy's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/fnxzf4e2) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/fnxzf4e2/artifacts) is logged and versioned.
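A minimal, self-contained sketch of the fine-tuning step, assuming a `Trainer`-based setup; the placeholder corpus and `output_dir` are illustrative, and the project's actual script (tweet download, cleaning, W&B logging) lives in the repository linked above:
```python
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Placeholder corpus standing in for the 1386 kept tweets.
corpus = {"text": ["My dream is to sleep in every day.", "Back at the beach again."]}

def tokenize(batch):
    enc = tokenizer(batch["text"], truncation=True, padding="max_length", max_length=64)
    enc["labels"] = enc["input_ids"].copy()  # causal LM: labels mirror the inputs
    return enc

dataset = Dataset.from_dict(corpus).map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-calamitiddy", num_train_epochs=1),
    train_dataset=dataset,
)
trainer.train()
```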
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline

# Load the fine-tuned checkpoint into a text-generation pipeline.
generator = pipeline('text-generation', model='huggingtweets/calamitiddy')

# Return five sampled continuations of the same prompt.
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
|
jppaolim/v49Neo | 4cf262ad19d8aac85a4194494dfde7615d24d245 | 2022-06-03T16:34:46.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | jppaolim | null | jppaolim/v49Neo | 0 | null | transformers | 37,899 | # My Story model
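Each `{...}` line below records the sampling settings used for the five stories that follow it. A hedged sketch of passing one such configuration to `generate()` (the prompt and `max_new_tokens` are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("jppaolim/v49Neo")
model = AutoModelForCausalLM.from_pretrained("jppaolim/v49Neo")

inputs = tokenizer("Arthur goes to the beach.", return_tensors="pt")
output = model.generate(
    **inputs,
    do_sample=True,          # sampling must be on for top_p/top_k/temperature to apply
    top_p=0.9,
    top_k=50,
    temperature=1.0,
    repetition_penalty=1.0,
    max_new_tokens=80,       # illustrative length cap
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```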
{'top_p': 0.9, 'top_k': 50, 'temperature': 1, 'repetition_penalty': 1}
Arthur goes to the beach. Arthur was bored today. He took a vacation to the beach. The beach was very crowded. Arthur finally enjoyed the beach for the beach. He had so much fun he decided to take his vacation there.
Arthur goes to the beach. Arthur was walking down the street one day and heard a loud boom. A huge shark had been spotted and was heading towards him! He ran to the beach and immediately jumped in the water. He swam to shore with his surfboard and his surf trunks. After five minutes of not paying attention, he got out of the water.
Arthur goes to the beach. Arthur always loved going to the beach. His favorite thing to do in the morning was go to the beach. He decided he wanted to go to the beach, not too long. Arthur packed up his backpack and headed towards the beach. He started to enjoy himself as he was going to the beach, he loved it.
Arthur goes to the beach. Arthur had always loved going to the beach. His friend told him to take the bus. Arthur forgot to bring his wallet. He was disappointed to see that his friend was gone. Arthur decided to leave the beach without taking the bus.
Arthur goes to the beach. Arthur wanted to visit the beach but his parents didn't take him. His parents thought that his parents should take him. They bought him a beach chair and took him to the beach. He had a great time, but the beach wasn't too bad. Arthur was very disappointed to see no sand at all!
{'top_p': 0.9, 'top_k': 50, 'temperature': 1, 'repetition_penalty': 1.05}
Arthur goes to the beach. Arthur is in the ocean. He swims for an hour. He feels great. He goes home. He goes swimming again.
Arthur goes to the beach. Arthur was on vacation with his family He had a very nice day at the beach. As he was driving to the beach he saw a beautiful view. He quickly started to relax as he got closer to the beach. It turned out that he was sitting down at the beach by his family.
Arthur goes to the beach. Arthur is always very worried about it. He has always been afraid of going to the beach. One day he has no idea what's going to happen. He decides to take a trip. He cannot believe he is going to the beach.
Arthur goes to the beach. Arthur wanted to learn how to surf. So he took out his surf equipment. He put his surf equipment on. He set his surfboard up and put it on the beach. Arthur had a great time surfing!
Arthur goes to the beach. Arthur loved the outdoors. He wanted to go in the water. He was very bored one day. Arthur was going to the beach. He spent the whole day swimming and sunbathing.
{'top_p': 0.9, 'top_k': 40, 'temperature': 0.8, 'repetition_penalty': 1.1}
Arthur goes to the beach. Arthur was going to the beach. He went to the beach and swam. He went to the beach and swam in the water. He fell in the water and was wet. Arthur never went to the beach again.
Arthur goes to the beach. Arthur is bored. He heads to the beach. Arthur sits down on the sand. He runs to the beach. Arthur swam in the water.
Arthur goes to the beach. Arthur was on vacation. He decided to go to the beach. He went to the beach and played on the sand. He felt very hot and cold. Arthur spent the entire day at the beach.
Arthur goes to the beach. Arthur was very excited to go to the beach with his friends. His friends were already at the beach. He was going to be at the beach on his birthday. He got all his friends together and had a great time. He was glad he had a great time and decided to go home.
Arthur goes to the beach. Arthur was walking down the street. He was heading to the beach. He was going to swim with his friends. They were going to take him to the water. Arthur had a great time swimming.
{'top_p': 0.9, 'top_k': 40, 'temperature': 0.6, 'repetition_penalty': 1.15}
Arthur goes to the beach. Arthur is very bored. He spends all day sitting on the sand. He decides to go to the beach. He spends all day swimming. Arthur is happy he went to the beach.
Arthur goes to the beach. Arthur is walking down the street. He sees a big wave. He runs to the side of the road. He trips and falls in the water. Arthur is shaken up by the wave.
Arthur goes to the beach. Arthur is on his way to the beach. He has never been in the beach before. He decides to go for a walk. While walking he falls in the water. Arthur is soaked and had to go home.
Arthur goes to the beach. Arthur was a little boy. He loved to surf, but he didn't know how to swim. His mom took him to the beach. He swam in the water and got very cold. Arthur spent all day in the sand and had a good time.
Arthur goes to the beach. Arthur was very bored. He was in his car. He drove to the beach. He went to the beach. He went to the beach and played with the beach.
{'top_p': 0.9, 'top_k': 40, 'temperature': 0.4, 'repetition_penalty': 1.2}
Arthur goes to the beach. Arthur was going to the beach with his family. He was going to take a nice walk on the sand. He was going to take a nice long stroll. He saw a huge wave and decided to go for it. He had a great time on the beach.
Arthur goes to the beach. Arthur is going to the beach with his friends. He is going to take a few hours to get there. He is going to go to the beach and surf. He is going to surf for the first time. He is excited to go to the beach and surf.
Arthur goes to the beach. Arthur is going to the beach. He is going to swim in the water. He is going to go for a quick walk. Arthur is not able to walk. Arthur is late for his appointment.
Arthur goes to the beach. Arthur is going to the beach. He is going to go swimming. Arthur is going to go swimming with his friends. He is going to swim with his friends. Arthur is very excited for the beach trip.
Arthur goes to the beach. Arthur is a very good swimmer. He has always been very careful with his swimming. One day, he decides to go to the beach. While at the beach, he swam for hours. Finally, he was able to get to the beach safely.
|