modelId (string, 4-112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (sequence) | pipeline_tag (string, 29 classes) | private (bool) | author (string, 2-38 chars, nullable) | config (null) | id (string, 4-112 chars) | downloads (float64, 0-36.8M, nullable) | likes (float64, 0-712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0-38.5k) | readme (string, 0-186k chars)
---|---|---|---|---|---|---|---|---|---|---|---|---|---
roshnir/mBert-finetuned-mlqa-dev-de-hi | fd81cc3699ac87b51efb2c34b3f489b1497657e6 | 2022-06-03T18:14:09.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | roshnir | null | roshnir/mBert-finetuned-mlqa-dev-de-hi | 0 | null | transformers | 37,900 | Entry not found |
roshnir/mBert-finetuned-mlqa-dev-es-hi | 5cf515ebee35b6111f3416da05259537374051dc | 2022-06-03T19:21:01.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | roshnir | null | roshnir/mBert-finetuned-mlqa-dev-es-hi | 0 | null | transformers | 37,901 | Entry not found |
jgriffi/xlm-roberta-base-finetuned-panx-de | 01bc1bcbae5017160ce7489c7e59f6d1d3bd9b83 | 2022-06-03T22:39:59.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | jgriffi | null | jgriffi/xlm-roberta-base-finetuned-panx-de | 0 | null | transformers | 37,902 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8646153846153846
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1496
- F1: 0.8646
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
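With `lr_scheduler_type: linear` and no warmup listed, the learning rate presumably decays linearly from 5e-05 to 0 over training; a small sketch of that schedule (the total step count of 3147 is taken from the results table below — an assumption, not something stated in the hyperparameters):

```python
def linear_lr(step, base_lr=5e-05, total_steps=3147):
    # Linear decay: base_lr at step 0, reaching 0 at the final step.
    return base_lr * (1 - step / total_steps)

print(linear_lr(0))     # starts at the base learning rate
print(linear_lr(3147))  # fully decayed by the last step
```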
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2461 | 1.0 | 1049 | 0.1710 | 0.8351 |
| 0.1314 | 2.0 | 2098 | 0.1470 | 0.8439 |
| 0.0794 | 3.0 | 3147 | 0.1496 | 0.8646 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
huggingtweets/ww_bokudyo | 781c6a93e0e5954cab1360d19083d0010d41d1c5 | 2022-06-04T01:05:21.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/ww_bokudyo | 0 | null | transformers | 37,903 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1527089805955301377/vNsxxIZ5_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">✨wuwu🌟</div>
<div style="text-align: center; font-size: 14px;">@ww_bokudyo</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.
![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from ✨wuwu🌟.
| Data | ✨wuwu🌟 |
| --- | --- |
| Tweets downloaded | 785 |
| Retweets | 172 |
| Short tweets | 274 |
| Tweets kept | 339 |
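The "Tweets kept" row follows from the counts above: downloaded tweets minus retweets minus short tweets. A quick check:

```python
downloaded, retweets, short_tweets = 785, 172, 274
kept = downloaded - retweets - short_tweets
print(kept)  # 339
```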
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1hf6kghs/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ww_bokudyo's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3hbh0tk2) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3hbh0tk2/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ww_bokudyo')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
|
mesolitica/pretrained-wav2vec2-base-mixed | e9bf26e60b4e609c2a670ada3a7316874842d620 | 2022-06-05T18:52:05.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"pretraining",
"transformers",
"generated_from_keras_callback",
"model-index"
] | null | false | mesolitica | null | mesolitica/pretrained-wav2vec2-base-mixed | 0 | null | transformers | 37,904 | ---
tags:
- generated_from_keras_callback
model-index:
- name: pretrained-wav2vec2-base-mixed
results: []
---
# pretrained-wav2vec2-base-mixed
Pretrained Wav2Vec2 BASE-size model on https://github.com/huseinzol05/malaya-speech/tree/master/data/mixed-stt; Tensorboard files are also included in this repository.
This model was pretrained on 3 languages:
1. Malay
2. Singlish
3. Mandarin
**This model was trained on a single RTX 3090 Ti with 24GB of VRAM, provided by https://mesolitica.com/**. |
roshnir/xlmr-finetuned-mlqa-dev-en | 22ae6e0eace67d52824b73ac03f03177cae1900a | 2022-06-04T08:04:00.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | roshnir | null | roshnir/xlmr-finetuned-mlqa-dev-en | 0 | null | transformers | 37,905 | Entry not found |
mezes/my_awsome_model | 9c8142cc1654896becdb123d109ed70ddea316fb | 2022-06-04T12:07:59.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | mezes | null | mezes/my_awsome_model | 0 | null | transformers | 37,906 | Entry not found |
mezes/my_awsome_model_epoch_3 | 5b20a17dad50a38e916bd8f5aee1657738bb5992 | 2022-06-04T11:14:15.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | mezes | null | mezes/my_awsome_model_epoch_3 | 0 | null | transformers | 37,907 | Entry not found |
jmilic/roberta-baseline-3 | 02d6abda8454e05cff09c4caeace0df30888a150 | 2022-06-04T16:26:29.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | jmilic | null | jmilic/roberta-baseline-3 | 0 | null | transformers | 37,908 | Entry not found |
kimcando/sbert-kornli-knoSTS-trained | 10370e399b6eed79b907a17f3f7814d76715593d | 2022-06-05T04:19:20.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | kimcando | null | kimcando/sbert-kornli-knoSTS-trained | 0 | null | sentence-transformers | 37,909 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# kimcando/sbert-kornli-knoSTS-trained
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('kimcando/sbert-kornli-knoSTS-trained')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('kimcando/sbert-kornli-knoSTS-trained')
model = AutoModel.from_pretrained('kimcando/sbert-kornli-knoSTS-trained')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
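The masked mean in `mean_pooling` above can be sanity-checked on a tiny example without torch (the values are illustrative):

```python
# Two tokens of dimension 2; the second token is padding (mask = 0).
token_embeddings = [[1.0, 2.0], [3.0, 4.0]]
attention_mask = [1, 0]

dims = len(token_embeddings[0])
# Sum only the unmasked token embeddings, dimension by dimension.
summed = [sum(tok[d] * m for tok, m in zip(token_embeddings, attention_mask)) for d in range(dims)]
count = max(sum(attention_mask), 1e-9)  # clamp, mirroring torch.clamp(min=1e-9)
mean = [s / count for s in summed]
print(mean)  # only the unmasked token contributes: [1.0, 2.0]
```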
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=kimcando/sbert-kornli-knoSTS-trained)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 180 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
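`CosineSimilarityLoss` fits the cosine similarity of the two sentence embeddings to the gold similarity score; cosine similarity itself is just a normalized dot product, sketched here in pure Python for illustration:

```python
import math

def cosine_similarity(u, v):
    # dot(u, v) / (|u| * |v|)
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # identical direction
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # orthogonal
```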
Parameters of the fit()-Method:
```
{
"epochs": 4,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 72,
"weight_decay": 0.01
}
```
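The `warmup_steps` value above is consistent with the common sentence-transformers convention of warming up for 10% of total training steps (a DataLoader of length 180 over 4 epochs); this is an assumption, but the arithmetic checks out:

```python
steps_per_epoch = 180  # DataLoader length
epochs = 4
warmup_steps = (steps_per_epoch * epochs) // 10  # 10% of total steps
print(warmup_steps)  # 72
```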
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
sriiikar/wav2vec2-hbtest-2 | 4565886b93e2be560f0d2f2e987822ffd4ac8159 | 2022-06-05T12:50:35.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | sriiikar | null | sriiikar/wav2vec2-hbtest-2 | 0 | null | transformers | 37,910 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-hbtest-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-hbtest-2
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.9927
- Wer: 1.1562
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 40
- mixed_precision_training: Native AMP
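The `total_train_batch_size` above is the per-device batch size multiplied by the gradient accumulation steps; a quick check:

```python
train_batch_size = 16
gradient_accumulation_steps = 2
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 32
```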
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.6105 | 6.41 | 1000 | 4.9969 | 1.2600 |
| 0.3723 | 12.82 | 2000 | 5.1370 | 1.1185 |
| 0.1537 | 19.23 | 3000 | 5.5541 | 1.1419 |
| 0.0992 | 25.64 | 4000 | 5.9309 | 1.1269 |
| 0.0722 | 32.05 | 5000 | 5.9545 | 1.1628 |
| 0.0593 | 38.46 | 6000 | 5.9927 | 1.1562 |
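Note that the reported WER exceeds 1.0. This is possible because WER divides the number of substitutions, deletions, and insertions by the number of *reference* words, and insertion-heavy hypotheses can push the ratio past 1. A minimal word-level edit-distance sketch (illustrative — not the evaluation code used for this model):

```python
def wer(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# An insertion-heavy hypothesis yields WER > 1: 3 insertions / 2 reference words.
print(wer("hello world", "oh hello there big world"))  # 1.5
```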
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.2.3.dev0
- Tokenizers 0.12.1
|
joaomsimoes/bertpt-portuguese-portugal | 50ec5224d235cc6e1eda1c27dcfa2d441f96eae2 | 2022-06-05T07:25:51.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | joaomsimoes | null | joaomsimoes/bertpt-portuguese-portugal | 0 | null | transformers | 37,911 | # BERTpt
Pretrained model on Portuguese (Portugal) language using a masked language modeling (MLM) objective. [Notebook](https://colab.research.google.com/drive/1OaSDl7oVrbg2tYrT24xWPWxAyKmu4cNp?usp=sharing)
## Training data
Scraped data from different Portuguese websites, blogs and news channels. Around 2 GB of data.
## Limitations and Bias
```
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='BERTpt')
>>> unmasker("2020 foi um ano [MASK].")
[{'sequence': '[CLS] 2020 foi um ano dificil. [SEP]',
'score': 0.146935 ,
'token': 7591,
'token_str': 'dificil'},
{'sequence': '[CLS] 2020 foi um ano historico. [SEP]',
'score': 0.101181,
'token': 9902,
'token_str': 'historico'},
{'sequence': '[CLS] 2020 foi um ano terrivel. [SEP]',
'score': 0.080123,
'token': 19675,
'token_str': 'terrivel'},
{'sequence': '[CLS] 2020 foi um ano especial. [SEP]',
'score': 0.034216,
'token': 6835,
'token_str': 'especial'},
{'sequence': '[CLS] 2020 foi um ano complicado. [SEP]',
'score': 0.028791,
'token': 12082,
'token_str': 'complicado'}]
>>> unmasker("O FCPorto é melhor que o [MASK].")
[{'sequence': '[CLS] O FCPorto é melhor que o benfica. [SEP]',
'score': 0.608609,
'token': 7709,
'token_str': 'benfica'},
{'sequence': '[CLS] O FCPorto é melhor que o sporting. [SEP]',
'score': 0.188474,
'token': 7935,
'token_str': 'sporting'},
{'sequence': '[CLS] O FCPorto é melhor que o atletico. [SEP]',
'score': 0.023601,
'token': 16116,
'token_str': 'atletico'},
{'sequence': '[CLS] O FCPorto é melhor que o boavista. [SEP]',
'score': 0.010015,
'token': 16116,
'token_str': 'boavista'},
{'sequence': '[CLS] O FCPorto é melhor que o barcelona. [SEP]',
'score': 0.009242,
'token': 10609,
'token_str': 'barcelona'}]
>>> unmasker("[MASK] é uma boa linguagem de programacao")
[{'sequence': '[CLS] python é uma boa linguagem de programacao [SEP]',
'score': 0.155832,
'token': 27384,
'token_str': 'python'},
{'sequence': '[CLS] java é uma boa linguagem de programacao [SEP]',
'score': 0.152056,
'token': 14348,
'token_str': 'java'},
{'sequence': '[CLS] programacao é uma boa linguagem de programacao [SEP]',
'score': 0.106369,
'token': 11304,
'token_str': 'programacao'},
{'sequence': '[CLS] isto é uma boa linguagem de programacao [SEP]',
'score': 0.056731,
'token': 6267,
'token_str': 'isto'},
{'sequence': '[CLS] linguagem é uma boa linguagem de programacao [SEP]',
'score': 0.044161,
'token': 13206,
'token_str': 'linguagem'}]
>>> unmasker("Eu quero uma [MASK] melhor.")
[{'sequence': '[CLS] Eu quero uma vida melhor. [SEP]',
'score': 0.138783,
'token': 6503,
'token_str': 'vida'},
{'sequence': '[CLS] Eu quero uma experiencia melhor. [SEP]',
'score': 0.083636,
'token': 7479,
'token_str': 'experiencia'},
{'sequence': '[CLS] Eu quero uma internet melhor. [SEP]',
'score': 0.059155,
'token': 7051,
'token_str': 'internet'},
{'sequence': '[CLS] Eu quero uma coisa melhor. [SEP]',
'score': 0.059155,
'token': 6645,
'token_str': 'coisa'},
{'sequence': '[CLS] Eu quero uma plataforma melhor. [SEP]',
'score': 0.044105,
'token': 7834,
'token_str': 'plataforma'}]
``` |
olpa/xlm-roberta-base-finetuned-panx-de-fr | 33ce3b5bbd01e246070bcd0c810c7eb5315b87ed | 2022-06-05T08:20:31.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | olpa | null | olpa/xlm-roberta-base-finetuned-panx-de-fr | 0 | null | transformers | 37,912 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1643
- F1: 0.8626
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2891 | 1.0 | 715 | 0.1780 | 0.8288 |
| 0.1472 | 2.0 | 1430 | 0.1633 | 0.8488 |
| 0.0948 | 3.0 | 2145 | 0.1643 | 0.8626 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
renjithks/layoutlmv2-cord-ner | 2efb0141b85b63a0ef6cf24ca28fe15d2b63bf73 | 2022-06-05T09:29:55.000Z | [
"pytorch",
"tensorboard",
"layoutlmv2",
"token-classification",
"transformers",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | renjithks | null | renjithks/layoutlmv2-cord-ner | 0 | null | transformers | 37,913 | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: layoutlmv2-cord-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv2-cord-ner
This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0952
- Precision: 0.9639
- Recall: 0.9741
- F1: 0.9690
- Accuracy: 0.9911
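F1 above is the harmonic mean of precision and recall; a quick check against the reported values:

```python
precision, recall = 0.9639, 0.9741
f1 = 2 * precision * recall / (precision + recall)
print(f1)  # matches the reported F1 of 0.9690
```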
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 113 | 0.5962 | 0.8714 | 0.8973 | 0.8842 | 0.9405 |
| No log | 2.0 | 226 | 0.4064 | 0.8713 | 0.9098 | 0.8901 | 0.9511 |
| No log | 3.0 | 339 | 0.2687 | 0.9314 | 0.9386 | 0.9350 | 0.9754 |
| No log | 4.0 | 452 | 0.2007 | 0.9355 | 0.9472 | 0.9413 | 0.9792 |
| 0.4677 | 5.0 | 565 | 0.1625 | 0.9497 | 0.9597 | 0.9547 | 0.9834 |
| 0.4677 | 6.0 | 678 | 0.1326 | 0.9526 | 0.9645 | 0.9585 | 0.9868 |
| 0.4677 | 7.0 | 791 | 0.1212 | 0.9508 | 0.9645 | 0.9576 | 0.9851 |
| 0.4677 | 8.0 | 904 | 0.1019 | 0.9675 | 0.9712 | 0.9693 | 0.9911 |
| 0.1131 | 9.0 | 1017 | 0.1029 | 0.9545 | 0.9664 | 0.9604 | 0.9881 |
| 0.1131 | 10.0 | 1130 | 0.0952 | 0.9639 | 0.9741 | 0.9690 | 0.9911 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
roshnir/xlmr-finetuned-mlqa-dev-de-hi | b3e271fbda230a42d421330142143a81f10e5d6e | 2022-06-05T12:42:56.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | roshnir | null | roshnir/xlmr-finetuned-mlqa-dev-de-hi | 0 | null | transformers | 37,914 | Entry not found |
meetyildiz/M-TurQA-bert-base-turkish-cased-finetuned-toqad | 44db7309ad4f3b3fe170d444b05de43efa3e578f | 2022-06-05T13:35:12.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | meetyildiz | null | meetyildiz/M-TurQA-bert-base-turkish-cased-finetuned-toqad | 0 | null | transformers | 37,915 | Entry not found |
meetyildiz/M-TurQA-bert-base-turkish-128k-cased-finetuned-toqad | a5774603ec477f4dff8e7775b7e97d107a3ebdbe | 2022-06-05T14:06:48.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | meetyildiz | null | meetyildiz/M-TurQA-bert-base-turkish-128k-cased-finetuned-toqad | 0 | null | transformers | 37,916 | Entry not found |
huggingtweets/philwornath | a8cad903a88ac16fc834484ccd1761e7455bc14e | 2022-06-05T14:13:21.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/philwornath | 0 | null | transformers | 37,917 | ---
language: en
thumbnail: http://www.huggingtweets.com/philwornath/1654438397344/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1496963787655716869/MJrzMo_D_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Phil Wornath 🇪🇺</div>
<div style="text-align: center; font-size: 14px;">@philwornath</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.
![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Phil Wornath 🇪🇺.
| Data | Phil Wornath 🇪🇺 |
| --- | --- |
| Tweets downloaded | 1435 |
| Retweets | 280 |
| Short tweets | 142 |
| Tweets kept | 1013 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1dbqyh6j/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @philwornath's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2f9pcn01) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2f9pcn01/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/philwornath')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
|
meetyildiz/M-TurQA-xlm-roberta-base-finetuned-toqad | 86339f7799fcf5b0a1f59410777f9e3d189c47ee | 2022-06-05T14:21:32.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | meetyildiz | null | meetyildiz/M-TurQA-xlm-roberta-base-finetuned-toqad | 0 | null | transformers | 37,918 | Entry not found |
meetyildiz/M-TurQA-convbert-base-turkish-cased-finetuned-toqad-aug | 0e6fe826965c7fd6d928a3be42ca60331c1b5170 | 2022-06-05T14:43:05.000Z | [
"pytorch",
"convbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | meetyildiz | null | meetyildiz/M-TurQA-convbert-base-turkish-cased-finetuned-toqad-aug | 0 | null | transformers | 37,919 | Entry not found |
meetyildiz/M-TurQA-electra-base-turkish-cased-discriminator-finetuned-toqad-aug | 351e862feb8510f344b32114deb7c827d4779043 | 2022-06-05T15:00:30.000Z | [
"pytorch",
"electra",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | meetyildiz | null | meetyildiz/M-TurQA-electra-base-turkish-cased-discriminator-finetuned-toqad-aug | 0 | null | transformers | 37,920 | Entry not found |
jppaolim/v54_Large_AdaMW | 7a3d17e22fee9d75e02e70df435c890e2addfd43 | 2022-06-05T19:38:49.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | jppaolim | null | jppaolim/v54_Large_AdaMW | 0 | null | transformers | 37,921 | # My Story model
{'top_p': 0.9, 'top_k': 50, 'temperature': 1, 'repetition_penalty': 1}
Arthur goes to the beach. Arthur was at the beach. His parents got him a towel for the trip. He lay down and got out of the sand. Arthur put on his towel and went to the ocean. He felt very refreshed as he surfed and swam for a bit.
Arthur goes to the beach. Arthur has always been scared to go to the beach. But his friends convinced him to go. Arthur decided to try it. He found the water to be really cold. He turned around and went back to the car.
Arthur goes to the beach. Arthur was very lonely. He decided to go to the beach. He packed his bathing suit and towel. He got ready to go to the beach. Arthur arrived at the beach and relaxed on his chair.
Arthur goes to the beach. Arthur loved to surf and was always looking for new places to surf. He decided to head to the beach with his friends. Arthur drove for hours to find the spot and found it. Arthur and his friends went in and made it their new place. Arthur and his friends spent all day playing in the sun.
Arthur goes to the beach. Arthur really wanted to go to the beach. Arthur was afraid of the cold water. Arthur called a friend for a swim meetup. Arthur met up with his friend. Arthur had a fun time at the beach at the end of the day.
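The sampling parameters listed above (`top_p`, `top_k`, `temperature`, `repetition_penalty`) control how the next token is drawn at generation time. A pure-Python sketch of the top-k/nucleus filtering step (illustrative only — not the transformers implementation; the token distribution is made up):

```python
def filter_top_k_top_p(probs, top_k=50, top_p=0.9):
    # Keep the top_k most likely tokens, then the smallest prefix whose
    # cumulative probability reaches top_p, and renormalize what remains.
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    kept, cum = [], 0.0
    for token, p in ranked:
        kept.append((token, p))
        cum += p
        if cum >= top_p:
            break
    total = sum(p for _, p in kept)
    return {token: p / total for token, p in kept}

dist = {"beach": 0.5, "park": 0.3, "store": 0.15, "moon": 0.05}
print(filter_top_k_top_p(dist))  # "moon" falls outside the 0.9 nucleus
```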
{'top_p': 0.9, 'top_k': 50, 'temperature': 1, 'repetition_penalty': 1.05}
Arthur goes to the beach. Arthur loves to swim. He decides to go swimming at the beach. Arthur gets a towel and a water bottle. He swam all afternoon. At the end of the day, he was soaked!
Arthur goes to the beach. Arthur always wanted to go to the beach. One day his friends told him he had to go. Arthur called the beach and made plans. The next morning he drove to the beach. Arthur had a great time at the beach that day!
Arthur goes to the beach. Arthur was always bored with life. He had no idea where to go on vacation. Arthur decided to go to the beach. He packed up his bag and drove to the beach. Arthur found it so much fun that he left the city.
Arthur goes to the beach. Arthur went to the beach with his friends. They decided to go swimming. Arthur thought it would be fun to jump in the water. He splashed around until the sun was shining in the sky. After the sun came up, Arthur swam out into the ocean.
Arthur goes to the beach. Arthur was feeling lonely one day. He decided to go to the beach. He packed his bag and drove to the beach. He walked to the beach and looked for many people. The people were nice and he met a new friend.
{'top_p': 0.9, 'top_k': 40, 'temperature': 0.8, 'repetition_penalty': 1.1}
Arthur goes to the beach. Arthur is going to the beach. His family tells him not to go because they have been looking forward to it. He decides to go anyway. Arthur finds the beach very relaxing. He is glad he went to the beach.
Arthur goes to the beach. Arthur had never been to the beach before. He decided to go one day. Arthur packed a bag of snacks for the trip. He made his way to the beach. When he got there, he found out it was very sunny.
Arthur goes to the beach. Arthur was having a great time at the beach with his family. He was playing in the water when he saw an angry turtle. The turtle had attacked the boat that Arthur was on. Arthur ran away as fast as he could, hoping no one would see him. But then, a huge wave crashed against the shore!
Arthur goes to the beach. Arthur is bored and decides he wants to go to the beach. He arrives at the beach and sets up his tent. He then sets up a chair and a picnic table for himself. Finally, he lays down and gets ready to go. Arthur has a great time at the beach at the end of the day!
Arthur goes to the beach. Arthur always wanted to go to the beach. His friends told him he was too old to go. Finally his parents took him out of school and took him. He drove to the beach and got his sandals and towels ready. When Arthur went to the beach, he realized it was not as bad as he thought.
{'top_p': 0.9, 'top_k': 40, 'temperature': 0.6, 'repetition_penalty': 1.15}
Arthur goes to the beach. Arthur was going to go to the beach with his friends. He packed up his things and drove to the beach. When he got there, it was very crowded. Arthur had to wait a long time to get his sandals. Finally, he finally arrived at the beach and played in the water.
Arthur goes to the beach. Arthur was very excited about going on a trip to the beach. He packed up his car and drove to the beach. When he arrived, he saw that it was very crowded. Arthur realized that he had forgotten his sunscreen! Arthur decided not to go to the beach.
Arthur goes to the beach. Arthur was out on a date with his girlfriend. They went to the beach and had fun swimming in the water. Afterwards, they walked around the beach for awhile. After walking, they saw a beautiful sunset. Finally, they left the beach and went home.
Arthur goes to the beach. Arthur was excited for his trip to the beach. He packed up his car and drove out to the beach. Once he got there, Arthur realized it was really hot outside. The air conditioning in his car was broken. Arthur decided to leave without going to the beach.
Arthur goes to the beach. Arthur wanted to go to the beach. He got his friends together and they all went to the beach. They played in the sand for a while then swam in the water. Finally, Arthur was tired but still had fun. Arthur decided he would go back next summer.
{'top_p': 0.9, 'top_k': 40, 'temperature': 0.4, 'repetition_penalty': 1.2}
Arthur goes to the beach. Arthur is feeling very bored one day. He decides he needs something to do. He heads out to the beach and finds a spot. He plays in the sand for hours. Finally, he is happy that he no longer feels bored.
Arthur goes to the beach. Arthur was going to go to the beach with his friends. He had never been before but he decided to try it. They all packed up their things and headed out. When they got there, Arthur realized that he forgot his sunscreen! Luckily, his friend brought him a bottle of water so he could use it.
Arthur goes to the beach. Arthur had always wanted to go to the beach. He saved up his money for a week and finally went on vacation. On the day of his trip, he was so excited that he forgot all about work! He spent hours at the beach and even more when he got home. Afterwards, he decided he would never forget to pay attention to work again.
Arthur goes to the beach. Arthur is feeling very tired one day. He decides he needs something to do. He calls his friend and asks him if he wants to go to the beach. His friend says yes. They spend the afternoon playing in the sand.
Arthur goes to the beach. Arthur had always wanted to go to the beach. He saved up for a few months so he could take his trip. Finally, Arthur went to the beach and spent all day playing in the water. Afterwards, he was very tired but happy that he finally got to the beach. The next morning, he decided it would be best to go back home.
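The parameter sets above (`top_p`, `top_k`, `temperature`, `repetition_penalty`) control how tokens are sampled at each generation step. As a rough, self-contained illustration of what the `top_k` and `top_p` filters do — not the model's actual implementation, and note that temperature and repetition penalty act on the logits before this step — consider a toy next-token distribution:

```python
def filter_probs(probs, top_k=50, top_p=0.9):
    """Keep the top_k most likely tokens, then trim to the smallest
    prefix whose cumulative probability reaches top_p (nucleus cut),
    and renormalize the survivors."""
    # Sort tokens by descending probability.
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    # top-k cut: drop everything past the k most likely tokens.
    ranked = ranked[:top_k]
    # top-p (nucleus) cut: stop once cumulative mass reaches top_p.
    kept, cumulative = [], 0.0
    for token, p in ranked:
        kept.append((token, p))
        cumulative += p
        if cumulative >= top_p:
            break
    # Renormalize so the kept probabilities sum to 1.
    total = sum(p for _, p in kept)
    return {token: p / total for token, p in kept}

toy = {"beach": 0.5, "park": 0.3, "store": 0.15, "moon": 0.05}
print(filter_probs(toy, top_k=3, top_p=0.9))
```

With `top_p=0.9` here, the unlikely token `moon` is dropped before sampling; a lower `temperature` (applied upstream to the logits) would further sharpen the kept distribution, which is why the lower-temperature samples above read as more conservative.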
|
panapelli/nlp-udesa-BertXNLI | 97737ce8ac580f42597824fcd783f03217b96500 | 2022-06-11T17:31:53.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | panapelli | null | panapelli/nlp-udesa-BertXNLI | 0 | null | transformers | 37,922 | Entry not found |
huggingtweets/cz_binance | a9da2b19d93ff4783c3047976274eddec0b1b485 | 2022-06-05T21:10:41.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/cz_binance | 0 | null | transformers | 37,923 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1520776623972356097/DKttTgse_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">CZ 🔶 Binance</div>
<div style="text-align: center; font-size: 14px;">@cz_binance</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.
![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from CZ 🔶 Binance.
| Data | CZ 🔶 Binance |
| --- | --- |
| Tweets downloaded | 1737 |
| Retweets | 43 |
| Short tweets | 256 |
| Tweets kept | 1438 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/23obnmq7/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @cz_binance's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2vchr3mr) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2vchr3mr/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/cz_binance')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
|
sactisudesa/test1 | 34cea3a0fe70a0ebf717494526662e6d0615f642 | 2022-06-05T21:18:31.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | sactisudesa | null | sactisudesa/test1 | 0 | null | transformers | 37,924 | Entry not found |
victorlifan/autotrain-song_title_generate-939531516 | 427842703fe20050b3356c5dabc7bb740e669872 | 2022-06-06T15:36:11.000Z | [
"pytorch",
"t5",
"text2text-generation",
"unk",
"dataset:victorlifan/autotrain-data-song_title_generate",
"transformers",
"autotrain",
"co2_eq_emissions",
"autotrain_compatible"
] | text2text-generation | false | victorlifan | null | victorlifan/autotrain-song_title_generate-939531516 | 0 | 1 | transformers | 37,925 | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- victorlifan/autotrain-data-song_title_generate
co2_eq_emissions: 11.013963276910237
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 939531516
- CO2 Emissions (in grams): 11.013963276910237
## Validation Metrics
- Loss: 1.1184396743774414
- Rouge1: 54.9539
- Rouge2: 40.7878
- RougeL: 54.8616
- RougeLsum: 54.8682
- Gen Len: 5.1429
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/victorlifan/autotrain-song_title_generate-939531516
``` |
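The same request can be built in Python with only the standard library. The sketch below constructs the request without sending it, and assumes the standard Inference API path (which includes `/models/`) and the same token placeholder as the cURL example:

```python
import json
import urllib.request

# Assumed endpoint: the standard Inference API path for this model.
API_URL = ("https://api-inference.huggingface.co/models/"
           "victorlifan/autotrain-song_title_generate-939531516")

def build_request(text, token="YOUR_HUGGINGFACE_API_KEY"):
    """Build (but do not send) a POST request matching the cURL call above."""
    payload = json.dumps({"inputs": text}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("I love AutoTrain")
# urllib.request.urlopen(req) would perform the actual call.
```

Calling `urllib.request.urlopen(req)` with a real token returns the generated summary as JSON.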
jppaolim/v55_Large_2E | a77b3915d4284cfcc837d25f61800a4d909838b6 | 2022-06-06T01:24:38.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | jppaolim | null | jppaolim/v55_Large_2E | 0 | null | transformers | 37,926 | # My Story model
{'top_p': 0.9, 'top_k': 50, 'temperature': 1, 'repetition_penalty': 1}
Arthur goes to the beach. Arthur is bored and wanted to go the beach. His friends suggest he drive to the beach. Arthur gets a ride and they take off. Arthur takes a nap and has a good time. He has so much fun at the beach he doesn't want to leave.
Arthur goes to the beach. Arthur is feeling very hungry. He decides to go to the beach. Arthur gets some food. Arthur puts his food in his cooler. Arthur goes home and doesn't feel hungry any more.
Arthur goes to the beach. Arthur always wanted to go to the beach. He saved up money so he could take his dream trip. Finally he went to the beach and it was so beautiful. He loved his trip to the beach and decided he would go again. Arthur packed his bags and went to the beach.
Arthur goes to the beach. Arthur went to the beach last weekend. He swam on the sand and looked at the ocean. He saw several people walking around on the beach. Arthur stopped to talk to them. Arthur went home and told his mother about his trip.
Arthur goes to the beach. Arthur is so excited for the weekend. He knows he needs a new bathing suit. He finds the perfect one at the beach. He spends the day relaxing and exploring the shore. Arthur cannot wait for the next trip to the beach.
{'top_p': 0.9, 'top_k': 50, 'temperature': 1, 'repetition_penalty': 1.05}
Arthur goes to the beach. Arthur is playing with his friends in the sand at the beach. His friend Tom comes by and invites him to join them. Arthur loves the beach. Arthur spends the afternoon playing in the sand. Arthur and Tom have a great day at the beach.
Arthur goes to the beach. Arthur was going to the beach. He packed his towel and his sunscreen. He drove his car to the beach. Arthur swam in the ocean. Arthur had fun at the beach.
Arthur goes to the beach. Arthur is bored one day and decides he wants to go to the beach. He packs up his surfboard, towel, and sunscreen. Arthur goes to the ocean and spends the day there. He goes home and tells his mom about his day. Arthur is happy that he took a trip to the beach.
Arthur goes to the beach. Arthur loved the beach. He got his towel and sandals. He went out into the ocean. Arthur was shocked by the cold ocean. He decided he needed to go back home.
Arthur goes to the beach. Arthur really wants to go to the beach. His friend tells him it is too hot out. Arthur convinces his friend to come with him. They drive to the beach. Arthur spends the day playing in the ocean.
{'top_p': 0.9, 'top_k': 40, 'temperature': 0.8, 'repetition_penalty': 1.1}
Arthur goes to the beach. Arthur is going to the beach. He has packed his beach towel and sunscreen. Once he gets to the beach he finds a spot to sit down. He relaxes for a while and then swims in the water. Arthur loves the beach!
Arthur goes to the beach. Arthur is very bored. He decides to head to the beach. At the beach he relaxes on the sand. Then he gets out of his car and checks out. Arthur has spent the day at the beach.
Arthur goes to the beach. Arthur had always wanted to visit the ocean. He has saved his money for many Years. Finally he saves up enough money. Arthur takes a trip to the beach. He spends the whole day in the ocean.
Arthur goes to the beach. Arthur was so excited that he had packed his swimming trunks. He was going to the beach and he couldn't wait to swim! When he got to the beach, he saw it was closed for cleaning. He asked his mom if she would take him to the beach anyway. She said yes, but Arthur could have a picnic instead.
Arthur goes to the beach. Arthur is going to the beach with his friends today. He needs a bathing suit but doesn't have one. He decides to go without a bathing suit. When he gets there, he sees that they have a long line. Arthur finally finds a nice one and swims in the water.
{'top_p': 0.9, 'top_k': 40, 'temperature': 0.6, 'repetition_penalty': 1.15}
Arthur goes to the beach. Arthur is going on vacation with his family. He asks if they want to go to the beach. They agree and he drives them there. When they get to the beach, Arthur falls in love with a beautiful girl. Arthur and his family spend the rest of their trip together.
Arthur goes to the beach. Arthur is very bored on a hot day. He decides he needs something to do. He heads down to the local beach. He spends all day playing in the sand and sun. Arthur is happy that he no longer feels bored.
Arthur goes to the beach. Arthur was bored one day. He decided to go to the beach. Arthur packed a towel and sunscreen. Then, he went out into the ocean. Arthur had fun at the beach.
Arthur goes to the beach. Arthur was bored at home one day. He decided he would go to the beach. Arthur packed up his car and drove to the beach. Arthur laid on the sand enjoying the sun. Afterwards, Arthur went back home.
Arthur goes to the beach. Arthur was bored one afternoon so he decided to go to the beach. He packed his cooler and drove to the beach. Arthur found a spot on the sand that looked nice. He laid out his towel and sunblock and went for a swim. Arthur had such a great time at the beach!
{'top_p': 0.9, 'top_k': 40, 'temperature': 0.4, 'repetition_penalty': 1.2}
Arthur goes to the beach. Arthur was bored one day and wanted something to do. He decided to go to the beach. At the beach he played in the sand. Then he went swimming in the ocean. Finally, he came back home exhausted but happy.
Arthur goes to the beach. Arthur is bored one day and wants something to do. He decides he would like to go to the beach. Arthur packs up his car and drives to the beach. Once there, he spends a few hours playing in the sand. Afterwards, Arthur has a good time at the beach.
Arthur goes to the beach. Arthur is bored one day and decides to go to the beach. He packs up his towel, swims in the ocean, and gets out of his car. When he arrives at the beach it's very sunny and nice. Arthur spends all day playing in the water. Afterwards, he comes home and rests for a bit.
Arthur goes to the beach. Arthur is bored one day. He decides he needs something to do. He calls his friend Steve and asks if they want to go to the beach. Steve tells Arthur that it's not a good idea to go to the beach. Now Arthur knows that he should have asked Steve for advice.
Arthur goes to the beach. Arthur is bored at home one day. He decides he needs something to do. He heads out to the local beach and plays in the sand. At the beach, Arthur sees many beautiful people. Arthur feels happy that he no longer feels bored.
|
sactisudesa/test2 | 8ae13deaaa328700b3f9e4b0f49ce6751268d33c | 2022-06-06T01:05:50.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | sactisudesa | null | sactisudesa/test2 | 0 | null | transformers | 37,927 | Entry not found |
panapelli/nlp-udesa-BertXNLI_2e | 9aa18807c89031395c8b8fa334a758a6575736af | 2022-06-11T22:26:47.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | panapelli | null | panapelli/nlp-udesa-BertXNLI_2e | 0 | null | transformers | 37,928 | Entry not found |
joshanashakya/old_mini_codebert_sourcecode_nmt_pn2ja_50E_5e-05LR | ed89c6072a25a1f5ad22d908c1c2a2650e1a2995 | 2022-06-06T01:42:38.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | joshanashakya | null | joshanashakya/old_mini_codebert_sourcecode_nmt_pn2ja_50E_5e-05LR | 0 | null | transformers | 37,929 | Entry not found |
joshanashakya/old_mini_codebert_sourcecode_nmt_ja2pn_50E_5e-05LR | e0dc95e3d8155fef25253c9b4a35af11a61b1e5b | 2022-06-06T01:45:32.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | joshanashakya | null | joshanashakya/old_mini_codebert_sourcecode_nmt_ja2pn_50E_5e-05LR | 0 | null | transformers | 37,930 | Entry not found |
joshanashakya/old_mini_codebert_sourcecode_nmt_ja2pn_200E_5e-05LR | 809708262bb9fc341ae807ff484e5a215388d519 | 2022-06-06T04:43:40.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | joshanashakya | null | joshanashakya/old_mini_codebert_sourcecode_nmt_ja2pn_200E_5e-05LR | 0 | null | transformers | 37,931 | Entry not found |
mriggs/tgb_100_epoch1 | 308f7e03128ca18f56e7a8a4584461823b2ca28a | 2022-06-06T06:02:07.000Z | [
"pytorch",
"flaubert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | mriggs | null | mriggs/tgb_100_epoch1 | 0 | null | transformers | 37,932 | Entry not found |
joshanashakya/old_codebert_sourcecode_nmt_pn2ja_50E_5e-05LR | 949996d874e39f1107c33eadb4d54fe185580d83 | 2022-06-06T06:42:18.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | joshanashakya | null | joshanashakya/old_codebert_sourcecode_nmt_pn2ja_50E_5e-05LR | 0 | null | transformers | 37,933 | Entry not found |
stig/distilbert-base-uncased-finetuned-squad | 261dac926b20c6144dba86f1cf40bdaae8426c9b | 2022-06-06T15:40:07.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | stig | null | stig/distilbert-base-uncased-finetuned-squad | 0 | null | transformers | 37,934 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8545
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
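These settings correspond to a standard Hugging Face `Trainer` run (which is what generated this card). A sketch of the equivalent `TrainingArguments` — `output_dir` is an assumption, since the card does not state it:

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; output_dir is a placeholder.
args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-squad",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```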
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0122 | 1.0 | 2312 | 1.8973 |
| 1.7666 | 2.0 | 4624 | 1.8320 |
| 1.5729 | 3.0 | 6936 | 1.8545 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Tokenizers 0.12.1
|
zakria/repo_name | 5629a6fe74c70116891404b82253b71d5b2533ad | 2022-06-06T11:23:23.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | zakria | null | zakria/repo_name | 0 | null | transformers | 37,935 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: repo_name
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# repo_name
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
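The `linear` schedule with 1,000 warmup steps listed above ramps the learning rate up from zero over the warmup phase, then decays it linearly back to zero. A minimal sketch of that schedule — the total step count is an assumption, since the card does not state the dataset size:

```python
def linear_warmup_lr(step, base_lr=1e-4, warmup_steps=1000, total_steps=10000):
    """Linear warmup to base_lr, then linear decay to 0 at total_steps."""
    if step < warmup_steps:
        # Ramp up proportionally during warmup.
        return base_lr * step / warmup_steps
    # Decay linearly from base_lr (end of warmup) down to 0 (end of training).
    remaining = max(0, total_steps - step)
    return base_lr * remaining / (total_steps - warmup_steps)

print(linear_warmup_lr(500))   # halfway through warmup
print(linear_warmup_lr(1000))  # peak learning rate
```

The peak rate equals the configured `learning_rate` exactly at the end of warmup, after which every optimizer step uses a slightly smaller rate.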
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu102
- Datasets 2.2.2
- Tokenizers 0.12.1
|
jppaolim/v56_Large_2E | aa7fabbfe5f2a63fc966731bcdd41047b2c9a98f | 2022-06-06T12:17:56.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | jppaolim | null | jppaolim/v56_Large_2E | 0 | null | transformers | 37,936 | # My Story model
{'top_p': 0.9, 'top_k': 50, 'temperature': 1, 'repetition_penalty': 1}
Arthur goes to the beach. Arthur is in love with his girlfriend. They go to the beach together. Arthur falls off the beach. Arthur needs medical attention. Arthur gets help at the beach.
Arthur goes to the beach. Arthur is feeling bored. He looks on the internet for something exciting. Arthur looks in the paper for something interesting. He sees that there is going to be a beach nearby. Arthur heads to the beach and meets some people there.
Arthur goes to the beach. Arthur always had a lot of fun at the beach. However, one day he decided to go swimming. Arthur had been there for hours and it was getting dark. Finally, he decided to go back home. Arthur went home and was happy.
Arthur goes to the beach. Arthur has never been to the beach. His friends tell him that it is very hot. He finally gets the courage to go. He spends his first day at the beach. Arthur cannot wait to come back.
Arthur goes to the beach. Arthur is so bored one day. He decides to go to the beach. He sees a bunch of people playing. He decides to join in. Arthur plays in the ocean with his friends.
{'top_p': 0.9, 'top_k': 50, 'temperature': 1, 'repetition_penalty': 1.05}
Arthur goes to the beach. Arthur was excited about having his yearly family trip. He had booked a hotel for two in a beautiful beach. The day before his trip he went to the beach to go swimming. He loved the sand and the sun very much. Arthur spent the rest of his trip relaxing and relaxing.
Arthur goes to the beach. Arthur is a lonely man. He hasn't been out in Years. He decides to head out to the ocean for a walk. He walks all day long and has a wonderful time. After he gets home he is glad he went to the beach.
Arthur goes to the beach. Arthur was swimming at the beach. He swam into the deep water. A large wave hit Arthur. It carried him into the ocean. Arthur couldn't get back out of the water.
Arthur goes to the beach. Arthur loves the beach. He decided to go to the beach one day. At the beach he jumped in the ocean. As he jumped he hit his head. Arthur is glad he jumped in the ocean.
Arthur goes to the beach. Arthur was at the beach. He decided to jump into the water. Arthur wasn't wearing his sunscreen. Arthur got very burned on the beach. Arthur had to go home and change his clothes.
{'top_p': 0.9, 'top_k': 40, 'temperature': 0.8, 'repetition_penalty': 1.1}
Arthur goes to the beach. Arthur went to the beach with his friend Rob. They played in the sand for an hour. Rob told Arthur that it was hot out. Arthur and Rob ran back home to put on sunscreen. Arthur went back to the beach without playing in the sand.
Arthur goes to the beach. Arthur is on vacation. He decides he would like to go to the beach. He goes to the beach. His friend takes him to eat at a seafood restaurant. They both have fun at the beach.
Arthur goes to the beach. Arthur had never been to the ocean before. His friends took him to the beach one day. He played in the water for an hour. Arthur then went home and rested. Arthur felt very happy and refreshed after that.
Arthur goes to the beach. Arthur went to the beach on vacation. He was bored and wanted some fun activity. He looked around for something fun. Arthur saw a friend of his at the beach. Arthur and his friend had fun playing together.
Arthur goes to the beach. Arthur is a lonely man. He decides he needs some company. Arthur gets on his boat and heads to the ocean. While at the beach, Arthur falls in love with a beautiful woman. Now Arthur has company.
{'top_p': 0.9, 'top_k': 40, 'temperature': 0.6, 'repetition_penalty': 1.15}
Arthur goes to the beach. Arthur is a very active person. He loves going to the beach. One day he finds a spot on the sand where he can play in peace. The water is so calm and so peaceful that Arthur cannot help but swim. Now Arthur is a world renown ocean swimmer.
Arthur goes to the beach. Arthur is on a vacation with his family. He decides to go to the beach. He gets all his friends together and they play in the sand. Afterwards, Arthur has fun at the beach. Now he is ready for his next adventure.
Arthur goes to the beach. Arthur is a little boy who loves going to the beach. He spends all his time playing in the sand and sun. One day he notices that it has started raining very hard. Arthur rushes home to take cover. Arthur gets soaked by the rain so he can go play again.
Arthur goes to the beach. Arthur is bored on a sunny day. He decides to go to the beach. Arthur gets his towel and sandals ready. He drives to the beach. Arthur spends the rest of the day at the beach.
Arthur goes to the beach. Arthur is feeling lonely one day. He decides to go on a trip to the beach. At the beach he has a blast. However, he sees an injured turtle. He rescues the turtle and returns it to its home.
{'top_p': 0.9, 'top_k': 40, 'temperature': 0.4, 'repetition_penalty': 1.2}
Arthur goes to the beach. Arthur is a very lonely boy. He wants to meet new people but he doesn't know where to go. One day his friend tells him about going to the beach. The next day Arthur gets ready and leaves for the beach. At the beach, Arthur meets lots of nice people and makes many friends!
Arthur goes to the beach. Arthur is going on vacation with his family. He has never been to the beach before. His parents tell him he needs a towel first. So Arthur gets a towel and puts it in the sand box. The next morning, Arthur takes a dip in the ocean.
Arthur goes to the beach. Arthur is bored on a weekend afternoon. He decides he would like to go to the beach. He gets his towel and sunscreen. Then he drives to the beach. Finally, Arthur has fun at the beach.
Arthur goes to the beach. Arthur is on vacation in Florida with his family. His family decides that they want to go to the beach. They all pack their towels and sunscreen. When Arthur gets there, he sees a lot of people at the beach. He spends most of his time playing in the sand instead of swimming.
Arthur goes to the beach. Arthur is bored at home. He decides to go out to the ocean. Arthur gets in his car and drives to the beach. At the beach he plays in the sand. Arthur has a great time on the beach.
|
jontooy/AraBERT32-COCO | eec72491ee0541cda4b9c1f703cefa87d142718b | 2022-06-06T12:14:57.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | fill-mask | false | jontooy | null | jontooy/AraBERT32-COCO | 0 | null | transformers | 37,937 | ---
license: afl-3.0
---
|
twieland/VN_ja_to_en | 2faf7283a65a369146759f9abea24815ad4e1bc1 | 2022-06-06T17:04:40.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | twieland | null | twieland/VN_ja_to_en | 0 | null | transformers | 37,938 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: VN_ja_to_en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# VN_ja_to_en
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ja-en](https://huggingface.co/Helsinki-NLP/opus-mt-ja-en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0411
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 2.0489 | 1.0 | 10276 | 2.0716 |
| 1.9028 | 2.0 | 20552 | 2.0646 |
| 1.812 | 3.0 | 30828 | 2.0525 |
| 1.7531 | 4.0 | 41104 | 2.0487 |
| 1.7083 | 5.0 | 51380 | 2.0375 |
| 1.6717 | 6.0 | 61656 | 2.0415 |
| 1.6354 | 7.0 | 71932 | 2.0398 |
| 1.6146 | 8.0 | 82208 | 2.0390 |
| 1.5972 | 9.0 | 92484 | 2.0391 |
| 1.582 | 10.0 | 102760 | 2.0411 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
sayanmandal/t5-small_6_3-en-hi_en_LinCE_bt | 4e8cf666fd859c20ec49a0ca1cf85fa7c8f9d569 | 2022-06-06T14:25:30.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | sayanmandal | null | sayanmandal/t5-small_6_3-en-hi_en_LinCE_bt | 0 | null | transformers | 37,939 | Entry not found |
jontooy/GigaBERT32-Flickr8k | dcf776e537dc5dd8815087cf5d191467c0e09a99 | 2022-06-06T12:29:08.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers",
"license:afl-3.0"
] | feature-extraction | false | jontooy | null | jontooy/GigaBERT32-Flickr8k | 0 | null | transformers | 37,940 | ---
license: afl-3.0
---
|
huggingtweets/byelihoff | 300f5236258c8ac1ec8511235445aa980b6112a1 | 2022-06-07T01:08:05.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/byelihoff | 0 | null | transformers | 37,941 | ---
language: en
thumbnail: http://www.huggingtweets.com/byelihoff/1654564001530/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1481727546186211329/U8AeI0cS_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Eli Hoff</div>
<div style="text-align: center; font-size: 14px;">@byelihoff</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.
![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Eli Hoff.
| Data | Eli Hoff |
| --- | --- |
| Tweets downloaded | 3248 |
| Retweets | 821 |
| Short tweets | 187 |
| Tweets kept | 2240 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3t22q7l3/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @byelihoff's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3qqqbwen) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3qqqbwen/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/byelihoff')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/bigmanbakar | b0059a782b51fbc1066260f559b16e2ff416ab81 | 2022-06-06T13:49:15.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/bigmanbakar | 0 | null | transformers | 37,942 | ---
language: en
thumbnail: http://www.huggingtweets.com/bigmanbakar/1654523350313/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1459686915498819587/cYF4VOWO_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">AbuBakar Siddiq</div>
<div style="text-align: center; font-size: 14px;">@bigmanbakar</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.
![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from AbuBakar Siddiq.
| Data | AbuBakar Siddiq |
| --- | --- |
| Tweets downloaded | 3244 |
| Retweets | 452 |
| Short tweets | 769 |
| Tweets kept | 2023 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1ggb85vg/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @bigmanbakar's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1qafbtox) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1qafbtox/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/bigmanbakar')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/briangrimmett | 87161fc22ecb30656a5971de6ab00b16d7ca5284 | 2022-06-06T14:15:11.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/briangrimmett | 0 | null | transformers | 37,943 | ---
language: en
thumbnail: http://www.huggingtweets.com/briangrimmett/1654524569583/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1335009788212748291/X5EyBri8_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Brian Grimmett</div>
<div style="text-align: center; font-size: 14px;">@briangrimmett</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.
![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Brian Grimmett.
| Data | Brian Grimmett |
| --- | --- |
| Tweets downloaded | 3248 |
| Retweets | 1502 |
| Short tweets | 129 |
| Tweets kept | 1617 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3nan0dmd/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @briangrimmett's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1mpmndjc) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1mpmndjc/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/briangrimmett')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
|
sayanmandal/t5-small_6_3-en-hi_en__noBT | df58918a14477e08410913d5c9bb0cd14252d194 | 2022-06-06T20:36:29.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | sayanmandal | null | sayanmandal/t5-small_6_3-en-hi_en__noBT | 0 | null | transformers | 37,944 | Entry not found |
huggingtweets/jeffwhou | 142220a8dfbec5c8cd284c7cbe6b9450a9af3b43 | 2022-06-06T15:44:59.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/jeffwhou | 0 | null | transformers | 37,945 | ---
language: en
thumbnail: http://www.huggingtweets.com/jeffwhou/1654530271923/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1505206395595104264/y3dWH2tq_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">jeffhou.eth</div>
<div style="text-align: center; font-size: 14px;">@jeffwhou</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.
![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from jeffhou.eth.
| Data | jeffhou.eth |
| --- | --- |
| Tweets downloaded | 3239 |
| Retweets | 817 |
| Short tweets | 238 |
| Tweets kept | 2184 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2o4ngo7h/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jeffwhou's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1nn8iggq) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1nn8iggq/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/jeffwhou')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/mattcocco | a8953fc1e03b8dfc0f6b6a193c56edd124c9aa1e | 2022-06-06T16:08:44.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/mattcocco | 0 | null | transformers | 37,946 | ---
language: en
thumbnail: http://www.huggingtweets.com/mattcocco/1654531718885/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/494875249347788801/0uf8T9i-_400x400.jpeg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Matt Cocco</div>
<div style="text-align: center; font-size: 14px;">@mattcocco</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.
![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Matt Cocco.
| Data | Matt Cocco |
| --- | --- |
| Tweets downloaded | 3247 |
| Retweets | 162 |
| Short tweets | 366 |
| Tweets kept | 2719 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2pahfj7y/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mattcocco's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2iiga7st) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2iiga7st/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/mattcocco')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
|
lorenzkuhn/roberta-base-finetuned-squad | dfac265eaf19a8160fb16b5cdf18e7b2f5334df6 | 2022-06-08T20:05:38.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | lorenzkuhn | null | lorenzkuhn/roberta-base-finetuned-squad | 0 | null | transformers | 37,947 | Entry not found |
jmilic/adapter_bottleneck_final-2 | c726da860d72db330fa3b4342bde22dfb38294ff | 2022-06-06T17:46:05.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | jmilic | null | jmilic/adapter_bottleneck_final-2 | 0 | null | transformers | 37,948 | Entry not found |
huggingtweets/nonewthing | 94656f3292ee3236a6c1e6fab47ed0e2d6205c11 | 2022-06-06T17:50:00.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/nonewthing | 0 | null | transformers | 37,949 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1532336212412977152/TWPqTO8d_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">AI</div>
<div style="text-align: center; font-size: 14px;">@nonewthing</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.
![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from AI.
| Data | AI |
| --- | --- |
| Tweets downloaded | 3247 |
| Retweets | 100 |
| Short tweets | 234 |
| Tweets kept | 2913 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/bf84hrrd/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @nonewthing's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/169zdg1z) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/169zdg1z/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/nonewthing')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
|
shwetha/autotrain-qa-user-954831770 | 3fb8572cf0766741495bfa234dd93bc57ad50049 | 2022-06-06T18:54:38.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | shwetha | null | shwetha/autotrain-qa-user-954831770 | 0 | null | transformers | 37,950 | Entry not found |
StratumTest/DialoGPT-small-joshua | 7faf715541f5cb6117746825ed87466fe6b6a030 | 2022-06-07T01:01:03.000Z | [
"pytorch"
] | null | false | StratumTest | null | StratumTest/DialoGPT-small-joshua | 0 | null | null | 37,951 | Entry not found |
mailenpellegrino/transformer2 | fb97e94b79a111047461f3353f0d1c6b3f489260 | 2022-06-06T21:32:09.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | mailenpellegrino | null | mailenpellegrino/transformer2 | 0 | null | transformers | 37,952 | Entry not found |
huggingtweets/mcbrideace-sorarescp-thedonofsorare | ae404a32b4b231c35c09ff9eba6fac2de92f7eee | 2022-06-06T22:20:27.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/mcbrideace-sorarescp-thedonofsorare | 0 | null | transformers | 37,953 | ---
language: en
thumbnail: http://www.huggingtweets.com/mcbrideace-sorarescp-thedonofsorare/1654554022265/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1462464744200323076/q_vEAFLx_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1454346046319038465/qivKQRrg_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1527184416077922304/Dpk_AXXK_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">The Don & McBriceAce.eth & Sonhos_10A </div>
<div style="text-align: center; font-size: 14px;">@mcbrideace-sorarescp-thedonofsorare</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.
![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from The Don & McBriceAce.eth & Sonhos_10A .
| Data | The Don | McBriceAce.eth | Sonhos_10A |
| --- | --- | --- | --- |
| Tweets downloaded | 3247 | 3248 | 2974 |
| Retweets | 148 | 293 | 1612 |
| Short tweets | 334 | 618 | 273 |
| Tweets kept | 2765 | 2337 | 1089 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1omlhh4m/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mcbrideace-sorarescp-thedonofsorare's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1kamm6ws) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1kamm6ws/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/mcbrideace-sorarescp-thedonofsorare')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
|
jppaolim/v57_Large_3E | ab25dc64d47fe6ead64719800169cba69680ba61 | 2022-06-06T23:35:49.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | jppaolim | null | jppaolim/v57_Large_3E | 0 | null | transformers | 37,954 | # My Story model
{'top_p': 0.9, 'top_k': 50, 'temperature': 1, 'repetition_penalty': 1}
Arthur goes to the beach. Arthur and his friends go to the beach one day. They go swimming. Then they play volleyball. Arthur is so tired he falls asleep on the beach. Arthur wakes up later and they never go back.
Arthur goes to the beach. Arthur was out surfing. He was having a blast. He got a bit too excited. He got in too much trouble. Arthur left the beach and went home.
Arthur goes to the beach. Arthur is bored at home. He decides to go to the beach. Arthur likes the beach. He enjoys the beach for an hour. Arthur returns home exhausted but happy.
Arthur goes to the beach. Arthur is bored of his suburban life. He decides to take a big trip to the beach. Arthur packs up all his things. He boards the ferry. Arthur takes a nice relaxing stroll on the beach.
Arthur goes to the beach. Arthur was bored. He decided to go to the beach. He got in his car and drove to the beach. At the beach he enjoyed the waves and the sand. Arthur decided to come back the next day.
{'top_p': 0.9, 'top_k': 50, 'temperature': 1, 'repetition_penalty': 1.05}
Arthur goes to the beach. Arthur and his friend wanted to go to the beach. They loaded up the car with beach towels, sunscreen and snacks. Arthur packed a cooler full of drinks and food. They drove to the beach. There was a long line, but they finally got to the beach.
Arthur goes to the beach. Arthur was a sleepy boy. He wanted to play a game but he wasn't very good at it. His mother told him to practice on the weekends. Every weekend he practiced his volleyball game. After a month Arthur became very good at the game.
Arthur goes to the beach. Arthur has been working all day long at his job. He needs a break from work and decides to go to the beach. At the beach he spends a week playing in the sand. He returns home to his family. Arthur is glad that he had a break from work.
Arthur goes to the beach. Arthur is going on a trip to the beach with his friends. He asks for an hour of sleep so he can get ready for the trip. When Arthur wakes up it's dark outside. He rushes to get ready and heads to the beach. Arthur arrives at the beach, exhausted but happy.
Arthur goes to the beach. Arthur is a lonely man. He has been living in the city for years. One day an older woman passes by. She tells Arthur she misses him. She invites him to go to the beach to make her feel better.
{'top_p': 0.9, 'top_k': 40, 'temperature': 0.8, 'repetition_penalty': 1.1}
Arthur goes to the beach. Arthur is feeling very bored on a Saturday afternoon. He decides to go to the beach. He gets in his car and drives to the beach. At the beach, he spends hours playing with his friends. Finally, after a long day of fun, Arthur returns home.
Arthur goes to the beach. Arthur is feeling very bored on a weekend day. He decides that he would like to play in the sand. Arthur spends all morning walking around the beach. At noon he goes into the water and swims for two hours. Now that he has played in the sand, Arthur feels very happy.
Arthur goes to the beach. Arthur loves the ocean. He always wants to get a job in it. One day he gets an amazing job offer. The company hires him for his skills. Now Arthur lives on the beach and loves it.
Arthur goes to the beach. Arthur wanted to go to the beach one sunny day. He packed up his towel and sunscreen before going in the water. Arthur went to the beach and laid out on the sand. He began swimming and having fun for a few hours. When it was time for dinner, Arthur went home with a sunburn.
Arthur goes to the beach. Arthur loves to surf. He asks his friends if they want to go out to the beach. They agree to go. Arthur and his friends go out to the beach. Arthur has a great time surfing at the beach.
{'top_p': 0.9, 'top_k': 40, 'temperature': 0.6, 'repetition_penalty': 1.15}
Arthur goes to the beach. Arthur is having a good day at work. He is working on his computer. He gets home and realizes that he forgot to take his sunscreen. He heads to the store and buys some. Now Arthur can't wait for the beach!
Arthur goes to the beach. Arthur is feeling very bored on a Friday evening. He decides he would like to go to the beach. At the beach, Arthur sees many beautiful beaches. However, he cannot find any nice ones that are open all day. Finally, at night, Arthur heads home.
Arthur goes to the beach. Arthur is sitting at home. He decides he wants to go to the beach. He gets in his car and drives to the beach. He spends a day playing in the sand. Finally, he heads back home.
Arthur goes to the beach. Arthur is very sad that his friend won't go to the beach with him. He asks his mom if she can take him but her answer is no. Finally he gets a surprise from his mom. She tells Arthur that he has to go to the beach with him. Arthur spends the whole day at the beach with his friends.
Arthur goes to the beach. Arthur was very happy when he got off work early to go to the beach. He packed his towel and sunscreen, but forgot his umbrella! As he sat on the sand, it began to rain hard. Arthur ran down the beach as fast as he could, but didn't bring his umbrella. When he finally arrived at the beach, he found that it had rained!
{'top_p': 0.9, 'top_k': 40, 'temperature': 0.4, 'repetition_penalty': 1.2}
Arthur goes to the beach. Arthur is going to the beach with his friends. He has never been to the beach before. They all get ready for the trip. When they arrive, Arthur and his friends begin to play in the sand. The beach was a wonderful experience for Arthur.
Arthur goes to the beach. Arthur is feeling very bored one day at home. He decides he would like to go to the beach. At the beach he spends all day playing in the water. When it gets dark Arthur heads back home. Arthur is happy that he went to the beach today.
Arthur goes to the beach. Arthur is sitting at home one day. He decides he would like to go to the beach. He calls his friends and invites them over for a fun day of swimming. They all show up and spend time in the water. It was a great trip to the beach!
Arthur goes to the beach. Arthur is bored at home. He decides he should go to the beach. At the beach, Arthur sees a beautiful sunset. The sunset turns into a full moon. Now Arthur loves the beach even more than at home.
Arthur goes to the beach. Arthur is sitting at home bored out of his mind. He decides he needs something fun to do. He calls up some friends and asks if they want to go to the beach. They all agree that it would be a good idea. The three boys spend the day playing in the ocean.
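Each parameter block above (e.g. `{'top_p': 0.9, 'top_k': 40, 'temperature': 0.6, 'repetition_penalty': 1.15}`) configures top-k / nucleus sampling for generation. Below is a minimal pure-Python sketch of what these knobs do — illustrative only, not the actual `transformers` decoding code, and `repetition_penalty` is left out for brevity:

```python
import math
import random

def sample_next(logits, top_p=0.9, top_k=40, temperature=0.6):
    """Pick a next-token id from raw logits using temperature scaling,
    then top-k and top-p (nucleus) filtering."""
    # Temperature < 1 sharpens the distribution, > 1 flattens it.
    scaled = [l / temperature for l in logits]
    # Softmax, shifted by the max for numerical stability.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # top-k: keep only the k most probable tokens.
    order = sorted(range(len(probs)), key=lambda i: -probs[i])[:top_k]
    # top-p: keep the smallest prefix of that list whose mass reaches top_p.
    kept, mass = [], 0.0
    for i in order:
        kept.append(i)
        mass += probs[i]
        if mass >= top_p:
            break
    weights = [probs[i] for i in kept]
    return random.choices(kept, weights=weights, k=1)[0]

# With top_k=1 sampling degenerates to greedy decoding:
token = sample_next([10.0, 0.0, 0.0], top_k=1)  # always index 0
```

In the real pipeline these values are simply passed as keyword arguments to `generate`, which applies the same filtering per decoding step.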
|
huggingtweets/heylookaturtle | 2e2a41f137d8b008293aebca6964287e66f0ea7e | 2022-06-07T00:50:23.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/heylookaturtle | 0 | null | transformers | 37,955 | ---
language: en
thumbnail: http://www.huggingtweets.com/heylookaturtle/1654563018664/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1052029344254701568/2yAQKb6K_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Adam Porter</div>
<div style="text-align: center; font-size: 14px;">@heylookaturtle</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.
![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Adam Porter.
| Data | Adam Porter |
| --- | --- |
| Tweets downloaded | 3232 |
| Retweets | 1006 |
| Short tweets | 436 |
| Tweets kept | 1790 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2xiwa2l6/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @heylookaturtle's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/hov36pjn) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/hov36pjn/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/heylookaturtle')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/ryang73 | f3be9bbb9ed802bbba631c2664b8c96dd0dd029b | 2022-06-07T01:01:08.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/ryang73 | 0 | null | transformers | 37,956 | ---
language: en
thumbnail: http://www.huggingtweets.com/ryang73/1654563663272/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1120118423357464577/j4gzzGqe_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Ryan G</div>
<div style="text-align: center; font-size: 14px;">@ryang73</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.
![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Ryan G.
| Data | Ryan G |
| --- | --- |
| Tweets downloaded | 3207 |
| Retweets | 2096 |
| Short tweets | 323 |
| Tweets kept | 788 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/36nr3zmj/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ryang73's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1viq2jo5) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1viq2jo5/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ryang73')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
|
kazed/AraBART-finetuned-xsum | a95d8ead2a711abb1a86580c3bb9f4323f89cc1d | 2022-06-07T02:38:20.000Z | [
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | kazed | null | kazed/AraBART-finetuned-xsum | 0 | null | transformers | 37,957 | Entry not found |
twieland/VN_ja-en_mt5_small | 0937021c95c705152e967d1f0631aa65b6fb7fa1 | 2022-06-07T04:14:54.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | twieland | null | twieland/VN_ja-en_mt5_small | 0 | null | transformers | 37,958 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: VN_ja-en_mt5_small
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# VN_ja-en_mt5_small
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3148
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
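The `linear` scheduler above decays the learning rate from its initial value to zero over the total number of training steps. A small sketch of that schedule, assuming zero warmup steps (the card does not state a warmup value):

```python
def linear_lr(step, total_steps, base_lr=3e-4, warmup_steps=0):
    """Linear schedule: ramp up over warmup_steps, then decay to 0."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    remaining = max(0, total_steps - step)
    return base_lr * remaining / max(1, total_steps - warmup_steps)

# One epoch of 20552 steps at lr=3e-4:
start = linear_lr(0, 20552)      # 3e-4
mid = linear_lr(10276, 20552)    # 1.5e-4, halfway through
end = linear_lr(20552, 20552)    # 0.0
```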
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.4633 | 1.0 | 20552 | 2.3148 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
jppaolim/v58_Large_2E | 9cd3fbafdb6ca648d764b2f7b4f6385d4b7f4ff6 | 2022-06-07T05:43:25.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | jppaolim | null | jppaolim/v58_Large_2E | 0 | null | transformers | 37,959 | # My Story model
{'top_p': 0.9, 'top_k': 50, 'temperature': 1, 'repetition_penalty': 1}
Arthur goes to the beach. Arthur is in love with his girlfriend. They go to the beach together. Arthur falls asleep on the beach. He is found by his girlfriend. Arthur is very sad he went to the beach.
Arthur goes to the beach. Arthur is feeling very stressed today. He is at work but is very bored at home. Arthur decides to visit the beach. He spends all day relaxing on the beach. Arthur is happy that he no longer feels stressed at work.
Arthur goes to the beach. Arthur always had a soft spot for the ocean. For his birthday his parents decided to take him to the beach. His family rented a beach house for the day. He played in the ocean for two hours before his parents came home. Arthur said the ocean was the best day of his life!
Arthur goes to the beach. Arthur has never been to the beach. His friends tell him that it is the perfect place for him to relax. Arthur decides to take the long drive there. When he gets to the beach, he spends the day relaxing. Arthur was glad that he took the long drive to the beach.
Arthur goes to the beach. Arthur is so excited for the weekend. He knows he needs to get a nice tan. He heads down to the beach. Arthur enjoys the sand and sun. Arthur has a great day at the beach.
{'top_p': 0.9, 'top_k': 50, 'temperature': 1, 'repetition_penalty': 1.05}
Arthur goes to the beach. Arthur has never been to the beach before. Arthur and his friends decide to go to the beach. They walk around the beach for a bit. Finally they are ready to head back home. Arthur is very happy that he finally took the trip to the beach.
Arthur goes to the beach. Arthur was planning a trip with his friends. He had planned on going to the beach but then had an idea. He decided to stay home and play video games all day. When he got to the beach he was surprised how far away it was. Arthur was glad that he went to the beach but didn't get to go.
Arthur goes to the beach. Arthur loves to swim. He tries to go to the beach every week. Finally he gets to the beach. He spends all day swimming. Arthur has a wonderful time at the beach.
Arthur goes to the beach. Arthur went to the beach with his friends. Arthur was having a good time. His friends wanted to go swimming. Arthur was too shy to dive in. His friends decided to go swimming anyways.
Arthur goes to the beach. Arthur had always wanted to go to the beach. He decided to start a small trip to the beach. When Arthur got to the beach he saw many beautiful beaches. The weather was amazing so Arthur went for a swim. Arthur was glad he went to the beach.
{'top_p': 0.9, 'top_k': 40, 'temperature': 0.8, 'repetition_penalty': 1.1}
Arthur goes to the beach. Arthur was so excited for his first trip to the beach. He packed up his beach towel and swimsuit and went to the sand box. Then he laid on the sand and played in the waves. Arthur decided that this was going to be a great vacation! Arthur loved his trip to the beach.
Arthur goes to the beach. Arthur loves to go to the beach. He spends many hours every day at the beach. One day while at the beach he notices a seal swimming in the water. Arthur rushes to his friend's house and tells him about the seal. His friend is happy that Arthur is there to help him.
Arthur goes to the beach. Arthur is out at the beach with his friends. They decide to go swimming. Arthur finds a spot in the water. He swims for a while and then falls asleep. Arthur wakes up and realizes he missed the beach.
Arthur goes to the beach. Arthur is very excited to go to the beach. He takes a taxi to the beach. Arthur and his friends begin swimming in the ocean. The boys then return home. Arthur wishes he had not gone to the beach.
Arthur goes to the beach. Arthur was at the beach one day. He decided to build sand castles in the sand. Arthur's friends were jealous of his work. They all made fun of him and he became sad. Arthur went home and washed off his tears.
{'top_p': 0.9, 'top_k': 40, 'temperature': 0.6, 'repetition_penalty': 1.15}
Arthur goes to the beach. Arthur is very excited for his family's vacation this summer. He decides he wants to go on a trip to the beach. When they get to the beach, Arthur notices that it is packed. Arthur rushes back home and tells his parents about the packed beach. His parents are not happy when they learn that the beach is closed.
Arthur goes to the beach. Arthur is going on vacation. He has decided he wants to go to the beach. His friends tell him not to but he ignores them. Finally his friends convince him to go. Arthur loves the beach and spends his vacation there.
Arthur goes to the beach. Arthur is going on a trip with his family. They are going to go to the beach. Arthur gets dressed and packed up. He boards the plane. Arthur has a great time at the beach.
Arthur goes to the beach. Arthur is a boy who loves the ocean. One day his family takes him to the beach. He spends all day playing in the sand. Afterwards he heads home. Arthur is happy that he spent time with his friends.
Arthur goes to the beach. Arthur is bored one day. He decides he would like to go to the beach. He gets his bathing suit ready and goes for a swim. After swimming, Arthur gets sand in his eyes. Arthur does not enjoy going to the beach after all.
{'top_p': 0.9, 'top_k': 40, 'temperature': 0.4, 'repetition_penalty': 1.2}
Arthur goes to the beach. Arthur is going on a trip with his friends. They decide to go to the beach. When they get there, Arthur sees that it's very busy. He and his friends have to wait in line for an hour. Finally, they are able to play in the sand.
Arthur goes to the beach. Arthur is a lonely boy. He has no friends. One day he decides to go to the beach. At the beach he meets many people and becomes very social. Now Arthur loves being at the beach.
Arthur goes to the beach. Arthur is bored one day and decides he needs a vacation. He calls his friends but they are busy. Finally he calls his friend Tim who lives in Florida. Tim tells Arthur that he will take him to the beach on Saturday. Saturday comes and Arthur has a great time at the beach!
Arthur goes to the beach. Arthur is going on a vacation with his family. He asks his parents if he can go to the beach. His parents tell him no. Arthur gets angry and storms off. The next day Arthur has a bad sunburn.
Arthur goes to the beach. Arthur was going on a trip with his friends. They were all excited about their upcoming vacation. When they arrived at the beach, Arthur saw that it was very busy. He decided to go swimming instead of playing in the sand. His friends appreciated him for being so considerate and he had fun!
|
quynhanh12345/segformer-b0-finetuned-ade-512-512 | 69e655473aa7e7aaf1d543eb6660e9f7333b832e | 2022-06-08T04:14:29.000Z | [
"pytorch",
"segformer",
"transformers"
] | null | false | quynhanh12345 | null | quynhanh12345/segformer-b0-finetuned-ade-512-512 | 0 | null | transformers | 37,960 | Entry not found |
prashanth/IndicBART-ibart-en-to-hi | 27283702e150d67b2325446a06729cd3b6475dc3 | 2022-06-07T09:45:31.000Z | [
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"dataset:hindi_english_machine_translation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | prashanth | null | prashanth/IndicBART-ibart-en-to-hi | 0 | null | transformers | 37,961 | ---
tags:
- generated_from_trainer
datasets:
- hindi_english_machine_translation
model-index:
- name: IndicBART-ibart-en-to-hi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IndicBART-ibart-en-to-hi
This model is a fine-tuned version of [ai4bharat/IndicBART](https://huggingface.co/ai4bharat/IndicBART) on the hindi_english_machine_translation dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 157 | 4.7112 | 0.8663 | 20.0 |
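The Bleu column above is an n-gram overlap score. A rough sentence-level sketch of the metric follows — illustrative only, since reported BLEU is typically computed corpus-level (e.g. with sacrebleu) and with smoothing:

```python
import math
from collections import Counter

def ngram_counts(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def sentence_bleu(pred, ref, max_n=4):
    """Unsmoothed sentence BLEU: geometric mean of clipped n-gram
    precisions times a brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        p, r = ngram_counts(pred, n), ngram_counts(ref, n)
        overlap = sum((p & r).values())   # clipped n-gram matches
        total = max(1, sum(p.values()))
        precisions.append(overlap / total)
    if min(precisions) == 0:              # any zero precision -> score 0
        return 0.0
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    brevity = math.exp(min(0.0, 1 - len(ref) / len(pred)))
    return brevity * geo_mean

hyp = "the cat sat on the mat".split()
score = sentence_bleu(hyp, hyp)  # identical sentences score 1.0
```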
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu102
- Datasets 1.18.0
- Tokenizers 0.12.1
|
nestoralvaro/mt5-base-finetuned-xsum-data_prep_2021_12_26___t55_403.csv___topic_text_google_mt5_base | 54e0aff14aa31db573887151ce634d0d5090078e | 2022-06-07T12:57:21.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | nestoralvaro | null | nestoralvaro/mt5-base-finetuned-xsum-data_prep_2021_12_26___t55_403.csv___topic_text_google_mt5_base | 0 | null | transformers | 37,962 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-base-finetuned-xsum-data_prep_2021_12_26___t55_403.csv___topic_text_google_mt5_base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-finetuned-xsum-data_prep_2021_12_26___t55_403.csv___topic_text_google_mt5_base
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Rouge1: 0.9647
- Rouge2: 0.1331
- Rougel: 0.9633
- Rougelsum: 0.9627
- Gen Len: 6.4489
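The Rouge1/Rouge2 numbers above measure unigram/bigram overlap between generated and reference text. A minimal sketch of ROUGE-1 F1 — illustrative only; scores like these are typically computed with the `rouge_score` package, which also applies tokenization and stemming:

```python
from collections import Counter

def rouge1_f1(pred_tokens, ref_tokens):
    """ROUGE-1 F1: harmonic mean of unigram precision and recall."""
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

score = rouge1_f1("the cat sat".split(), "the cat slept".split())  # 2/3
```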
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 0.0 | 1.0 | 36479 | nan | 0.9647 | 0.1331 | 0.9633 | 0.9627 | 6.4489 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
giolisandro/t5-small-finetuned-en-to-ro | e02a67d512bac95d08b67b608a20be66f42b3765 | 2022-06-07T11:30:00.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:wmt16",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | giolisandro | null | giolisandro/t5-small-finetuned-en-to-ro | 0 | null | transformers | 37,963 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
model-index:
- name: t5-small-finetuned-en-to-ro
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-en-to-ro
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
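The Adam optimizer listed above updates each parameter using bias-corrected running moments of its gradient. A single-parameter sketch of one update step, using the card's hyperparameters (illustrative; the real optimizer operates on whole tensors):

```python
import math

def adam_step(p, g, m, v, t, lr=2e-5, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a scalar parameter p with gradient g.
    m, v are running first/second moments; t is the 1-based step count."""
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g * g
    m_hat = m / (1 - beta1 ** t)  # bias correction
    v_hat = v / (1 - beta2 ** t)
    p = p - lr * m_hat / (math.sqrt(v_hat) + eps)
    return p, m, v

# On the very first step the update magnitude is ~lr, regardless of |g|:
p, m, v = adam_step(p=1.0, g=0.5, m=0.0, v=0.0, t=1)
```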
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 94 | 1.4141 | 7.3474 | 18.2586 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
huggingtweets/aoc-itsjefftiedrich-shaun_vids | 3a70e5e0e398779d195bc6bab5559c260b84f2f3 | 2022-06-07T12:01:33.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/aoc-itsjefftiedrich-shaun_vids | 0 | null | transformers | 37,964 | ---
language: en
thumbnail: http://www.huggingtweets.com/aoc-itsjefftiedrich-shaun_vids/1654603284413/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1507627313604743171/T8ksXYZu_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1009932396333031424/8FzKlCfB_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/923274881197895680/AbHcStkl_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Shaun & Jeff Tiedrich & Alexandria Ocasio-Cortez</div>
<div style="text-align: center; font-size: 14px;">@aoc-itsjefftiedrich-shaun_vids</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.
![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Shaun & Jeff Tiedrich & Alexandria Ocasio-Cortez.
| Data | Shaun | Jeff Tiedrich | Alexandria Ocasio-Cortez |
| --- | --- | --- | --- |
| Tweets downloaded | 3224 | 3249 | 3246 |
| Retweets | 1023 | 11 | 1236 |
| Short tweets | 212 | 713 | 126 |
| Tweets kept | 1989 | 2525 | 1884 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2znx4crj/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @aoc-itsjefftiedrich-shaun_vids's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1q1etxhd) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1q1etxhd/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/aoc-itsjefftiedrich-shaun_vids')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the users' tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
|
jppaolim/v59_Large_2E | 402549b0d959d435159fea2f0da75302a5105cc8 | 2022-06-07T13:01:39.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | jppaolim | null | jppaolim/v59_Large_2E | 0 | null | transformers | 37,965 | # My Story model
{'top_p': 0.9, 'top_k': 50, 'temperature': 1, 'repetition_penalty': 1}
Arthur goes to the beach. Arthur is in love with his girlfriend. They go to the beach together. Arthur falls off the beach. Arthur needs medical attention. Arthur gets a broken leg from the fall.
Arthur goes to the beach. Arthur is feeling cold. He looks at the weather report. He knows he needs to get out of the house. He decides to walk to the local beach. Arthur is happy he got out of the house.
Arthur goes to the beach. Arthur always hated going to the beach. His parents always made him go, even if it was just to swim. His father finally convinced him to go to the beach with him. Arthur was not happy, but he had to go anyway. At the beach, Arthur met lots of people he was interested in.
Arthur goes to the beach. Arthur has never been to the beach. His friends tell him that it is very hot. He decides to go to the beach. He enjoys his day at the beach. Now Arthur loves the beach.
Arthur goes to the beach. Arthur is so bored one day. He decides to go to the beach. He sees a nice, sunny beach. Arthur enjoys his day at the beach. Arthur is happy that he found a good day to be bored.
{'top_p': 0.9, 'top_k': 50, 'temperature': 1, 'repetition_penalty': 1.05}
Arthur goes to the beach. Arthur is out on a day of vacation. He decides to take his girlfriend out to the beach. The two surf. They surf all day long. After the sun comes up they relax on a beach blanket.
Arthur goes to the beach. Arthur was feeling very bored one day. He decided he wanted to swim in the ocean. He went to the beach to feel like he was in the ocean. When he got to the beach he was surprised how warm it was. Arthur immediately went back home and went to bed.
Arthur goes to the beach. Arthur has never been to the beach before. He is excited but also nervous about swimming. He boards his car and goes to the ocean. At first he does not like it. However, after a while, he loves the beach.
Arthur goes to the beach. Arthur was planning on going to the beach with friends. Arthur decided that he would go to the beach. When Arthur arrived, there were too many cars for him. Arthur could not see where his friends were. Arthur realized he forgot his sunscreen.
Arthur goes to the beach. Arthur is on vacation. He heads out to the ocean. Arthur spends most of the time swimming. Arthur falls asleep on the beach. He gets up the next day and heads home.
{'top_p': 0.9, 'top_k': 40, 'temperature': 0.8, 'repetition_penalty': 1.1}
Arthur goes to the beach. Arthur is going on a trip. He decides to take his girlfriend Mary with him. They decide to go to the beach. When Arthur gets there he realizes that it's too hot. His girlfriend has no choice but to stay home.
Arthur goes to the beach. Arthur is on vacation in the beach. He enjoys taking his swim. However, a storm comes and knocks Arthur's umbrella off of him. Arthur rushes to get it back. He can't swim after that.
Arthur goes to the beach. Arthur had always wanted to go to the beach. He saved up all his money for a trip to the beach. Arthur finally decided to go on vacation. While at the beach he fell in love with the water. When he got home, he was happy he went.
Arthur goes to the beach. Arthur was bored one day so he decided to go to the beach. He got a towel and swimsuit to wear and went out on the water. When Arthur arrived at the beach it was very hot. However, when he stepped into the ocean, it was a beautiful sunny day. Arthur was glad that he chose to spend his day at the beach.
Arthur goes to the beach. Arthur is on a long plane trip. He has been waiting for a very long time to finally go to the beach. Finally the plane lands and Arthur boards the plane. On board he sees beautiful ocean and decides to stay there. After landing he spends the rest of the day relaxing by the water.
{'top_p': 0.9, 'top_k': 40, 'temperature': 0.6, 'repetition_penalty': 1.15}
Arthur goes to the beach. Arthur is on a vacation with his family. His family decides to go to the beach. They spend a lot of time at the beach. Arthur has a great day at the beach. He will never forget that trip!
Arthur goes to the beach. Arthur is bored on a rainy day at work. He decides he needs some fun time. He heads out to the ocean. At first Arthur does not like it. However, after a while he finds that the water is very relaxing.
Arthur goes to the beach. Arthur is bored on a Friday night. He decides he would like to go to the beach. He calls his friend and asks him if he wants to come with him. His friend agrees to take Arthur to the beach. They have a great time at the beach.
Arthur goes to the beach. Arthur loved the ocean. One day, he decided to go for a walk on the beach. He walked down the beach and saw many beautiful flowers. Then, he noticed a seagull flying overhead. Arthur went back home and told his mother about the bird.
Arthur goes to the beach. Arthur loved going to the beach. He had a lot of fun at the beach. One day, Arthur went to the beach and got sand in his eyes. Arthur realized that he was not wearing sunscreen. Arthur went home with red spots on his face from the sand.
{'top_p': 0.9, 'top_k': 40, 'temperature': 0.4, 'repetition_penalty': 1.2}
Arthur goes to the beach. Arthur was a very happy boy who loved going to the beach. One day, Arthur's mom told him she had an idea for him. She said that he could take his favorite toy and play in the ocean! He went to the beach with his favorite toy and played all day long. Now, Arthur loves the beach just as much as ever.
Arthur goes to the beach. Arthur was a very lazy boy who never did anything. One day his mom took him to the beach. He played in the water and sunbathed for hours. When it was time to go home, he went with his mother. His mom brought him back home and Arthur slept all day!
Arthur goes to the beach. Arthur is bored one day and decides he needs a vacation. He calls his friends up to go with him to the beach. They all agree that it would be fun to spend time together. When they get there, Arthur spends most of his time swimming. He had a great trip at the beach!
Arthur goes to the beach. Arthur is bored one day and decides to go to the beach. He gets his towel, sunscreen and some sunblock. When he arrives at the beach, it's very hot outside. Finally Arthur finds a spot on the sand that isn't so hot. Now Arthur can enjoy the rest of his day!
Arthur goes to the beach. Arthur is bored at home. He decides he needs a change of scenery. He calls his friend and asks if they can go to the beach. His friends agree to go with him. They spend the day playing in the ocean together.
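The `temperature` and `repetition_penalty` settings in the decoding configurations above can be illustrated with a small self-contained sketch. This is a toy illustration of the usual logit transforms, not the exact `transformers` implementation:

```python
import math

def apply_temperature(logits, temperature):
    # Scale logits by 1/temperature: lower temperature sharpens the distribution.
    return [l / temperature for l in logits]

def apply_repetition_penalty(logits, generated_ids, penalty):
    # CTRL-style penalty: divide positive logits (multiply negative ones)
    # for tokens that already appeared in the generated sequence.
    out = list(logits)
    for i in set(generated_ids):
        out[i] = out[i] / penalty if out[i] > 0 else out[i] * penalty
    return out

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5, -1.0]
sharp = softmax(apply_temperature(logits, 0.4))
flat = softmax(apply_temperature(logits, 1.0))
# Lower temperature concentrates probability mass on the top token.
print(sharp[0] > flat[0])  # True
```

With `repetition_penalty=1.2`, a token already generated (say token 0) has its logit reduced from 2.0 to about 1.67, making immediate repeats less likely.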
|
mesolitica/pretrained-wav2vec2-mini-mixed | 748a6cae90a2c5ebccde0cf8983ecf7adcf4084b | 2022-06-15T16:24:17.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"pretraining",
"transformers",
"generated_from_keras_callback",
"model-index"
] | null | false | mesolitica | null | mesolitica/pretrained-wav2vec2-mini-mixed | 0 | null | transformers | 37,966 | ---
tags:
- generated_from_keras_callback
model-index:
- name: pretrained-wav2vec2-mini-mixed
results: []
---
# pretrained-wav2vec2-mini-mixed
A Wav2Vec2 MINI-size model pretrained on https://github.com/huseinzol05/malaya-speech/tree/master/data/mixed-stt; TensorBoard event files are also included in this repository.
This model was pretrained on three languages:
1. Malay
2. Singlish
3. Mandarin
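For context on what the pretrained encoder produces: a Wav2Vec2 convolutional feature encoder downsamples raw 16 kHz audio to roughly 49 frames per second. A minimal sketch of that computation, assuming the standard base-size kernel/stride configuration (this MINI variant's feature encoder may be configured differently):

```python
def conv_out_len(n_samples, kernel, stride):
    # Output length of a 1-D convolution with no padding.
    return (n_samples - kernel) // stride + 1

def wav2vec2_frames(n_samples,
                    kernels=(10, 3, 3, 3, 3, 2, 2),
                    strides=(5, 2, 2, 2, 2, 2, 2)):
    # Chain the seven conv layers of the standard Wav2Vec2 feature encoder.
    for k, s in zip(kernels, strides):
        n_samples = conv_out_len(n_samples, k, s)
    return n_samples

print(wav2vec2_frames(16000))  # 49 frames for 1 second of 16 kHz audio
```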
**This model was trained on a single RTX 3090 Ti with 24 GB of VRAM, provided by https://mesolitica.com/**. |
huggingtweets/arthur_rimbaud | 706656644da43701418d6e2fbee0831ccfc7ab6a | 2022-06-07T13:46:36.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/arthur_rimbaud | 0 | null | transformers | 37,967 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/3077349437/46e19fdb6614ff10d09d353a07b75d60_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Arthur Rimbaud</div>
<div style="text-align: center; font-size: 14px;">@arthur_rimbaud</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.
![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Arthur Rimbaud.
| Data | Arthur Rimbaud |
| --- | --- |
| Tweets downloaded | 423 |
| Retweets | 49 |
| Short tweets | 6 |
| Tweets kept | 368 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1oytr5hf/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @arthur_rimbaud's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1kk1xq6s) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1kk1xq6s/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/arthur_rimbaud')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
|
gloomyworm/DialoGPT-small-ortho | 67f15ac9eba0aa34c5b4c8e3707d71b63af4bfff | 2022-06-07T14:08:23.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | gloomyworm | null | gloomyworm/DialoGPT-small-ortho | 0 | null | transformers | 37,968 | ---
tags:
- conversational
---
# Ortho DialoGPT Model |
huggingtweets/mizefian | 4ec86f5f80a2c7528eb7c839525d08eca01d347f | 2022-06-07T16:10:44.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/mizefian | 0 | null | transformers | 37,969 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1488896240083517453/Bu0lDApj_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Mizefian 🇺🇦</div>
<div style="text-align: center; font-size: 14px;">@mizefian</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.
![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Mizefian 🇺🇦.
| Data | Mizefian 🇺🇦 |
| --- | --- |
| Tweets downloaded | 1265 |
| Retweets | 188 |
| Short tweets | 355 |
| Tweets kept | 722 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/x49ahgym/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mizefian's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/xdjgjn3p) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/xdjgjn3p/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/mizefian')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
|
kozlovtsev/DialoGPT-medium-harrypotter | 1ffee129ac3995656c51452ad2b4041112d5b254 | 2022-06-07T18:33:56.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | kozlovtsev | null | kozlovtsev/DialoGPT-medium-harrypotter | 0 | null | transformers | 37,970 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
nestoralvaro/mt5-base-finetuned-xsum-data_prep_2021_12_26___t22027_162754.csv___topic_text_google_mt5_base | d3573612b410dc1fc0d7ecdc3082c08bd223eb4a | 2022-06-08T01:37:06.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | nestoralvaro | null | nestoralvaro/mt5-base-finetuned-xsum-data_prep_2021_12_26___t22027_162754.csv___topic_text_google_mt5_base | 0 | null | transformers | 37,971 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-base-finetuned-xsum-data_prep_2021_12_26___t22027_162754.csv___topic_text_google_mt5_base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-finetuned-xsum-data_prep_2021_12_26___t22027_162754.csv___topic_text_google_mt5_base
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Rouge1: 0.7721
- Rouge2: 0.0698
- Rougel: 0.7711
- Rougelsum: 0.773
- Gen Len: 6.329
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 0.0 | 1.0 | 131773 | nan | 0.7721 | 0.0698 | 0.7711 | 0.773 | 6.329 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
huggingtweets/jeanswayy | f1ae163f563797e2ff191ee622a9c5c191b610b4 | 2022-06-07T18:40:15.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/jeanswayy | 0 | null | transformers | 37,972 | ---
language: en
thumbnail: http://www.huggingtweets.com/jeanswayy/1654627123103/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1448289036171309068/LiGzmPgt_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">j e a n 🤷🏻♀️</div>
<div style="text-align: center; font-size: 14px;">@jeanswayy</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.
![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from j e a n 🤷🏻♀️.
| Data | j e a n 🤷🏻♀️ |
| --- | --- |
| Tweets downloaded | 2697 |
| Retweets | 1017 |
| Short tweets | 240 |
| Tweets kept | 1440 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/16duoq0d/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jeanswayy's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2ds4fwqc) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2ds4fwqc/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/jeanswayy')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/irodori7 | 747027b1aa58d623611f8cb1bb2049d30e17bf48 | 2022-06-07T18:27:35.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/irodori7 | 0 | null | transformers | 37,973 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/948537441429803009/NgUotYet_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">たつき/irodori</div>
<div style="text-align: center; font-size: 14px;">@irodori7</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.
![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from たつき/irodori.
| Data | たつき/irodori |
| --- | --- |
| Tweets downloaded | 1494 |
| Retweets | 224 |
| Short tweets | 1087 |
| Tweets kept | 183 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2641xmb8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @irodori7's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3pehfpkr) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3pehfpkr/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/irodori7')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
|
jppaolim/v60_Large_2E | fdf018d9f0f76a7d8669301a09cdb50bdc3aeb2d | 2022-06-07T19:15:12.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | jppaolim | null | jppaolim/v60_Large_2E | 0 | null | transformers | 37,974 | # My Story model
{'top_p': 0.9, 'top_k': 50, 'temperature': 1, 'repetition_penalty': 1}
Arthur goes to the beach. Arthur is in his beach day. He decides to go to the beach. He gets out on the board. He puts on his swimsuit. He goes to the beach.
Arthur goes to the beach. Arthur is walking on the beach. He notices the water has gone very dirty. He gets out of his sand. He realizes that he should buy some new sand. He heads back to shore.
Arthur goes to the beach. Arthur always wanted to go to the beach. He always wished that he could go on the beach. One day he asked his dad for the beach trip. His dad agreed that he went to the beach. Arthur was so happy that he was going to the beach.
Arthur goes to the beach. Arthur went to the beach last week. His wife thought that he was going to the beach. She asked him to stop by and take a look at the ocean. Arthur said he was going to the ocean. His wife was not happy.
Arthur goes to the beach. Arthur goes to the beach. He needs some sand for his feet. He needs to get sand for his feet. Arthur gets sand from the sand beach. Arthur goes to the beach with sand in his feet.
{'top_p': 0.9, 'top_k': 50, 'temperature': 1, 'repetition_penalty': 1.05}
Arthur goes to the beach. Arthur wanted to go to the beach with his friends. His friends met up at the beach. They found out that it was too expensive for them to do. They then decided to buy a ticket to go by themselves. Arthur was not happy that he didn't get to go on the beach.
Arthur goes to the beach. Arthur is on vacation. He decides to go to the beach. He arrives at the beach. He spends his day swimming in the ocean. Arthur has a great time at the beach.
Arthur goes to the beach. Arthur was playing in the sand. He decided he wanted to go to the beach. The sand turned into muddy water. Arthur put his feet on the sand. He went back home.
Arthur goes to the beach. Arthur always loved to go to the beach. The ocean is Arthur's favorite place to go. He likes to eat on the beach. He wants to see his parents again this year. He takes his mom to the beach for his birthday.
Arthur goes to the beach. Arthur has always wanted to go to the beach. However, he has not ever been to the beach. Finally, one day he decides to go to the beach. He finally takes a nice day in the beach. Arthur is happy that he decided to go to the beach.
{'top_p': 0.9, 'top_k': 40, 'temperature': 0.8, 'repetition_penalty': 1.1}
Arthur goes to the beach. Arthur was looking for a job. He decided to go to the beach. The ocean was a good place for him. He loved the sun and sand. Arthur went to the beach for work.
Arthur goes to the beach. Arthur went to the beach for a day. He played in the water. He had a good time. He found out that it was really hot today. He put on sunscreen and went home.
Arthur goes to the beach. Arthur wants to go to the beach. He wants to have fun with his friends. He arrives at the beach and goes swimming. He spends all day playing in the ocean. Arthur is happy that he spent time at the beach.
Arthur goes to the beach. Arthur went to the beach with his family. His family wanted to go to the beach. Arthur got out of the water and realized he did not want to go. Arthur's family made fun of him because they want to go to the beach. Arthur was embarrassed by his family for being so indecisive.
Arthur goes to the beach. Arthur has been watching his friends go to the beach. He was not happy with it though. His friend didn't tell him that he is going too. So Arthur had to leave. The two went to the beach together.
{'top_p': 0.9, 'top_k': 40, 'temperature': 0.6, 'repetition_penalty': 1.15}
Arthur goes to the beach. Arthur was on a vacation in California. He decided he wanted to go to the beach. When Arthur arrived at the beach, it was very crowded. Arthur realized that there were not many people at the beach. Arthur went home and cried about it.
Arthur goes to the beach. Arthur is at the beach with his friends. He has been playing in the sand for hours. His friends tell him he needs to go home and get a tan. They all leave the beach together. Arthur feels sad that he hasn't gotten a tan.
Arthur goes to the beach. Arthur was walking down the beach with his girlfriend. They had decided to go for a swim in the ocean. While Arthur and his girlfriend were swimming, an alligator appeared on the shore. He tried to swim back but it was too hot for him. Arthur had to leave the beach without his girlfriend.
Arthur goes to the beach. Arthur was looking for a place to go to the beach. He went to the local store and bought some sunscreen. He put on his swimsuit and went out into the ocean. After he got out of the water, he felt very sunburned. Arthur had never been to the beach before so he decided to stay at home.
Arthur goes to the beach. Arthur was a young boy who wanted to go to the beach. He went to the beach and laid on his towel. Arthur had been so tired from playing with his friends. He decided to leave the beach to go home. Arthur was exhausted but happy he had made it to the beach.
{'top_p': 0.9, 'top_k': 40, 'temperature': 0.4, 'repetition_penalty': 1.2}
Arthur goes to the beach. Arthur is going on a trip with his friends. He has been looking for days and finally finds it. They have packed up all their gear and are ready to go. Arthur gets in the car and drives off into the ocean.
Arthur goes to the beach. Arthur is going to the beach with his friends. They decide that they want to go swimming in the ocean. When Arthur gets there, he realizes it's not a good day for him. He decides to stay at home and watch television instead. Arthur feels sad that he doesn't have fun at the beach.
Arthur goes to the beach. Arthur is going to the beach with his friends. His friends want him to go swimming in the ocean. They all agree that he should go swimming. He agrees and they leave for the beach. Arthur spends the day at the beach.
Arthur goes to the beach. Arthur is going on a trip with his friends. They are going to the beach. He has never been before so he doesn't know what to do. His friend tells him that he should go swimming. Arthur agrees and goes to the beach.
Arthur goes to the beach. Arthur is going to the beach with his family. He has been waiting for this day all year long. His mother tells him that he needs to get out of the sand. They go to the beach and Arthur gets sand in his eyes. He doesn't want to leave the beach but he does anyway.
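The `top_p` and `top_k` values in the sampling configurations above control nucleus and top-k filtering. A small self-contained sketch of the idea on a toy distribution (an illustration only; the `transformers` implementation differs in detail, e.g. in how the cutoff token is handled):

```python
def top_k_top_p_filter(probs, top_k, top_p):
    # Keep the top_k most likely tokens, then trim to the smallest set whose
    # cumulative probability reaches top_p; renormalize the survivors.
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:top_k]
    kept, cum = [], 0.0
    for i in ranked:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    total = sum(probs[i] for i in kept)
    return {i: probs[i] / total for i in kept}

probs = [0.5, 0.25, 0.15, 0.07, 0.03]
filtered = top_k_top_p_filter(probs, top_k=40, top_p=0.9)
# Tokens 0..2 cover 0.90 of the mass, so tokens 3 and 4 are dropped.
print(sorted(filtered))  # [0, 1, 2]
```

The next token is then sampled from the renormalized distribution, which is why lower `top_p` values produce more conservative stories.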
|
mezes/finetuned-mt5 | 244dc258e0f3431f782d3cd840711fc3c9560bb4 | 2022-06-09T12:34:27.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | mezes | null | mezes/finetuned-mt5 | 0 | null | transformers | 37,975 | Entry not found |
huggingtweets/jpegmafia | 248b3f2e9ef08384c3a1d7cc44802bbb93e7d7a2 | 2022-06-07T20:33:58.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/jpegmafia | 0 | null | transformers | 37,976 | ---
language: en
thumbnail: http://www.huggingtweets.com/jpegmafia/1654634032817/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1510648677995581453/13zowZ1f_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">JPEGMAFIA</div>
<div style="text-align: center; font-size: 14px;">@jpegmafia</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.
![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from JPEGMAFIA.
| Data | JPEGMAFIA |
| --- | --- |
| Tweets downloaded | 3114 |
| Retweets | 1181 |
| Short tweets | 495 |
| Tweets kept | 1438 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/ub5q17i2/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jpegmafia's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/ihd6e39h) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/ihd6e39h/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/jpegmafia')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/bladeecity-lil_icebunny | 4c821e0c7f53d5be7e48f1978e8bb874d7cecc3b | 2022-06-07T20:42:03.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/bladeecity-lil_icebunny | 0 | null | transformers | 37,977 | ---
language: en
thumbnail: http://www.huggingtweets.com/bladeecity-lil_icebunny/1654634518665/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1194734625547010048/NB1V0fMb_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1501634135378391044/6FiRJ7RP_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">JAMES FERRARO & Aim Nothyng</div>
<div style="text-align: center; font-size: 14px;">@bladeecity-lil_icebunny</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.
![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from JAMES FERRARO & Aim Nothyng.
| Data | JAMES FERRARO | Aim Nothyng |
| --- | --- | --- |
| Tweets downloaded | 3184 | 1619 |
| Retweets | 167 | 321 |
| Short tweets | 926 | 492 |
| Tweets kept | 2091 | 806 |
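The "Tweets kept" row is what remains after the filtering above; as a quick sanity check, a minimal sketch of that arithmetic (assuming kept = downloaded − retweets − short, which matches the table — the actual huggingtweets preprocessing may apply further cleaning):

```python
def tweets_kept(downloaded: int, retweets: int, short: int) -> int:
    # Assumed filtering rule: drop retweets and short tweets, keep the rest.
    return downloaded - retweets - short

# Figures from the table above
print(tweets_kept(3184, 167, 926))  # JAMES FERRARO -> 2091
print(tweets_kept(1619, 321, 492))  # Aim Nothyng -> 806
```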
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1iiufrfr/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @bladeecity-lil_icebunny's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1o094svv) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1o094svv/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/bladeecity-lil_icebunny')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
|
renjithks/layoutlmv1-cord-ner | adf0d7be43b73dce199f7bee9f04831a47bb6fc0 | 2022-06-07T20:59:30.000Z | [
"pytorch",
"tensorboard",
"layoutlm",
"token-classification",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | token-classification | false | renjithks | null | renjithks/layoutlmv1-cord-ner | 0 | null | transformers | 37,978 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: layoutlmv1-cord-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv1-cord-ner
This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1438
- Precision: 0.9336
- Recall: 0.9453
- F1: 0.9394
- Accuracy: 0.9767
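The F1 reported above is the harmonic mean of precision and recall; a minimal sketch verifying that relation against the reported numbers:

```python
def f1_score(precision: float, recall: float) -> float:
    # F1 is the harmonic mean of precision and recall.
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(0.9336, 0.9453), 4))  # -> 0.9394, matching the reported F1
```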
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 113 | 0.1251 | 0.9054 | 0.9184 | 0.9119 | 0.9651 |
| No log | 2.0 | 226 | 0.1343 | 0.9002 | 0.9261 | 0.9130 | 0.9635 |
| No log | 3.0 | 339 | 0.1264 | 0.9189 | 0.9357 | 0.9272 | 0.9647 |
| No log | 4.0 | 452 | 0.1235 | 0.9122 | 0.9376 | 0.9248 | 0.9681 |
| 0.1371 | 5.0 | 565 | 0.1353 | 0.9378 | 0.9405 | 0.9391 | 0.9717 |
| 0.1371 | 6.0 | 678 | 0.1431 | 0.9233 | 0.9357 | 0.9295 | 0.9709 |
| 0.1371 | 7.0 | 791 | 0.1473 | 0.9289 | 0.9405 | 0.9347 | 0.9759 |
| 0.1371 | 8.0 | 904 | 0.1407 | 0.9473 | 0.9491 | 0.9482 | 0.9784 |
| 0.0106 | 9.0 | 1017 | 0.1440 | 0.9301 | 0.9453 | 0.9376 | 0.9769 |
| 0.0106 | 10.0 | 1130 | 0.1438 | 0.9336 | 0.9453 | 0.9394 | 0.9767 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
huggingtweets/0pn-lil_icebunny | 8d880f6c3295bb2a0427bb16a127cc4deef0dbaf | 2022-06-07T20:49:32.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/0pn-lil_icebunny | 0 | null | transformers | 37,979 | ---
language: en
thumbnail: http://www.huggingtweets.com/0pn-lil_icebunny/1654634967211/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1331413261070307329/N7du8baD_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1194734625547010048/NB1V0fMb_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">oneohtrix point never & JAMES FERRARO</div>
<div style="text-align: center; font-size: 14px;">@0pn-lil_icebunny</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.
![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from oneohtrix point never & JAMES FERRARO.
| Data | oneohtrix point never | JAMES FERRARO |
| --- | --- | --- |
| Tweets downloaded | 1862 | 3184 |
| Retweets | 361 | 167 |
| Short tweets | 417 | 926 |
| Tweets kept | 1084 | 2091 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/btu8y5w7/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @0pn-lil_icebunny's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2fg2ki8d) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2fg2ki8d/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/0pn-lil_icebunny')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
|
iNceptioN/dummy_model | 823e71670accf02de392c727bd19d3428f2a2952 | 2022-06-07T22:49:36.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | iNceptioN | null | iNceptioN/dummy_model | 0 | null | transformers | 37,980 | Model for filling in masked tokens in French |
lindsayng/t5-base-lindsaytest-bias | eb6feee9ef227e049278a42b72713b2ecccdc951 | 2022-06-07T22:51:12.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lindsayng | null | lindsayng/t5-base-lindsaytest-bias | 0 | null | transformers | 37,981 | Entry not found |
huggingtweets/dwr-elonmusk-maccaw | b630c3c45782c1cb4cf5d7fdc72d12e5ec235a4d | 2022-06-07T23:37:18.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/dwr-elonmusk-maccaw | 0 | null | transformers | 37,982 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1529956155937759233/Nyn1HZWF_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1418421541054918657/ng4Kyv5G_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1518670972559130624/-G9gNsOp_400x400.png')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Elon Musk & Alex MacCaw & Dan Romero</div>
<div style="text-align: center; font-size: 14px;">@dwr-elonmusk-maccaw</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.
![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Elon Musk & Alex MacCaw & Dan Romero.
| Data | Elon Musk | Alex MacCaw | Dan Romero |
| --- | --- | --- | --- |
| Tweets downloaded | 3200 | 3244 | 3126 |
| Retweets | 146 | 255 | 2 |
| Short tweets | 956 | 258 | 333 |
| Tweets kept | 2098 | 2731 | 2791 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3ritkn2s/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dwr-elonmusk-maccaw's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1o2qtjkw) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1o2qtjkw/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dwr-elonmusk-maccaw')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
|
jppaolim/v61_Large_2E | c38c348bb8068b36b802d6e7b9dd6eba8cd2bc4f | 2022-06-08T01:06:26.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | jppaolim | null | jppaolim/v61_Large_2E | 0 | null | transformers | 37,983 | # My Story model
{'top_p': 0.9, 'top_k': 50, 'temperature': 1, 'repetition_penalty': 1}
Arthur goes to the beach. Arthur is in his beach house. He decides to lay out. Arthur wants to lay out on the beach. He puts on his favorite sandals. Arthur lays on the beach.
Arthur goes to the beach. Arthur is walking on a beach. He notices a family enjoying the beach. He offers to swim with them. The family swims with him. Arthur and the family enjoy the beach.
Arthur goes to the beach. Arthur always had a lot of fun at the beach. One day his friends invite him to go swimming. Arthur accepts their invitation and agrees to go swimming. On the way to the beach Arthur gets into an argument with a boy. He leaves the beach disappointed but happy.
Arthur goes to the beach. Arthur has never been to the beach. His friends tell him about it and he decides to go. He parks his car, packs up his bags and walks to the beach. Arthur looks at the beach and begins to take pictures. He returns home and is very happy.
Arthur goes to the beach. Arthur is so tired of not seeing the sun. He finally decides to go the beach. He walks down the beach. He sees a large sandcastle and waves crashing. He is finally able to see the sun.
{'top_p': 0.9, 'top_k': 50, 'temperature': 1, 'repetition_penalty': 1.05}
Arthur goes to the beach. Arthur never liked the sand at the beach. He was sure it would make him ill. One day his friends convinced him to go to the beach. Once there, Arthur saw many beautiful shells on the beach. Arthur decided that he enjoyed going to the beach!
Arthur goes to the beach. Arthur loves going to the beach with his grandfather. Arthur's grandfather always brings his fishing pole. Today is Arthur's first time seeing his grandfather's fishing pole. He can't believe how much he loves his grandfather's fishing pole. Arthur can't wait for his grandfather's fishing pole next weekend.
Arthur goes to the beach. Arthur loves going to the beach. This weekend he goes for the first time. He decides he wants to go swimming. He finds a beautiful spot for his swimming excursion. He is very glad he went.
Arthur goes to the beach. It was a hot summer day. Arthur had forgotten his sunscreen and he was sweating profusely. He decided to take a dip in the ocean instead of staying inside. He laid on the sand and relaxed until it cooled off. Arthur was glad that he didn't go inside all day!
Arthur goes to the beach. Arthur was bored on a week long vacation. So he decided to head to the beach. He walked along the shore and jumped in the water. He jumped off and ran towards his friends. Arthur had so much fun on the beach that day.
{'top_p': 0.9, 'top_k': 40, 'temperature': 0.8, 'repetition_penalty': 1.1}
Arthur goes to the beach. One day Arthur was out on his boat in the ocean. He noticed a big wave coming at him from the north. He decided to swim to shore and waited for it to pass. When it did he jumped into the water. The waves were so large that Arthur drowned and never returned home.
Arthur goes to the beach. Arthur loves going to the beach. He usually stays at his house. One day, he decides he wants to go to the beach. He buys a new life preserver and sets off for the beach. Finally he finds the perfect spot on the sand and has fun.
Arthur goes to the beach. Arthur was a very athletic boy. He loved going to the beach and swimming. One day, he decided to take a swim in the ocean. He swam for hours and did not feel tired at all. Later that day, Arthur swam back to shore with his friends!
Arthur goes to the beach. Arthur wanted to go to the beach. He had never been before. He asked his friends if they would go with him. They all agreed and they went together. At the end of the day, Arthur felt much better about the trip.
Arthur goes to the beach. Arthur is feeling lonely at home. He decides he needs a way to make new friends. He decides to go to the beach. At the beach he meets some cool people. Arthur has made new friends at the beach.
{'top_p': 0.9, 'top_k': 40, 'temperature': 0.6, 'repetition_penalty': 1.15}
Arthur goes to the beach. One day Arthur went to the beach with his friends. He played in the sand for a while. Then he sat and watched the waves roll in. When it was time to go home, Arthur's friends all left him. Arthur decided that he would never go back to the beach.
Arthur goes to the beach. Arthur had always wanted to go to the beach. He finally saved up enough money for a trip to the beach. On his first day at the beach he got lost. The next day he found the beach and was very happy. He is now planning on going back every weekend.
Arthur goes to the beach. One day, Arthur decides he wants to go to the beach. He drives to the beach and takes a taxi to get there. When he gets there, he parks his car. Then, he walks around for a while. Finally, he enjoys the sunset at the beach.
Arthur goes to the beach. Arthur was on vacation in Florida. He decided to go to the beach. He saw a girl that he liked and went up to her. She said yes and they spent the day together. They ended up dating for three years!
Arthur goes to the beach. Arthur was going on a vacation. He needed a place to stay. The beach was his first choice. He found one nearby. It was perfect for him.
{'top_p': 0.9, 'top_k': 40, 'temperature': 0.4, 'repetition_penalty': 1.2}
Arthur goes to the beach. Arthur is a very adventurous boy who loves going to the ocean. He decides he wants to go swimming at the local pool. At the local pool, Arthur swims for hours in the water. Finally, it's time to get out of the pool and go home. Now Arthur has a great day at the beach!
Arthur goes to the beach. One day Arthur was on vacation in Florida. He decided he wanted to go to the beach. At first it seemed like a long trip but then he got there. There were so many beautiful beaches! Finally, after an hour of walking, he arrived at the beach.
Arthur goes to the beach. One day Arthur decided he wanted to go to the beach. He packed his surfboard and some sunscreen. Then he went out on the water. When he got there, it was very sunny. Arthur had a great time at the beach!
Arthur goes to the beach. Arthur is on vacation in Florida. He decides he wants to go to the beach. At the beach, Arthur sees a beautiful sunset. He enjoys his day at the beach. Arthur returns home happy that he went to the beach.
Arthur goes to the beach. Arthur is a very adventurous person. He decides that he wants to go to the beach. He packs his bag and leaves for the beach. At the beach, Arthur sees many beautiful beaches. Finally, Arthur returns home happy with his trip.
|
joshanashakya/500_mini_codebert_sourcecode_nmt_ja2pn_50E_5e-05LR | efc75c0a4146fdeb66d7f6af44e7484876ccdcf1 | 2022-06-08T01:25:20.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | joshanashakya | null | joshanashakya/500_mini_codebert_sourcecode_nmt_ja2pn_50E_5e-05LR | 0 | null | transformers | 37,984 | Entry not found |
huggingtweets/benny_thejet_11 | 8d5effaced6d4dc6c50764996e4c7d294f050f85 | 2022-06-08T02:50:27.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/benny_thejet_11 | 0 | null | transformers | 37,985 | ---
language: en
thumbnail: http://www.huggingtweets.com/benny_thejet_11/1654656621512/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1328273166599217152/TUO71Spk_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Benny “The Jet”</div>
<div style="text-align: center; font-size: 14px;">@benny_thejet_11</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.
![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Benny “The Jet”.
| Data | Benny “The Jet” |
| --- | --- |
| Tweets downloaded | 338 |
| Retweets | 24 |
| Short tweets | 53 |
| Tweets kept | 261 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2dvxsn3h/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @benny_thejet_11's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3b7y2vf9) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3b7y2vf9/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/benny_thejet_11')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
|
nice/wav2vec2-base-timit-demo-google-colab | e812af806ce1d6c98105e1e8bf690b5387b150b0 | 2022-06-08T05:29:21.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | nice | null | nice/wav2vec2-base-timit-demo-google-colab | 0 | 1 | transformers | 37,986 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-google-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-google-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5155
- Wer: 0.3388
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
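The linear scheduler with 1000 warmup steps ramps the learning rate up to its base value and then decays it linearly to zero; a minimal sketch of that schedule (`total_steps=14500` is inferred from the training log below, not stated on the card):

```python
def linear_schedule_lr(step, base_lr=1e-4, warmup_steps=1000, total_steps=14500):
    # Linear warmup to base_lr, then linear decay to zero.
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(linear_schedule_lr(500))    # mid-warmup: half the base rate
print(linear_schedule_lr(1000))   # peak: the base rate
print(linear_schedule_lr(14500))  # end of training: 0.0
```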
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.5822 | 1.0 | 500 | 2.4127 | 1.0 |
| 0.9838 | 2.01 | 1000 | 0.5401 | 0.5363 |
| 0.4308 | 3.01 | 1500 | 0.4380 | 0.4592 |
| 0.3086 | 4.02 | 2000 | 0.4409 | 0.4503 |
| 0.2324 | 5.02 | 2500 | 0.4148 | 0.4041 |
| 0.202 | 6.02 | 3000 | 0.4214 | 0.3882 |
| 0.1595 | 7.03 | 3500 | 0.4489 | 0.3875 |
| 0.1383 | 8.03 | 4000 | 0.4225 | 0.3858 |
| 0.1246 | 9.04 | 4500 | 0.4512 | 0.3846 |
| 0.104 | 10.04 | 5000 | 0.4676 | 0.3875 |
| 0.0949 | 11.04 | 5500 | 0.4389 | 0.3683 |
| 0.0899 | 12.05 | 6000 | 0.4964 | 0.3803 |
| 0.0854 | 13.05 | 6500 | 0.5397 | 0.3798 |
| 0.0728 | 14.06 | 7000 | 0.4823 | 0.3666 |
| 0.065 | 15.06 | 7500 | 0.5187 | 0.3648 |
| 0.0573 | 16.06 | 8000 | 0.5378 | 0.3715 |
| 0.0546 | 17.07 | 8500 | 0.5239 | 0.3705 |
| 0.0573 | 18.07 | 9000 | 0.5094 | 0.3554 |
| 0.0478 | 19.08 | 9500 | 0.5334 | 0.3657 |
| 0.0673 | 20.08 | 10000 | 0.5300 | 0.3528 |
| 0.0434 | 21.08 | 10500 | 0.5314 | 0.3528 |
| 0.0363 | 22.09 | 11000 | 0.5540 | 0.3512 |
| 0.0326 | 23.09 | 11500 | 0.5514 | 0.3510 |
| 0.0332 | 24.1 | 12000 | 0.5439 | 0.3492 |
| 0.0275 | 25.1 | 12500 | 0.5273 | 0.3432 |
| 0.0267 | 26.1 | 13000 | 0.5068 | 0.3430 |
| 0.0243 | 27.11 | 13500 | 0.5131 | 0.3388 |
| 0.0228 | 28.11 | 14000 | 0.5247 | 0.3406 |
| 0.0227 | 29.12 | 14500 | 0.5155 | 0.3388 |
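The Wer column reports word error rate: the word-level edit distance between the reference transcript and the model output, divided by the reference length. A minimal pure-Python sketch of the metric (evaluation here was presumably done with a library such as jiwer; this is only illustrative):

```python
def wer(reference: str, hypothesis: str) -> float:
    # Word error rate via Levenshtein distance over word tokens.
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat", "the cat sat on"))  # one insertion over three words
```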
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
joshanashakya/mini_codebert_sourcecode_nmt_ja2pn_50E_5e-05LR | b9d4dbc167d9e554a2e2498255a003773ef5a75f | 2022-06-08T03:31:46.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | joshanashakya | null | joshanashakya/mini_codebert_sourcecode_nmt_ja2pn_50E_5e-05LR | 0 | null | transformers | 37,987 | Entry not found |
joshanashakya/mini_codebert_sourcecode_nmt_pn2ja_50E_5e-05LR | 7192de0eb9e8f77234e30ed51b19ce624bd26e54 | 2022-06-08T03:32:37.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | joshanashakya | null | joshanashakya/mini_codebert_sourcecode_nmt_pn2ja_50E_5e-05LR | 0 | null | transformers | 37,988 | Entry not found |
huggingtweets/vufewequ | dd7b08aac99cfb43f217df457a12fe2b69936d8d | 2022-06-08T03:59:36.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/vufewequ | 0 | null | transformers | 37,989 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1350929535454359558/lWAfxbn4_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Vu Fewequ</div>
<div style="text-align: center; font-size: 14px;">@vufewequ</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.
![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Vu Fewequ.
| Data | Vu Fewequ |
| --- | --- |
| Tweets downloaded | 175 |
| Retweets | 60 |
| Short tweets | 5 |
| Tweets kept | 110 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3d6nz5jt/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @vufewequ's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1psyqthq) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1psyqthq/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/vufewequ')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
|
steven123/teeth_verify | e34f036a4aadda5125d965bca62eee302de267ad | 2022-06-08T04:02:20.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | steven123 | null | steven123/teeth_verify | 0 | null | transformers | 37,990 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: teeth_verify
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.6666666865348816
---
# teeth_verify
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Good Teeth
![Good Teeth](images/Good_Teeth.jpg)
#### Missing Teeth
![Missing Teeth](images/Missing_Teeth.jpg)
#### Rotten Teeth
![Rotten Teeth](images/Rotten_Teeth.jpg) |
joshanashakya/mini_codebert_sourcecode_nmt_pn2ja_100E_5e-05LR | 068374cc37c870511f2093ce03e5a5c4c05b0480 | 2022-06-08T04:12:19.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | joshanashakya | null | joshanashakya/mini_codebert_sourcecode_nmt_pn2ja_100E_5e-05LR | 0 | null | transformers | 37,991 | Entry not found |
joshanashakya/mini_codebert_sourcecode_nmt_ja2pn_100E_5e-05LR | 9842609ba331091e85b1f792011d7652ccd80761 | 2022-06-08T04:12:37.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | joshanashakya | null | joshanashakya/mini_codebert_sourcecode_nmt_ja2pn_100E_5e-05LR | 0 | null | transformers | 37,992 | Entry not found |
huggingtweets/gnu_amir | 2ca1cbc2a68f4dc6abab714a93e78fb1f9160772 | 2022-06-08T05:23:47.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/gnu_amir | 0 | null | transformers | 37,993 | ---
language: en
thumbnail: http://www.huggingtweets.com/gnu_amir/1654665822752/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1524432360678342656/TVb29KZ0_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">ژوپیتر - Amirhossein</div>
<div style="text-align: center; font-size: 14px;">@gnu_amir</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.
![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from ژوپیتر - Amirhossein.
| Data | ژوپیتر - Amirhossein |
| --- | --- |
| Tweets downloaded | 3225 |
| Retweets | 360 |
| Short tweets | 485 |
| Tweets kept | 2380 |
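The kept count in the table follows directly from the two filtering steps above. A quick sanity check (assuming kept = downloaded − retweets − short tweets):

```python
# Sanity check: tweets kept = downloaded - retweets - short tweets
downloaded, retweets, short = 3225, 360, 485
kept = downloaded - retweets - short
print(kept)  # 2380, matching the table above
```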
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/17lh3jzt/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gnu_amir's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2hzkc54t) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2hzkc54t/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/gnu_amir')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/qiamast | 974fb9fafb66b0904fc1abea70bef128092357b0 | 2022-06-08T05:42:10.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/qiamast | 0 | null | transformers | 37,994 | ---
language: en
thumbnail: http://www.huggingtweets.com/qiamast/1654666925668/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1515664770996715524/UJ44tEP7_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Mahdi🪐</div>
<div style="text-align: center; font-size: 14px;">@qiamast</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.
![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Mahdi🪐.
| Data | Mahdi🪐 |
| --- | --- |
| Tweets downloaded | 1183 |
| Retweets | 17 |
| Short tweets | 101 |
| Tweets kept | 1065 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/t2yplvw1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @qiamast's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2oiurss1) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2oiurss1/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/qiamast')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
|
nestoralvaro/mt5-base-finetuned-xsum-data_prep_2021_12_26___t1_162754.csv___topic_text_google_mt5_base | 07464bb4179f2059da856d05e5542ab7d2e2403f | 2022-06-09T04:30:48.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | nestoralvaro | null | nestoralvaro/mt5-base-finetuned-xsum-data_prep_2021_12_26___t1_162754.csv___topic_text_google_mt5_base | 0 | null | transformers | 37,995 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-base-finetuned-xsum-data_prep_2021_12_26___t1_162754.csv___topic_text_google_mt5_base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-finetuned-xsum-data_prep_2021_12_26___t1_162754.csv___topic_text_google_mt5_base
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Rouge1: 0.8027
- Rouge2: 0.0915
- Rougel: 0.802
- Rougelsum: 0.8026
- Gen Len: 6.3401
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 0.0 | 1.0 | 276732 | nan | 0.8027 | 0.0915 | 0.802 | 0.8026 | 6.3401 |
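The logged step count also gives a rough idea of the training-set size. This is an illustrative back-of-the-envelope estimate only, assuming the listed batch size of 4, a single epoch, and no gradient accumulation:

```python
# Rough training-set size estimate from the logged step count,
# assuming batch size 4, 1 epoch, and no gradient accumulation.
steps, batch_size, epochs = 276732, 4, 1
approx_examples = steps * batch_size // epochs
print(approx_examples)  # roughly 1,106,928 examples
```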
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
joshanashakya/codebert_sourcecode_nmt_pn2ja_50E_5e-05LR | db56fdd62ed67d263ded7b75ca0cd3285d294c5f | 2022-06-08T06:16:41.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | joshanashakya | null | joshanashakya/codebert_sourcecode_nmt_pn2ja_50E_5e-05LR | 0 | null | transformers | 37,996 | Entry not found |
joshanashakya/codebert_sourcecode_nmt_ja2pn_50E_5e-05LR | 9b65756814caa388f2fcc8cbc1bc67fdabed378d | 2022-06-08T06:46:44.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | joshanashakya | null | joshanashakya/codebert_sourcecode_nmt_ja2pn_50E_5e-05LR | 0 | null | transformers | 37,997 | Entry not found |
larryboy825/distilbert-base-uncased-finetuned-imdb | 37398bfbe33e8572ffa275afeeea83623a0b1819 | 2022-06-08T07:32:12.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | larryboy825 | null | larryboy825/distilbert-base-uncased-finetuned-imdb | 0 | null | transformers | 37,998 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0021
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.6836 | 1.0 | 2 | 3.3110 |
| 3.9035 | 2.0 | 4 | 3.2560 |
| 3.9928 | 3.0 | 6 | 2.4306 |
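For a masked-language model like this one, the evaluation loss is often easier to interpret as perplexity via `exp(loss)`. The conversion below is illustrative, not a logged metric:

```python
import math

# Masked-LM cross-entropy loss can be read as perplexity via exp(loss).
# Illustrative conversion from the reported eval loss, not a logged metric.
eval_loss = 3.0021
perplexity = math.exp(eval_loss)
print(round(perplexity, 2))  # about 20.13
```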
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
larryboy825/distilbert-base-uncased-finetuned-imdb-accelerate | 7a5c2fbdf321a8af09e358979f3db888ddfbe98c | 2022-06-08T07:39:10.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | larryboy825 | null | larryboy825/distilbert-base-uncased-finetuned-imdb-accelerate | 0 | null | transformers | 37,999 | Entry not found |