Column schema for this dump (one row per Hugging Face model):

| column | dtype | range / classes |
| --- | --- | --- |
| `modelId` | string | length 4 to 112 |
| `sha` | string | length 40 (fixed) |
| `lastModified` | string | length 24 (fixed) |
| `tags` | sequence | |
| `pipeline_tag` | string | 29 classes |
| `private` | bool | 1 class |
| `author` | string | length 2 to 38 |
| `config` | null | |
| `id` | string | length 4 to 112 |
| `downloads` | float64 | 0 to 36.8M |
| `likes` | float64 | 0 to 712 |
| `library_name` | string | 17 classes |
| `__index_level_0__` | int64 | 0 to 38.5k |
| `readme` | string | length 0 to 186k |
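A minimal sketch of loading and filtering a dump with this schema via the `datasets` library; the repository path is a placeholder, since the source of this export is not named here.

```python
from datasets import load_dataset

# Placeholder path: substitute the actual repository hosting this metadata dump.
ds = load_dataset("your-namespace/hub-model-metadata", split="train")

# Keep text-generation models that ship a non-empty model card.
kept = ds.filter(
    lambda row: row["pipeline_tag"] == "text-generation"
    and row["readme"] not in ("", "Entry not found")
)
print(kept[0]["modelId"], kept[0]["downloads"])
```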

modelId: lindsayng/t5-base-base-fulltrainingset-bias
sha: 18f4bba1050abeb9947332647a6dc858e7553e5a
lastModified: 2022-06-08T08:13:42.000Z
tags: [ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
pipeline_tag: text2text-generation
private: false
author: lindsayng
config: null
id: lindsayng/t5-base-base-fulltrainingset-bias
downloads: 0
likes: null
library_name: transformers
__index_level_0__: 38,000
readme: Entry not found

modelId: bmichele/poetry-generation-firstline-mbart-ws-sv-capitalized
sha: 74019ab5998737e8d8f2faab54fc2dfbde20cc62
lastModified: 2022-06-08T08:59:47.000Z
tags: [ "pytorch" ]
pipeline_tag: null
private: false
author: bmichele
config: null
id: bmichele/poetry-generation-firstline-mbart-ws-sv-capitalized
downloads: 0
likes: null
library_name: null
__index_level_0__: 38,001
readme: Entry not found

modelId: jppaolim/v62_Large_2E
sha: 4298a27b0b0e922c70afe5e0b6a11f9ccb8d102c
lastModified: 2022-06-08T11:30:57.000Z
tags: [ "pytorch", "gpt2", "text-generation", "transformers" ]
pipeline_tag: text-generation
private: false
author: jppaolim
config: null
id: jppaolim/v62_Large_2E
downloads: 0
likes: null
library_name: transformers
__index_level_0__: 38,002
readme:
# My Story model

{'top_p': 0.9, 'top_k': 50, 'temperature': 1, 'repetition_penalty': 1}

Arthur goes to the beach. Arthur is in his beach chair. He is walking along the beach when he starts feeling a pain in his back. Arthur rushes to the doctor. The doctor says he needs a special cast. Arthur is so relieved he tears up. Arthur goes to the beach. Arthur takes his wife to the beach. Arthur's wife has sore muscles. He takes her to the local doctor. The doctor gives her medicine. Arthur and his wife enjoy the beach. Arthur goes to the beach. Arthur always wished he could go to the beach. He always wanted to go by himself. This time he went with his family. When they arrived the place was busy. Arthur was happy to be able to go to the beach. Arthur goes to the beach. Arthur has never been to the beach. When he finally gets there it is very hot. He decides to go to the beach. He enjoys his vacation at the beach. Arthur is happy that he has never been to the beach. Arthur goes to the beach. Arthur goes to the beach. He walks up the beach. He sits on the sand. Arthur lies on the beach. He falls asleep.

{'top_p': 0.9, 'top_k': 50, 'temperature': 1, 'repetition_penalty': 1.05}

Arthur goes to the beach. Arthur wants to get sandals. He buys a sandal and a pair of sandals. He gets in the water and does some stretching. He finds a few nice waves to surf. After surfing, he is able to buy some new sandals. Arthur goes to the beach. Arthur has been playing in the sand all day long. He decides to go swimming. He spends all afternoon in the water. Finally, Arthur heads home. Arthur finally has a fun beach day! Arthur goes to the beach. Arthur is very happy on vacation. However, Arthur is not excited to be on the beach. He has no idea how to swim. When he finally gets his board out, Arthur begins to get excited. Arthur swam his first time at the beach! Arthur goes to the beach. Arthur decided to go to the beach one day. He was very excited for it and got in the water. He was afraid of the waves and never went in. When he finally got out, he saw that he had gotten bit. Arthur still cried the rest of the day and went home. Arthur goes to the beach. Arthur was going to the beach with his girlfriend. Suddenly, he got a text from his girl that they were getting married. Arthur was so excited and was excited for the big day. When he saw the beach, he saw a beautiful woman. He went on the beach to thank her.

{'top_p': 0.9, 'top_k': 40, 'temperature': 0.8, 'repetition_penalty': 1.1}

Arthur goes to the beach. Arthur decides he wants to go swimming. He buys his favorite swimsuit and head for the water. As Arthur is about to enter the water, his friends show up. They tell Arthur that he has to pay for the day's swimwear. Arthur still feels guilty, but he doesn't want to be rude. Arthur goes to the beach. Arthur wants to go to the beach. He decides he has to get a job. He goes to work and gets hired. The next day he leaves his house. Arthur returns home and is happy. Arthur goes to the beach. Arthur is out with friends. He decides to go to the beach for a swim. At first he doesn't like the water. Then his friends make fun of him for being so skinny. Arthur finally decides to go swimming after all. Arthur goes to the beach. Arthur has never been to a beach before. He decides to go anyway. On his first day at the beach he gets seasick. He doesn't get any sun on his first day. Afterwards Arthur decides to not go to the beach for another year. Arthur goes to the beach. Arthur is going to the beach for a swim. He has never been to the beach before. As he is taking his first swim, a wave hits him in the head. His mother rushes over and tells him that he got hit by a wave. Arthur is glad he didn't go swimming.

{'top_p': 0.9, 'top_k': 40, 'temperature': 0.6, 'repetition_penalty': 1.15}

Arthur goes to the beach. It was Arthur's first time going to the beach. He went by himself and didn't know anyone. The water was very cold. After a few minutes, he decided to join a group of people. They had fun at the beach. Arthur goes to the beach. He decides to go for a swim at the beach. The water is very warm and Arthur feels very comfortable in his bathing suit. Suddenly, he notices something strange on the shore. It turns out that someone has been swimming there all day! Arthur is relieved when he finds out who it was. Arthur goes to the beach. Arthur is on a vacation with his family. They go to the beach and swim in the ocean. A shark jumps out at Arthur. He throws the water over it and it gets scared. Arthur decides not to go back to the beach for another year. Arthur goes to the beach. He wants to go for a swim. He doesn't want to get wet. He decides to use a towel. The towel gets soaked and he has to walk home. Arthur is glad he took his time. Arthur goes to the beach. Arthur is on vacation in Hawaii. He decides he wants to go surfing. Arthur takes a day off of work and heads out for the day. When Arthur gets there, he sees that it's very crowded! Arthur decides not to go back home for another two days.

{'top_p': 0.9, 'top_k': 40, 'temperature': 0.4, 'repetition_penalty': 1.2}

Arthur goes to the beach. Arthur is on a trip with his family. He decides to go for a swim at the ocean. The water is very cold and Arthur feels very hot. His parents take him back home. They tell him that he should have stayed in the house. Arthur goes to the beach. He decides he wants a vacation. He buys his ticket and flies out of town. When he arrives, he is surprised by how beautiful it was! The weather was perfect for him as well. He enjoyed himself immensely at the beach. Arthur goes to the beach. He decides he wants a nice day on the sand. The sun is shining and it's very hot. His friends come over to play with him. They all have fun playing in the water. Arthur feels much better after his day of fun. Arthur goes to the beach. Arthur is on vacation in Florida. He decides he wants to go to the beach. His friends tell him they can't make it for a few days. Arthur agrees and heads out with his friends. They all have fun at the beach. Arthur goes to the beach. He is going for a swim in the ocean. The water was very cold and Arthur didn't want to go. His friends convinced him to go anyway. When he got there, it was freezing! But he still went because he wanted to be with his friends.
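The sampling settings listed in this card map directly onto `transformers` generation arguments. A minimal sketch, assuming the checkpoint loads as a standard GPT-2 model:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="jppaolim/v62_Large_2E")

# One of the sampling configurations shown above.
story = generator(
    "Arthur goes to the beach.",
    do_sample=True,
    top_p=0.9,
    top_k=40,
    temperature=0.8,
    repetition_penalty=1.1,
    max_new_tokens=80,
)
print(story[0]["generated_text"])
```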

modelId: joshanashakya/codebert_sourcecode_nmt_ja2pn_100E_5e-05LR
sha: b03345f1ab51f109b52e1f16c7260306e1dffbe5
lastModified: 2022-06-08T10:47:20.000Z
tags: [ "pytorch", "encoder-decoder", "text2text-generation", "transformers", "autotrain_compatible" ]
pipeline_tag: text2text-generation
private: false
author: joshanashakya
config: null
id: joshanashakya/codebert_sourcecode_nmt_ja2pn_100E_5e-05LR
downloads: 0
likes: null
library_name: transformers
__index_level_0__: 38,003
readme: Entry not found

modelId: joshanashakya/codebert_sourcecode_nmt_pn2ja_50E_2e-05LR
sha: d9e5eb1ad82b97d52bdbe3e0c2ff8d67d72a13c2
lastModified: 2022-06-08T11:52:34.000Z
tags: [ "pytorch", "encoder-decoder", "text2text-generation", "transformers", "autotrain_compatible" ]
pipeline_tag: text2text-generation
private: false
author: joshanashakya
config: null
id: joshanashakya/codebert_sourcecode_nmt_pn2ja_50E_2e-05LR
downloads: 0
likes: null
library_name: transformers
__index_level_0__: 38,004
readme: Entry not found

modelId: joshanashakya/mini_codebert_sourcecode_nmt_pn2ja_50E_2e-05LR
sha: 54aef5c719b593249d432cfe4a685f07042dacc4
lastModified: 2022-06-08T12:52:18.000Z
tags: [ "pytorch", "encoder-decoder", "text2text-generation", "transformers", "autotrain_compatible" ]
pipeline_tag: text2text-generation
private: false
author: joshanashakya
config: null
id: joshanashakya/mini_codebert_sourcecode_nmt_pn2ja_50E_2e-05LR
downloads: 0
likes: null
library_name: transformers
__index_level_0__: 38,005
readme: Entry not found

modelId: huggingtweets/elukkaj
sha: a9b81bc0cf9935671b1839b4ecf0e39e4127bc31
lastModified: 2022-06-08T14:01:41.000Z
tags: [ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
pipeline_tag: text-generation
private: false
author: huggingtweets
config: null
id: huggingtweets/elukkaj
downloads: 0
likes: null
library_name: transformers
__index_level_0__: 38,006
readme:
--- language: en thumbnail: http://www.huggingtweets.com/elukkaj/1654696881260/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/996279759570169856/vqZiiVns_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Elukka</div> <div style="text-align: center; font-size: 14px;">@elukkaj</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Elukka. | Data | Elukka | | --- | --- | | Tweets downloaded | 1113 | | Retweets | 1 | | Short tweets | 22 | | Tweets kept | 1090 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3de86afj/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @elukkaj's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/scw34f55) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/scw34f55/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/elukkaj') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)

modelId: joshanashakya/codebert_sourcecode_nmt_pn2ja_100E_5e-05LR
sha: 5cc10270844f5941c2519db313d4cb6a423405b1
lastModified: 2022-06-08T14:40:04.000Z
tags: [ "pytorch", "encoder-decoder", "text2text-generation", "transformers", "autotrain_compatible" ]
pipeline_tag: text2text-generation
private: false
author: joshanashakya
config: null
id: joshanashakya/codebert_sourcecode_nmt_pn2ja_100E_5e-05LR
downloads: 0
likes: null
library_name: transformers
__index_level_0__: 38,007
readme: Entry not found

modelId: huggingtweets/ripvillage
sha: e7a5cbc6c9cf10e1c135e3837aedb88df28917fc
lastModified: 2022-06-08T16:38:52.000Z
tags: [ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
pipeline_tag: text-generation
private: false
author: huggingtweets
config: null
id: huggingtweets/ripvillage
downloads: 0
likes: null
library_name: transformers
__index_level_0__: 38,008
readme:
--- language: en thumbnail: http://www.huggingtweets.com/ripvillage/1654706327179/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/378800000120011180/ffb093c084cfb4b60f70488a7e6355d0_400x400.jpeg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Mathurin Village</div> <div style="text-align: center; font-size: 14px;">@ripvillage</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Mathurin Village. | Data | Mathurin Village | | --- | --- | | Tweets downloaded | 3243 | | Retweets | 118 | | Short tweets | 335 | | Tweets kept | 2790 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3e20ev2s/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ripvillage's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/ecq32lhi) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/ecq32lhi/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/ripvillage') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)

modelId: renjithks/layoutlmv1-er-ner
sha: c89b838fcc64748bd2e462fb0062f6fed615da00
lastModified: 2022-06-08T18:53:25.000Z
tags: [ "pytorch", "tensorboard", "layoutlm", "token-classification", "transformers", "generated_from_trainer", "model-index", "autotrain_compatible" ]
pipeline_tag: token-classification
private: false
author: renjithks
config: null
id: renjithks/layoutlmv1-er-ner
downloads: 0
likes: null
library_name: transformers
__index_level_0__: 38,009
readme:
--- tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: layoutlmv1-er-ner results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # layoutlmv1-er-ner This model is a fine-tuned version of [renjithks/layoutlmv1-cord-ner](https://huggingface.co/renjithks/layoutlmv1-cord-ner) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2092 - Precision: 0.7202 - Recall: 0.7238 - F1: 0.7220 - Accuracy: 0.9639 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 41 | 0.2444 | 0.4045 | 0.3996 | 0.4020 | 0.9226 | | No log | 2.0 | 82 | 0.1640 | 0.5319 | 0.6098 | 0.5682 | 0.9455 | | No log | 3.0 | 123 | 0.1531 | 0.6324 | 0.6614 | 0.6466 | 0.9578 | | No log | 4.0 | 164 | 0.1440 | 0.6927 | 0.6743 | 0.6834 | 0.9620 | | No log | 5.0 | 205 | 0.1520 | 0.6750 | 0.6958 | 0.6853 | 0.9613 | | No log | 6.0 | 246 | 0.1597 | 0.6840 | 0.6987 | 0.6913 | 0.9605 | | No log | 7.0 | 287 | 0.1910 | 0.7002 | 0.6887 | 0.6944 | 0.9605 | | No log | 8.0 | 328 | 0.1860 | 0.6834 | 0.6923 | 0.6878 | 0.9609 | | No log | 9.0 | 369 | 0.1665 | 0.6785 | 0.7102 | 0.6940 | 0.9624 | | No log | 10.0 | 410 | 0.1816 | 0.7016 | 0.7052 | 0.7034 | 0.9624 | | No log | 11.0 | 451 | 0.1808 | 0.6913 | 0.7166 | 0.7038 | 0.9638 | | No log | 12.0 | 492 | 0.2165 | 0.712 | 0.7023 | 0.7071 | 0.9628 | | 0.1014 | 13.0 | 533 | 0.2135 | 0.6979 | 0.7109 | 0.7043 | 0.9613 | | 0.1014 | 14.0 | 574 | 0.2154 | 0.6906 | 0.7109 | 0.7006 | 0.9612 | | 0.1014 | 15.0 | 615 | 0.2118 | 0.6902 | 0.7016 | 0.6958 | 0.9615 | | 0.1014 | 16.0 | 656 | 0.2091 | 0.6985 | 0.7080 | 0.7032 | 0.9623 | | 0.1014 | 17.0 | 697 | 0.2104 | 0.7118 | 0.7123 | 0.7121 | 0.9630 | | 0.1014 | 18.0 | 738 | 0.2081 | 0.7129 | 0.7231 | 0.7179 | 0.9638 | | 0.1014 | 19.0 | 779 | 0.2093 | 0.7205 | 0.7231 | 0.7218 | 0.9638 | | 0.1014 | 20.0 | 820 | 0.2092 | 0.7202 | 0.7238 | 0.7220 | 0.9639 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
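A sketch of how the hyperparameters listed in this card translate into `transformers` `TrainingArguments`; the output directory is illustrative, and the Adam settings in the card are the library defaults.

```python
from transformers import TrainingArguments

# Mirrors the "Training hyperparameters" section above.
args = TrainingArguments(
    output_dir="layoutlmv1-er-ner",  # illustrative
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    # Optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08 (the defaults).
)
```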

modelId: tclong/wav2vec2-base-vios-commonvoice
sha: fe5be39cb00ecd3e487ddd6b829f0442ba6a04d0
lastModified: 2022-06-09T17:17:08.000Z
tags: [ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
pipeline_tag: automatic-speech-recognition
private: false
author: tclong
config: null
id: tclong/wav2vec2-base-vios-commonvoice
downloads: 0
likes: null
library_name: transformers
__index_level_0__: 38,010
readme:
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-vios-commonvoice results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-vios-commonvoice This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3823 - Wer: 0.2401 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 1.2268 | 0.66 | 500 | 0.8746 | 0.5939 | | 0.8728 | 1.32 | 1000 | 0.6435 | 0.4554 | | 0.6899 | 1.99 | 1500 | 0.5655 | 0.3995 | | 0.5842 | 2.65 | 2000 | 0.5267 | 0.3694 | | 0.5371 | 3.31 | 2500 | 0.4980 | 0.3431 | | 0.4921 | 3.97 | 3000 | 0.4781 | 0.3276 | | 0.4508 | 4.64 | 3500 | 0.4434 | 0.3134 | | 0.433 | 5.3 | 4000 | 0.4348 | 0.2963 | | 0.404 | 5.96 | 4500 | 0.4248 | 0.2874 | | 0.3834 | 6.62 | 5000 | 0.4163 | 0.2775 | | 0.3784 | 7.28 | 5500 | 0.4104 | 0.2751 | | 0.3669 | 7.95 | 6000 | 0.4143 | 0.2724 | | 0.3462 | 8.61 | 6500 | 0.4131 | 0.2699 | | 0.3364 | 9.27 | 7000 | 0.4070 | 0.2617 | | 0.3249 | 9.93 | 7500 | 0.4076 | 0.2603 | | 0.3154 | 10.6 | 8000 | 0.3998 | 0.2577 | | 0.3117 | 11.26 | 8500 | 0.3930 | 0.2505 | | 0.3101 | 11.92 | 9000 | 0.4003 | 0.2492 | | 0.298 | 12.58 | 9500 | 0.3960 | 0.2496 | | 0.2968 | 13.24 | 10000 | 0.3877 | 0.2469 | | 0.29 | 13.91 | 10500 | 0.3870 | 0.2456 | | 0.2921 | 14.57 | 11000 | 0.3823 | 0.2401 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
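This card omits a usage snippet; a minimal inference sketch, assuming the checkpoint works with the standard ASR pipeline (the audio path is a placeholder):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="tclong/wav2vec2-base-vios-commonvoice",
)
print(asr("speech_sample.wav")["text"])  # placeholder audio file
```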

modelId: nestoralvaro/mt5-base-finetuned-xsum-data_prep_2021_12_26___t2981_22026.csv___topic_text_google_mt5_base
sha: 2373a30877b780069e78428740a86d3114976556
lastModified: 2022-06-09T03:43:43.000Z
tags: [ "pytorch", "tensorboard", "mt5", "text2text-generation", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
pipeline_tag: text2text-generation
private: false
author: nestoralvaro
config: null
id: nestoralvaro/mt5-base-finetuned-xsum-data_prep_2021_12_26___t2981_22026.csv___topic_text_google_mt5_base
downloads: 0
likes: null
library_name: transformers
__index_level_0__: 38,011
readme:
--- license: apache-2.0 tags: - generated_from_trainer metrics: - rouge model-index: - name: mt5-base-finetuned-xsum-data_prep_2021_12_26___t2981_22026.csv___topic_text_google_mt5_base results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-base-finetuned-xsum-data_prep_2021_12_26___t2981_22026.csv___topic_text_google_mt5_base This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: nan - Rouge1: 0.7181 - Rouge2: 0.1008 - Rougel: 0.7173 - Rougelsum: 0.7187 - Gen Len: 6.2965 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:------:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | 0.0 | 1.0 | 139904 | nan | 0.7181 | 0.1008 | 0.7173 | 0.7187 | 6.2965 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
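A minimal inference sketch for this checkpoint; the input text is a placeholder, and since the card reports a mean generated length of about 6 tokens, a small `max_length` is assumed.

```python
from transformers import pipeline

topic = pipeline(
    "summarization",
    model="nestoralvaro/mt5-base-finetuned-xsum-data_prep_2021_12_26___t2981_22026.csv___topic_text_google_mt5_base",
)
# Placeholder input; the card does not document the expected format.
print(topic("Placeholder article text ...", max_length=16)[0]["summary_text"])
```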

modelId: huggingtweets/kentcdodds-richardbranson-sikiraamer
sha: 101330cbd791bee6875911826d4fd32d4d8c7172
lastModified: 2022-06-08T21:08:46.000Z
tags: [ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
pipeline_tag: text-generation
private: false
author: huggingtweets
config: null
id: huggingtweets/kentcdodds-richardbranson-sikiraamer
downloads: 0
likes: null
library_name: transformers
__index_level_0__: 38,012
readme:
--- language: en thumbnail: http://www.huggingtweets.com/kentcdodds-richardbranson-sikiraamer/1654722520391/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1496777835062648833/3Ao6Xb2a_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1529905780542959616/Ibwrp7VJ_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1410740591483293697/tRbW1XoV_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Amer Sikira & Kent C. Dodds 💿 & Richard Branson</div> <div style="text-align: center; font-size: 14px;">@kentcdodds-richardbranson-sikiraamer</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Amer Sikira & Kent C. Dodds 💿 & Richard Branson. | Data | Amer Sikira | Kent C. Dodds 💿 | Richard Branson | | --- | --- | --- | --- | | Tweets downloaded | 3250 | 3249 | 3215 | | Retweets | 94 | 578 | 234 | | Short tweets | 214 | 507 | 96 | | Tweets kept | 2942 | 2164 | 2885 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/jtwa65l2/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @kentcdodds-richardbranson-sikiraamer's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3vt6qlgf) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3vt6qlgf/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/kentcdodds-richardbranson-sikiraamer') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)

modelId: huggingtweets/mephytis
sha: 13c41c7673cbca0b4512dffd5829a455bb68113d
lastModified: 2022-06-08T22:50:52.000Z
tags: [ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
pipeline_tag: text-generation
private: false
author: huggingtweets
config: null
id: huggingtweets/mephytis
downloads: 0
likes: null
library_name: transformers
__index_level_0__: 38,013
readme:
--- language: en thumbnail: http://www.huggingtweets.com/mephytis/1654728647738/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1516396570639573002/4WWU_e38_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">mephy✨</div> <div style="text-align: center; font-size: 14px;">@mephytis</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from mephy✨. | Data | mephy✨ | | --- | --- | | Tweets downloaded | 2959 | | Retweets | 322 | | Short tweets | 737 | | Tweets kept | 1900 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/6sao13mv/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mephytis's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/29ayegfb) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/29ayegfb/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/mephytis') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)

modelId: huggingtweets/oddapt
sha: ba3500c4b2fb86873260273c0d7978963b5fda8b
lastModified: 2022-06-09T00:08:44.000Z
tags: [ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
pipeline_tag: text-generation
private: false
author: huggingtweets
config: null
id: huggingtweets/oddapt
downloads: 0
likes: null
library_name: transformers
__index_level_0__: 38,014
readme:
--- language: en thumbnail: http://www.huggingtweets.com/oddapt/1654733319638/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1468077034169458690/gt5Iv_y7_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Steve Hoyt</div> <div style="text-align: center; font-size: 14px;">@oddapt</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Steve Hoyt. | Data | Steve Hoyt | | --- | --- | | Tweets downloaded | 2861 | | Retweets | 615 | | Short tweets | 192 | | Tweets kept | 2054 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/8pfy3hb1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @oddapt's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3fphl051) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3fphl051/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/oddapt') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)

modelId: huggingtweets/killthenoise
sha: 0f6f585399d5a2e7bd5006136f753b2126c114ef
lastModified: 2022-06-09T03:35:18.000Z
tags: [ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
pipeline_tag: text-generation
private: false
author: huggingtweets
config: null
id: huggingtweets/killthenoise
downloads: 0
likes: null
library_name: transformers
__index_level_0__: 38,015
readme:
--- language: en thumbnail: http://www.huggingtweets.com/killthenoise/1654745713334/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1531744995463507968/fPvkX5FS_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">ᵏᵗⁿ</div> <div style="text-align: center; font-size: 14px;">@killthenoise</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from ᵏᵗⁿ. | Data | ᵏᵗⁿ | | --- | --- | | Tweets downloaded | 3245 | | Retweets | 307 | | Short tweets | 645 | | Tweets kept | 2293 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3kjftyff/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @killthenoise's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2wij7d8z) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2wij7d8z/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/killthenoise') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)

modelId: huggingtweets/itsnovaherev2
sha: 006aa2e9f03037c2b294fcac45e719fde3821c39
lastModified: 2022-06-09T03:53:35.000Z
tags: [ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
pipeline_tag: text-generation
private: false
author: huggingtweets
config: null
id: huggingtweets/itsnovaherev2
downloads: 0
likes: null
library_name: transformers
__index_level_0__: 38,016
readme:
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1253734967923798018/FJ7AvxLN_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">ItsNovaHere</div> <div style="text-align: center; font-size: 14px;">@itsnovaherev2</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from ItsNovaHere. | Data | ItsNovaHere | | --- | --- | | Tweets downloaded | 588 | | Retweets | 409 | | Short tweets | 67 | | Tweets kept | 112 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2tz4bf7d/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @itsnovaherev2's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/35es3xf7) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/35es3xf7/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/itsnovaherev2') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)

modelId: thaonh/vietnews-summarization
sha: 623eeaa9df028d16e6ee8502f63c989751ce612a
lastModified: 2022-06-09T04:20:07.000Z
tags: [ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
pipeline_tag: text2text-generation
private: false
author: thaonh
config: null
id: thaonh/vietnews-summarization
downloads: 0
likes: null
library_name: transformers
__index_level_0__: 38,017
readme: Entry not found

modelId: huggingtweets/usao926
sha: c652e9710f854df135b362fe949d6964b0079192
lastModified: 2022-06-09T03:57:49.000Z
tags: [ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
pipeline_tag: text-generation
private: false
author: huggingtweets
config: null
id: huggingtweets/usao926
downloads: 0
likes: null
library_name: transformers
__index_level_0__: 38,018
readme:
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1329004510161694722/DkD9DvBN_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">USAO@山奥</div> <div style="text-align: center; font-size: 14px;">@usao926</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from USAO@山奥. | Data | USAO@山奥 | | --- | --- | | Tweets downloaded | 3249 | | Retweets | 1041 | | Short tweets | 1987 | | Tweets kept | 221 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/21po1181/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @usao926's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2jl5e9yl) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2jl5e9yl/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/usao926') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)

modelId: joshanashakya/codebert_sourcecode_nmt_ja2pn_50E_2e-05LR_16B
sha: 6ffd725f635b6061ede4b436fbb64e22e7b02396
lastModified: 2022-06-09T04:01:10.000Z
tags: [ "pytorch", "encoder-decoder", "text2text-generation", "transformers", "autotrain_compatible" ]
pipeline_tag: text2text-generation
private: false
author: joshanashakya
config: null
id: joshanashakya/codebert_sourcecode_nmt_ja2pn_50E_2e-05LR_16B
downloads: 0
likes: null
library_name: transformers
__index_level_0__: 38,019
readme: Entry not found

modelId: joshanashakya/codebert_sourcecode_nmt_pn2ja_50E_1e-05LR
sha: 7f4681385e929c8e5826926a5e441820a3d2ddc3
lastModified: 2022-06-09T04:06:15.000Z
tags: [ "pytorch", "encoder-decoder", "text2text-generation", "transformers", "autotrain_compatible" ]
pipeline_tag: text2text-generation
private: false
author: joshanashakya
config: null
id: joshanashakya/codebert_sourcecode_nmt_pn2ja_50E_1e-05LR
downloads: 0
likes: null
library_name: transformers
__index_level_0__: 38,020
readme: Entry not found

modelId: nestoralvaro/mt5-base-finetuned-xsum-data_prep_2021_12_26___t8_54.csv___topic_text_google_mt5_base
sha: 5e0961c172d3c0d3a7313b99a32ec9e9004a1d42
lastModified: 2022-06-09T06:59:53.000Z
tags: [ "pytorch", "tensorboard", "mt5", "text2text-generation", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
pipeline_tag: text2text-generation
private: false
author: nestoralvaro
config: null
id: nestoralvaro/mt5-base-finetuned-xsum-data_prep_2021_12_26___t8_54.csv___topic_text_google_mt5_base
downloads: 0
likes: null
library_name: transformers
__index_level_0__: 38,021
readme:
--- license: apache-2.0 tags: - generated_from_trainer metrics: - rouge model-index: - name: mt5-base-finetuned-xsum-data_prep_2021_12_26___t8_54.csv___topic_text_google_mt5_base results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-base-finetuned-xsum-data_prep_2021_12_26___t8_54.csv___topic_text_google_mt5_base This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: nan - Rouge1: 1.4678 - Rouge2: 0.1841 - Rougel: 1.4748 - Rougelsum: 1.4701 - Gen Len: 6.4874 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | 0.0 | 1.0 | 10645 | nan | 1.4678 | 0.1841 | 1.4748 | 1.4701 | 6.4874 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
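For context, ROUGE numbers like the ones reported above are typically computed with the `evaluate` library; a minimal sketch with placeholder strings:

```python
import evaluate

rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["predicted topic text"],  # placeholder
    references=["reference topic text"],   # placeholder
)
print(scores)  # keys: rouge1, rouge2, rougeL, rougeLsum
```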

modelId: huggingtweets/osanseviero
sha: 185ce00fe89c9172b9739467a89b186cf2d19102
lastModified: 2022-06-09T10:20:54.000Z
tags: [ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
pipeline_tag: text-generation
private: false
author: huggingtweets
config: null
id: huggingtweets/osanseviero
downloads: 0
likes: 1
library_name: transformers
__index_level_0__: 38,022
readme:
--- language: en thumbnail: http://www.huggingtweets.com/osanseviero/1654769951427/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1106315906165157889/0Hxb1ESL_400x400.png&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Omar Sanseviero</div> <div style="text-align: center; font-size: 14px;">@osanseviero</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Omar Sanseviero. | Data | Omar Sanseviero | | --- | --- | | Tweets downloaded | 3244 | | Retweets | 1158 | | Short tweets | 224 | | Tweets kept | 1862 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/29bkab0t/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @osanseviero's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1s35jikq) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1s35jikq/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/osanseviero') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)

modelId: ghadeermobasher/WLT-BioBERT-BC5CDR-Disease
sha: 97becc535f49250a40b43821f30bc80c761775fe
lastModified: 2022-06-09T10:52:17.000Z
tags: [ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "autotrain_compatible" ]
pipeline_tag: token-classification
private: false
author: ghadeermobasher
config: null
id: ghadeermobasher/WLT-BioBERT-BC5CDR-Disease
downloads: 0
likes: null
library_name: transformers
__index_level_0__: 38,023
readme: Entry not found

modelId: ghadeermobasher/WLT-PubMedBERT-BC5CDR-Chemical
sha: 20ccac3af28b15a9acd0243f714a51750a05c1f5
lastModified: 2022-06-09T11:50:40.000Z
tags: [ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "autotrain_compatible" ]
pipeline_tag: token-classification
private: false
author: ghadeermobasher
config: null
id: ghadeermobasher/WLT-PubMedBERT-BC5CDR-Chemical
downloads: 0
likes: null
library_name: transformers
__index_level_0__: 38,024
readme: Entry not found

modelId: huggingtweets/politifact
sha: 465e6564658c622441ba47ef693126254b2b0912
lastModified: 2022-06-09T11:14:17.000Z
tags: [ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
pipeline_tag: text-generation
private: false
author: huggingtweets
config: null
id: huggingtweets/politifact
downloads: 0
likes: null
library_name: transformers
__index_level_0__: 38,025
readme:
--- language: en thumbnail: http://www.huggingtweets.com/politifact/1654773253130/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1286766140115517441/8rq6ZxZm_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">PolitiFact</div> <div style="text-align: center; font-size: 14px;">@politifact</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from PolitiFact. | Data | PolitiFact | | --- | --- | | Tweets downloaded | 3250 | | Retweets | 680 | | Short tweets | 14 | | Tweets kept | 2556 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1vfo2t7i/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @politifact's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/7h3iptm6) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/7h3iptm6/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/politifact') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
ghadeermobasher/WLT-PubMedBERT-BC4CHEMD
3bbbe691d70756081d00c1b8c8f59a0270568a85
2022-06-09T12:10:30.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
ghadeermobasher
null
ghadeermobasher/WLT-PubMedBERT-BC4CHEMD
0
null
transformers
38,026
Entry not found
ghadeermobasher/WLT-SciBERT-BC4CHEMD
1a734c621b08475665a5cd418ac2357707d596a9
2022-06-09T19:17:54.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
ghadeermobasher
null
ghadeermobasher/WLT-SciBERT-BC4CHEMD
0
null
transformers
38,027
Entry not found
huggingtweets/bbclaurakt
19ae31f71e132286ee4ade64f7c6c3d98b24e9c2
2022-06-09T12:48:19.000Z
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/bbclaurakt
0
null
transformers
38,028
--- language: en thumbnail: http://www.huggingtweets.com/bbclaurakt/1654778894531/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1533553176619716608/4klYwjkC_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Laura Kuenssberg Translator</div> <div style="text-align: center; font-size: 14px;">@bbclaurakt</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Laura Kuenssberg Translator. | Data | Laura Kuenssberg Translator | | --- | --- | | Tweets downloaded | 2063 | | Retweets | 23 | | Short tweets | 135 | | Tweets kept | 1905 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/37mk0av7/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @bbclaurakt's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3a8gt7bb) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3a8gt7bb/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/bbclaurakt') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/zaidalyafeai
576675856695bd09a84f74e5a73fe6cd81e5e901
2022-06-09T13:03:12.000Z
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/zaidalyafeai
0
null
transformers
38,029
--- language: en thumbnail: http://www.huggingtweets.com/zaidalyafeai/1654779787447/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1521723273922461696/m8_zotM4_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Zaid زيد</div> <div style="text-align: center; font-size: 14px;">@zaidalyafeai</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Zaid زيد. | Data | Zaid زيد | | --- | --- | | Tweets downloaded | 2295 | | Retweets | 74 | | Short tweets | 217 | | Tweets kept | 2004 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/39e5cxbb/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @zaidalyafeai's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2uc681wq) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2uc681wq/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/zaidalyafeai') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
ghadeermobasher/WLT-BioBERT-BC2GM
1d5add5ec62d129b5a0a66f32c60725eb8a04321
2022-06-09T14:09:18.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
ghadeermobasher
null
ghadeermobasher/WLT-BioBERT-BC2GM
0
null
transformers
38,030
Entry not found
ghadeermobasher/WLT-BlueBERT-BC2GM
fe293685908d02c578c48cafb395042055293405
2022-06-09T13:58:03.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
ghadeermobasher
null
ghadeermobasher/WLT-BlueBERT-BC2GM
0
null
transformers
38,031
Entry not found
ghadeermobasher/WLT-BioBERT-Linnaeus
fd03f9e18fd611464c90b5d4cb4359c055b2e5b2
2022-06-09T14:54:42.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
ghadeermobasher
null
ghadeermobasher/WLT-BioBERT-Linnaeus
0
null
transformers
38,032
Entry not found
ghadeermobasher/WLT-SciBERT-BC5CDR-Chemical-T
fc7c19863e4d2bb4daa79bdd0b05086e5ae9d09b
2022-06-09T17:47:38.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
ghadeermobasher
null
ghadeermobasher/WLT-SciBERT-BC5CDR-Chemical-T
0
null
transformers
38,033
Entry not found
huggingtweets/elrichmc
207975ee994ecbc2e30a77adf2a08b68557dac0a
2022-06-09T16:04:04.000Z
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/elrichmc
0
null
transformers
38,034
--- language: en thumbnail: http://www.huggingtweets.com/elrichmc/1654790629445/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1484686785812832263/Beh-qGPk_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">ElRichMC</div> <div style="text-align: center; font-size: 14px;">@elrichmc</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from ElRichMC. | Data | ElRichMC | | --- | --- | | Tweets downloaded | 3245 | | Retweets | 203 | | Short tweets | 618 | | Tweets kept | 2424 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1jeok5aq/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @elrichmc's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/28fmqsme) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/28fmqsme/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/elrichmc') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
ghadeermobasher/WLT-SciBERT-BC5CDR-Chemical-T1
d930d294b60d9ec1b40743ef1d6734f08ffe483d
2022-06-09T17:59:25.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
ghadeermobasher
null
ghadeermobasher/WLT-SciBERT-BC5CDR-Chemical-T1
0
null
transformers
38,035
Entry not found
huggingtweets/sorcehri
ebb335dde24c0b8c4311e8a3ccc29fb9bf7de66e
2022-06-09T16:22:35.000Z
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/sorcehri
0
null
transformers
38,036
--- language: en thumbnail: http://www.huggingtweets.com/sorcehri/1654791699329/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1511431988720414730/A1kqPr25_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">ehri</div> <div style="text-align: center; font-size: 14px;">@sorcehri</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from ehri. | Data | ehri | | --- | --- | | Tweets downloaded | 3233 | | Retweets | 280 | | Short tweets | 837 | | Tweets kept | 2116 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1gn4h8q0/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @sorcehri's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/7zs978ln) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/7zs978ln/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/sorcehri') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/medscape
0cc741944ec03b048f2a17248fed7e98ae46976c
2022-06-09T16:30:23.000Z
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/medscape
0
null
transformers
38,037
--- language: en thumbnail: http://www.huggingtweets.com/medscape/1654792218439/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1401919208133378050/l2MKtnC7_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Medscape</div> <div style="text-align: center; font-size: 14px;">@medscape</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Medscape. | Data | Medscape | | --- | --- | | Tweets downloaded | 3250 | | Retweets | 16 | | Short tweets | 2 | | Tweets kept | 3232 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/mn0jpyr0/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @medscape's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3n6qbw51) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3n6qbw51/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/medscape') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
flood/xlm-roberta-base-finetuned-panx-de-fr
e934cc629b181398d56af00c2a024e8fefd584fe
2022-06-22T13:33:30.000Z
[ "pytorch", "xlm-roberta", "token-classification", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
token-classification
false
flood
null
flood/xlm-roberta-base-finetuned-panx-de-fr
0
null
transformers
38,038
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de-fr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1612 - F1: 0.8618 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2874 | 1.0 | 715 | 0.1764 | 0.8343 | | 0.1475 | 2.0 | 1430 | 0.1561 | 0.8508 | | 0.0936 | 3.0 | 2145 | 0.1612 | 0.8618 | ### Framework versions - Transformers 4.19.4 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
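The card above leaves its usage sections empty ("More information needed"). Below is a minimal inference sketch, not taken from the card itself: it assumes the checkpoint was pushed together with its tokenizer and relies only on the standard `transformers` pipeline API for the `token-classification` task listed in the record's tags.

```python
from transformers import pipeline

# Hypothetical usage sketch for the fine-tuned PAN-X NER checkpoint listed above.
# aggregation_strategy="simple" merges word-piece tokens back into whole entities.
ner = pipeline(
    "token-classification",
    model="flood/xlm-roberta-base-finetuned-panx-de-fr",
    aggregation_strategy="simple",
)

# German example sentence; the model was fine-tuned on German and French NER data.
print(ner("Jeff Dean arbeitet bei Google in Mountain View."))
```

The same pattern should apply to the `-fr`, `-it`, and `-all` sibling checkpoints in the records that follow, since their cards report the same architecture and training recipe.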
flood/xlm-roberta-base-finetuned-panx-fr
43893747d95e996bf1f836dc7bd5cbbe9d2cf0a4
2022-06-22T13:37:27.000Z
[ "pytorch", "xlm-roberta", "token-classification", "dataset:xtreme", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
token-classification
false
flood
null
flood/xlm-roberta-base-finetuned-panx-fr
0
null
transformers
38,039
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-fr results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.fr metrics: - name: F1 type: f1 value: 0.8375924680564896 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.2794 - F1: 0.8376 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.5774 | 1.0 | 191 | 0.3212 | 0.7894 | | 0.2661 | 2.0 | 382 | 0.2737 | 0.8292 | | 0.1756 | 3.0 | 573 | 0.2794 | 0.8376 | ### Framework versions - Transformers 4.19.4 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
flood/xlm-roberta-base-finetuned-panx-it
df837dd00e779105599b769435b9e643a0c7638a
2022-06-22T13:40:36.000Z
[ "pytorch", "xlm-roberta", "token-classification", "dataset:xtreme", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
token-classification
false
flood
null
flood/xlm-roberta-base-finetuned-panx-it
0
null
transformers
38,040
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-it results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.it metrics: - name: F1 type: f1 value: 0.8085969180859691 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-it This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.2527 - F1: 0.8086 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.8319 | 1.0 | 70 | 0.3179 | 0.7474 | | 0.2959 | 2.0 | 140 | 0.2695 | 0.7916 | | 0.2036 | 3.0 | 210 | 0.2527 | 0.8086 | ### Framework versions - Transformers 4.19.4 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
flood/xlm-roberta-base-finetuned-panx-all
0b8e10dcd91438e548c21a4cd09fa417c96ab53d
2022-06-22T13:50:09.000Z
[ "pytorch", "xlm-roberta", "token-classification", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
token-classification
false
flood
null
flood/xlm-roberta-base-finetuned-panx-all
0
null
transformers
38,041
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-all results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-all This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1739 - F1: 0.8525 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.3 | 1.0 | 835 | 0.1894 | 0.8104 | | 0.1564 | 2.0 | 1670 | 0.1751 | 0.8423 | | 0.1032 | 3.0 | 2505 | 0.1739 | 0.8525 | ### Framework versions - Transformers 4.19.4 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
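The four `flood/xlm-roberta-base-finetuned-panx-*` cards above list identical hyperparameters but no training script. As a rough illustration only, those bullet points map onto `transformers.TrainingArguments` as sketched below; the output directory is a placeholder, and the data loading and `Trainer` setup are omitted because the cards do not document them.

```python
from transformers import TrainingArguments

# Sketch of TrainingArguments matching the hyperparameters listed in the cards.
# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the optimizer defaults, so they
# need no explicit flags; output_dir is a hypothetical path, not from the cards.
args = TrainingArguments(
    output_dir="xlm-roberta-base-finetuned-panx",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```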
saitishmukhametov/kshn4hobos
257a9c18ad89751a104c4e84af1072fdd158f8dd
2022-06-09T20:56:00.000Z
[ "pytorch" ]
null
false
saitishmukhametov
null
saitishmukhametov/kshn4hobos
0
null
null
38,042
Entry not found
simecek/WormDNADeberta
42f7eb93cc3993f8f53fab613d20f2f96deb8d76
2022-06-09T23:55:17.000Z
[ "pytorch", "tensorboard", "deberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
simecek
null
simecek/WormDNADeberta
0
null
transformers
38,043
Entry not found
lak/poem_project_2
bbf5052f29df936db564975a686e3a0765ed30ed
2022-06-09T20:43:32.000Z
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
false
lak
null
lak/poem_project_2
0
null
transformers
38,044
Entry not found
lak/poem_project_3
4926b15700d7fa4309557fdedbd031f795a60894
2022-06-09T20:45:09.000Z
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
false
lak
null
lak/poem_project_3
0
null
transformers
38,045
Entry not found
nestoralvaro/mt5-base-finetuned-xsum-data_prep_2021_12_26___t1_7.csv___topic_text_google_mt5_base
5a5aad26bb80c272cf56c432a93c04aac8a48ab8
2022-06-10T00:52:35.000Z
[ "pytorch", "tensorboard", "mt5", "text2text-generation", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
text2text-generation
false
nestoralvaro
null
nestoralvaro/mt5-base-finetuned-xsum-data_prep_2021_12_26___t1_7.csv___topic_text_google_mt5_base
0
null
transformers
38,046
--- license: apache-2.0 tags: - generated_from_trainer metrics: - rouge model-index: - name: mt5-base-finetuned-xsum-data_prep_2021_12_26___t1_7.csv___topic_text_google_mt5_base results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-base-finetuned-xsum-data_prep_2021_12_26___t1_7.csv___topic_text_google_mt5_base This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: nan - Rouge1: 2.8146 - Rouge2: 0.6707 - Rougel: 2.8187 - Rougelsum: 2.8098 - Gen Len: 6.4901 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | 0.0 | 1.0 | 3869 | nan | 2.8146 | 0.6707 | 2.8187 | 2.8098 | 6.4901 | ### Framework versions - Transformers 4.19.3 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
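This record is tagged `text2text-generation`, so the checkpoint can be driven through the generic seq2seq pipeline. A minimal sketch follows; it is not from the card, and note that the card itself reports a validation loss of `nan` (with a training loss of 0.0), so the quality of the resulting outputs is uncertain.

```python
from transformers import pipeline

# Hypothetical usage sketch for the mT5 topic/summarization checkpoint above.
generator = pipeline(
    "text2text-generation",
    model="nestoralvaro/mt5-base-finetuned-xsum-data_prep_2021_12_26___t1_7.csv___topic_text_google_mt5_base",
)

# Gen Len in the card averages ~6.5 tokens, so short outputs are expected.
print(generator("Some long input text to condense into a short topic.", max_length=16))
```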
simecek/ArabidopsisDNADeberta
8d47c3a4af581e378958e34d45dc70d760b840ad
2022-06-10T04:06:21.000Z
[ "pytorch", "tensorboard", "deberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
simecek
null
simecek/ArabidopsisDNADeberta
0
null
transformers
38,047
Entry not found
huggingtweets/artificialbuttr
689403a8d155d0c7c8e267c76a095a212441e7db
2022-06-10T01:39:43.000Z
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/artificialbuttr
0
null
transformers
38,048
--- language: en thumbnail: http://www.huggingtweets.com/artificialbuttr/1654825134207/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1485413658351968256/NUVesGCM_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">artificialbutter</div> <div style="text-align: center; font-size: 14px;">@artificialbuttr</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from artificialbutter. | Data | artificialbutter | | --- | --- | | Tweets downloaded | 785 | | Retweets | 129 | | Short tweets | 407 | | Tweets kept | 249 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1ypylns0/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @artificialbuttr's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1phf128l) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1phf128l/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/artificialbuttr') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/burkevillemama
378f2a3ef15aa6f4d97767a6becb7a544ec3fc22
2022-06-10T02:15:58.000Z
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/burkevillemama
0
null
transformers
38,049
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1367879964733804547/buUeka0V_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Bree</div> <div style="text-align: center; font-size: 14px;">@burkevillemama</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Bree. | Data | Bree | | --- | --- | | Tweets downloaded | 2994 | | Retweets | 805 | | Short tweets | 201 | | Tweets kept | 1988 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/82nbekwu/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @burkevillemama's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3gdpxbzc) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3gdpxbzc/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/burkevillemama') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/macarena_olona
63d5214e7e4f913bff6b8d2de275dfdea1f7f481
2022-06-10T06:32:02.000Z
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/macarena_olona
0
null
transformers
38,050
--- language: en thumbnail: http://www.huggingtweets.com/macarena_olona/1654842717478/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1535020786007916545/po7DO1ln_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Macarena Olona</div> <div style="text-align: center; font-size: 14px;">@macarena_olona</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Macarena Olona. | Data | Macarena Olona | | --- | --- | | Tweets downloaded | 3245 | | Retweets | 1797 | | Short tweets | 225 | | Tweets kept | 1223 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1yx7hguo/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @macarena_olona's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2i64c9y6) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2i64c9y6/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/macarena_olona') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
flood/pegasus-samsum
8cbceb3e593275eb296575d51b0c3e0212a0071f
2022-06-10T07:00:06.000Z
[ "pytorch", "pegasus", "text2text-generation", "dataset:samsum", "transformers", "generated_from_trainer", "model-index", "autotrain_compatible" ]
text2text-generation
false
flood
null
flood/pegasus-samsum
0
null
transformers
38,051
--- tags: - generated_from_trainer datasets: - samsum model-index: - name: pegasus-samsum results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pegasus-samsum This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset. It achieves the following results on the evaluation set: - Loss: 1.4814 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.7052 | 0.54 | 500 | 1.4814 | ### Framework versions - Transformers 4.19.3 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
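`pegasus-samsum` fine-tunes a CNN/DailyMail PEGASUS checkpoint on SAMSum, a dialogue-summarization dataset, so the natural way to exercise it is the summarization pipeline. A minimal sketch, assuming the repository ships with its tokenizer; the dialogue text is invented for illustration:

```python
from transformers import pipeline

# Sketch only: summarize a short chat transcript in the SAMSum style.
summarizer = pipeline("summarization", model="flood/pegasus-samsum")

dialogue = (
    "Anna: Are we still on for lunch?\n"
    "Ben: Yes, 12:30 at the usual place.\n"
    "Anna: Perfect, see you there!"
)
print(summarizer(dialogue, max_length=32)[0]["summary_text"])
```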
simecek/humandna_MOBILEBERT_1epoch
179ad0361a3aa22862f2da902b7f2d8c374c464f
2022-06-10T07:56:37.000Z
[ "pytorch", "mobilebert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
simecek
null
simecek/humandna_MOBILEBERT_1epoch
0
null
transformers
38,052
Entry not found
huggingtweets/atrioc
b17f9ab25f183b8588c744fd107f9efd2503209a
2022-06-10T09:05:36.000Z
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/atrioc
0
null
transformers
38,053
--- language: en thumbnail: http://www.huggingtweets.com/atrioc/1654851931751/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1522249702837657603/1jNZf3aB_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Atrioc</div> <div style="text-align: center; font-size: 14px;">@atrioc</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Atrioc. | Data | Atrioc | | --- | --- | | Tweets downloaded | 3205 | | Retweets | 746 | | Short tweets | 502 | | Tweets kept | 1957 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2zlbp16x/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @atrioc's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3oldn78j) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3oldn78j/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/atrioc') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
tclong/wav2vec2-base-vios-commonvoice-1
c7b785c27655e3dde2ac8d1ac8bae3a4a0a920eb
2022-06-11T03:01:54.000Z
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
tclong
null
tclong/wav2vec2-base-vios-commonvoice-1
0
null
transformers
38,054
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-vios-commonvoice-1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-vios-commonvoice-1 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8913 - Wer: 0.3621 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 3.4706 | 0.55 | 500 | 3.4725 | 1.0 | | 3.202 | 1.1 | 1000 | 2.7555 | 1.0008 | | 1.0507 | 1.66 | 1500 | 1.0481 | 0.6196 | | 0.7325 | 2.21 | 2000 | 0.8120 | 0.4958 | | 0.599 | 2.76 | 2500 | 0.7035 | 0.4447 | | 0.5224 | 3.31 | 3000 | 0.6761 | 0.4078 | | 0.4844 | 3.86 | 3500 | 0.6688 | 0.4011 | | 0.4234 | 4.42 | 4000 | 0.6080 | 0.3729 | | 0.4237 | 4.97 | 4500 | 0.5953 | 0.3556 | | 0.3986 | 5.52 | 5000 | 0.6054 | 0.3478 | | 0.3554 | 6.07 | 5500 | 0.6193 | 0.3479 | | 0.3446 | 6.62 | 6000 | 0.5809 | 0.3302 | | 0.3104 | 7.17 | 6500 | 0.5713 | 0.3283 | | 0.3166 | 7.73 | 7000 | 0.5593 | 0.3133 | | 0.2938 | 8.28 | 7500 | 0.5645 | 0.3081 | | 0.3061 | 8.83 | 8000 | 0.5508 | 0.3020 | | 0.2986 | 9.38 | 8500 | 0.5462 | 0.3024 | | 0.2939 | 9.93 | 9000 | 0.5544 | 0.3028 | | 0.2633 | 10.49 | 9500 | 0.5496 | 0.3024 | | 0.2683 | 11.04 | 10000 | 0.5439 | 0.2946 | | 0.2714 | 11.59 | 10500 | 0.5524 | 0.2947 | | 0.2354 | 12.14 | 11000 | 0.5267 | 0.2918 | | 0.2488 | 12.69 | 11500 | 0.5728 | 0.2938 | | 0.2479 | 13.25 | 12000 | 0.5802 | 0.2951 | | 0.245 | 13.8 | 12500 | 0.5571 | 0.2890 | | 0.2422 | 14.35 | 13000 | 0.5531 | 0.2871 | | 0.2369 | 14.9 | 13500 | 0.5453 | 0.2860 | | 0.2345 | 15.45 | 14000 | 0.5452 | 0.2847 | | 0.2507 | 16.0 | 14500 | 0.5536 | 0.2884 | | 0.2454 | 16.56 | 15000 | 0.5577 | 0.2871 | | 0.2729 | 17.11 | 15500 | 0.6019 | 0.2931 | | 0.2743 | 17.66 | 16000 | 0.5619 | 0.2905 | | 0.3031 | 18.21 | 16500 | 0.6401 | 0.3006 | | 0.315 | 18.76 | 17000 | 0.6044 | 0.2990 | | 0.4025 | 19.32 | 17500 | 0.6739 | 0.3304 | | 0.4915 | 19.87 | 18000 | 0.7267 | 0.3472 | | 0.5539 | 20.42 | 18500 | 0.8078 | 0.3483 | | 0.7138 | 20.97 | 19000 | 0.9362 | 0.3765 | | 0.5766 | 21.52 | 19500 | 0.7921 | 0.3392 | | 0.688 | 22.08 | 20000 | 0.8833 | 0.3693 | | 0.6964 | 22.63 | 20500 | 0.9137 | 0.3469 | | 0.7389 | 23.18 | 21000 | 0.9379 | 0.3460 | | 0.7851 | 23.73 | 21500 | 1.0438 | 0.3653 | | 0.7619 | 24.28 | 22000 | 0.9313 | 0.3873 | | 0.7175 | 24.83 | 22500 | 0.8668 | 0.3789 | | 0.6842 | 25.39 | 23000 | 0.8243 | 0.3761 | | 0.6941 | 25.94 | 23500 | 0.8557 | 0.3804 | | 0.7167 | 26.49 | 24000 | 0.8618 | 0.3875 | | 0.721 | 27.04 | 24500 | 0.8686 | 0.3764 | | 0.6949 | 27.59 | 25000 | 0.8773 | 0.3690 | | 0.727 | 28.15 | 25500 | 0.8769 | 0.3666 | | 0.7363 | 28.7 | 26000 | 0.8867 | 0.3634 | | 0.7157 | 29.25 | 26500 | 0.8895 | 0.3626 | | 0.7385 | 29.8 | 27000 | 0.8913 | 0.3621 | ### Framework versions - Transformers 4.19.3 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
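The `wav2vec2-base-vios-commonvoice-1` card above reports a final WER of 0.3621 but leaves usage undocumented. A minimal inference sketch follows; it assumes the repository includes a CTC processor and that the input audio is 16 kHz mono speech (the sampling rate XLS-R models expect). `"sample.wav"` is a placeholder file, not part of the record.

```python
from transformers import pipeline

# Hypothetical usage sketch for the ASR checkpoint listed above.
# "sample.wav" is a placeholder; audio should be 16 kHz mono speech.
asr = pipeline(
    "automatic-speech-recognition",
    model="tclong/wav2vec2-base-vios-commonvoice-1",
)

print(asr("sample.wav")["text"])
```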
lindsayng/t5-base-allwnc-4epoch-bias-3292d5c9
2a8126863d2eee95a64e70fd49373c60ffa800be
2022-06-10T11:28:51.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
lindsayng
null
lindsayng/t5-base-allwnc-4epoch-bias-3292d5c9
0
null
transformers
38,055
Entry not found
simecek/humandna_PERCEIVER_1epoch
6a3340182823f3e0e796c4995f3c9868a46b1903
2022-06-10T14:02:18.000Z
[ "pytorch", "perceiver", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
simecek
null
simecek/humandna_PERCEIVER_1epoch
0
null
transformers
38,056
Entry not found
vaibhavagg303/Bart-Fine-Tuned
cc03cc75506e52f3590c24c001e81a30456ab2cf
2022-06-10T18:19:20.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
vaibhavagg303
null
vaibhavagg303/Bart-Fine-Tuned
0
null
transformers
38,057
Entry not found
huggingtweets/malzliebchen
cdd46a0ec305d18b1d075180c7e50d994d456211
2022-06-10T18:29:39.000Z
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/malzliebchen
0
null
transformers
38,058
--- language: en thumbnail: http://www.huggingtweets.com/malzliebchen/1654885748305/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1521909233024913408/4QsF2YzM_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Malzbeard's Severed Head</div> <div style="text-align: center; font-size: 14px;">@malzliebchen</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Malzbeard's Severed Head. | Data | Malzbeard's Severed Head | | --- | --- | | Tweets downloaded | 3247 | | Retweets | 41 | | Short tweets | 486 | | Tweets kept | 2720 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/e1wzn1e5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @malzliebchen's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/38g20s6n) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/38g20s6n/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/malzliebchen') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/smallmutuals
2732407c6fd0b845d8859d494f89dcfcfca3784e
2022-06-10T19:13:07.000Z
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/smallmutuals
0
null
transformers
38,059
--- language: en thumbnail: http://www.huggingtweets.com/smallmutuals/1654888348503/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1433527116948180999/wejtDhFm_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Cool Owl Guy</div> <div style="text-align: center; font-size: 14px;">@smallmutuals</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Cool Owl Guy. | Data | Cool Owl Guy | | --- | --- | | Tweets downloaded | 367 | | Retweets | 45 | | Short tweets | 25 | | Tweets kept | 297 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/238iiiu5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @smallmutuals's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2hl8vi9y) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2hl8vi9y/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/smallmutuals') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/jana_aych_ess
87e7c8004f1d69c6756a0e7fc380ca9c7f9e9c38
2022-06-10T19:22:06.000Z
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/jana_aych_ess
0
null
transformers
38,060
--- language: en thumbnail: http://www.huggingtweets.com/jana_aych_ess/1654888920998/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1169751139409117185/BU60y7P5_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Jana 'All Cops Are Bastards' H-S (they/them)</div> <div style="text-align: center; font-size: 14px;">@jana_aych_ess</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Jana 'All Cops Are Bastards' H-S (they/them). | Data | Jana 'All Cops Are Bastards' H-S (they/them) | | --- | --- | | Tweets downloaded | 3234 | | Retweets | 343 | | Short tweets | 148 | | Tweets kept | 2743 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3q5i1d01/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jana_aych_ess's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3uy7dmw6) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3uy7dmw6/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/jana_aych_ess') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
vaibhavagg303/Bart-Fine-Tuned2
fbf7f90ab7ac306a9762e9e9d1b419fbc4bb3e52
2022-06-11T01:15:21.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
vaibhavagg303
null
vaibhavagg303/Bart-Fine-Tuned2
0
null
transformers
38,061
Entry not found
huggingtweets/boopysaur
88ee800403b3f1b196ec7ba7d405af9fe9662f04
2022-06-10T22:57:09.000Z
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/boopysaur
0
null
transformers
38,062
--- language: en thumbnail: http://www.huggingtweets.com/boopysaur/1654901824865/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1476816918879297559/2jt_Rt2L_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">boop ♡</div> <div style="text-align: center; font-size: 14px;">@boopysaur</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from boop ♡. | Data | boop ♡ | | --- | --- | | Tweets downloaded | 920 | | Retweets | 162 | | Short tweets | 128 | | Tweets kept | 630 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/398l195g/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @boopysaur's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3te0suw6) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3te0suw6/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/boopysaur') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/jedwill1999
ca8dd632a646f8996f77745cefe04365fc542cbd
2022-06-10T23:10:10.000Z
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/jedwill1999
0
null
transformers
38,063
--- language: en thumbnail: http://www.huggingtweets.com/jedwill1999/1654902604867/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1510152678919135250/lfEmlEGJ_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">a local</div> <div style="text-align: center; font-size: 14px;">@jedwill1999</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from a local. | Data | a local | | --- | --- | | Tweets downloaded | 3246 | | Retweets | 1080 | | Short tweets | 525 | | Tweets kept | 1641 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1qsnsp6t/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jedwill1999's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/mjjc73pu) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/mjjc73pu/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/jedwill1999') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/theanything_bot
ecc080d2b67010ee5bdb30e39c186790b95114bd
2022-06-10T23:19:47.000Z
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/theanything_bot
0
null
transformers
38,064
--- language: en thumbnail: http://www.huggingtweets.com/theanything_bot/1654903166604/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1532874424776437760/vSP1qWyF_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Anything Bot</div> <div style="text-align: center; font-size: 14px;">@theanything_bot</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Anything Bot. | Data | Anything Bot | | --- | --- | | Tweets downloaded | 3250 | | Retweets | 0 | | Short tweets | 0 | | Tweets kept | 3250 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/oy5g644b/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @theanything_bot's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2rui0vn2) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2rui0vn2/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/theanything_bot') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
melihelis/udesa-model-aah-es-28k
a1778f1576ef2a2d0d4e056a4769e82dfd14f57d
2022-06-10T23:44:23.000Z
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
false
melihelis
null
melihelis/udesa-model-aah-es-28k
0
null
transformers
38,065
Entry not found
nateraw/modelcard-creator-demo
e37dbce515203df944b4198c4e36791c8bb1d6af
2022-06-10T23:58:39.000Z
[ "en", "dataset:beans", "arxiv:1810.03993", "arxiv:1910.09700", "pytorch", "modelcards", "autogenerated-modelcard", "license:mit" ]
null
false
nateraw
null
nateraw/modelcard-creator-demo
0
null
pytorch
38,066
--- language: - en license: mit library_name: pytorch tags: - modelcards - autogenerated-modelcard datasets: - beans metrics: - accuracy --- # modelcard-creator-demo ## Table of Contents - [Model Details](#model-details) - [How to Get Started with the Model](#how-to-get-started-with-the-model) - [Uses](#uses) - [Direct Use](#direct-use) - [Downstream Use](#downstream-use) - [Misuse and Out-of-scope Use](#misuse-and-out-of-scope-use) - [Limitations and Biases](#limitations-and-biases) - [Training](#training) - [Training Data](#training-data) - [Training Procedure](#training-procedure) - [Evaluation Results](#evaluation-results) - [Environmental Impact](#environmental-impact) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Model Details <!-- Give an overview of your model, the relevant research paper, who trained it, etc. --> This isn't really a model, it's just a test repo to see if the [model card creator](https://huggingface.co/spaces/nateraw/modelcard-creator) works! - Developed by: Nathan Raw - Language(s): - License: modelcard-creator-demo is licensed under the mit license - Resources for more information: - [Research Paper](https://arxiv.org/pdf/1810.03993.pdf) - [GitHub Repo](https://github.com/nateraw/modelcards) ## How to Get Started with the Model Use the code below to get started with the model. ```python # A nice code snippet here that describes how to use the model... ``` ## Uses #### Direct Use <!-- Describe what kind of tasks this model can be used for directly or problems it can solve. --> [More Information Needed] #### Downstream Use <!-- Describe how this model could be leveraged by a downstream model (if applicable) --> [More Information Needed] #### Misuse and Out-of-scope Use <!-- Describe ways in which this model ***should not*** be used. --> [More Information Needed] ## Limitations and Biases <!-- Describe limitations and biases of this model or models of its type. --> **CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.** [More Information Needed] ## Training #### Training Data <!-- Describe the dataset used to train this model. --> <!-- Refer to data card if dataset is provided and exists on the hub --> See the data card for additional information. #### Training Procedure <!-- Describe the preprocessing, hardware used, training hyperparameters, etc. --> [More Information Needed] ## Evaluation Results <!-- Describe evaluation results of this model across any datasets it was evaluated on. --> [More Information Needed] ## Environmental Impact <!-- Provide information to document the environmental impact of this model --> You can estimate carbon emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700) - **Hardware Type:** - **Hours used:** - **Cloud Provider:** - **Compute Region:** - **Carbon Emitted:** ## Citation Information ```bibtex @inproceedings{Mitchell_2019, doi = {10.1145/3287560.3287596}, url = {https://doi.org/10.1145%2F3287560.3287596}, year = 2019, month = {jan}, publisher = {{ACM} }, author = {Margaret Mitchell and Simone Wu and Andrew Zaldivar and Parker Barnes and Lucy Vasserman and Ben Hutchinson and Elena Spitzer and Inioluwa Deborah Raji and Timnit Gebru}, title = {Model Cards for Model Reporting}, booktitle = {Proceedings of the Conference on Fairness, Accountability, and Transparency} } ```
huggingtweets/waffle_64
22d93b5a1d7d01e6ba8f7dafaa804f5570983a3c
2022-06-11T04:39:14.000Z
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/waffle_64
0
null
transformers
38,067
--- language: en thumbnail: http://www.huggingtweets.com/waffle_64/1654922313776/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1534033778787639296/a9JUby19_400x400.png&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">🧇 Werewaffle🐺LOU NATION🐺</div> <div style="text-align: center; font-size: 14px;">@waffle_64</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from 🧇 Werewaffle🐺LOU NATION🐺. | Data | 🧇 Werewaffle🐺LOU NATION🐺 | | --- | --- | | Tweets downloaded | 3249 | | Retweets | 110 | | Short tweets | 217 | | Tweets kept | 2922 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1rq6yndm/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @waffle_64's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/ucwnzfby) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/ucwnzfby/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/waffle_64') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/gustholomulers
4e35eff4557d0851d8b8fc92aca55c6a6913f61d
2022-06-11T07:53:54.000Z
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/gustholomulers
0
null
transformers
38,068
--- language: en thumbnail: http://www.huggingtweets.com/gustholomulers/1654934015981/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1535477036353040384/tXI_s1Yi_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">soppy</div> <div style="text-align: center; font-size: 14px;">@gustholomulers</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from soppy. | Data | soppy | | --- | --- | | Tweets downloaded | 1482 | | Retweets | 55 | | Short tweets | 329 | | Tweets kept | 1098 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1nhfbopf/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gustholomulers's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3p5yu4wm) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3p5yu4wm/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/gustholomulers') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
ryo0634/bert-base-en-0
adde08287122d6435fcda18199770d3597606d7b
2022-06-11T13:34:56.000Z
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
ryo0634
null
ryo0634/bert-base-en-0
0
null
transformers
38,069
Entry not found
huggingtweets/nosuba_13
89853d55310725ec8a2633ce223a277847658465
2022-06-11T13:40:57.000Z
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/nosuba_13
0
1
transformers
38,070
--- language: en thumbnail: http://www.huggingtweets.com/nosuba_13/1654954852706/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1382014203796553732/DFDiOrcz_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Noel</div> <div style="text-align: center; font-size: 14px;">@nosuba_13</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Noel. | Data | Noel | | --- | --- | | Tweets downloaded | 3170 | | Retweets | 859 | | Short tweets | 369 | | Tweets kept | 1942 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/ui1lp214/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @nosuba_13's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/6sn9tlrz) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/6sn9tlrz/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/nosuba_13') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
lindsayng/t5-base-fullwnc-epoch-4e91e125
0fb829932cd50e6d95bf40c189978b1ed1feab17
2022-06-11T13:54:35.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
lindsayng
null
lindsayng/t5-base-fullwnc-epoch-4e91e125
0
1
transformers
38,071
Entry not found
finiteautomata/pepe-5k_diff
9a402e359c253f0d944a9f55b614dd2afd1d4e6a
2022-06-11T18:32:45.000Z
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
false
finiteautomata
null
finiteautomata/pepe-5k_diff
0
null
transformers
38,072
Entry not found
qqqqqqqb/bert-finetuned-medlog
147035f78d64cd33a8e04357e4a9ee5da2ea0594
2022-06-14T08:33:53.000Z
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
qqqqqqqb
null
qqqqqqqb/bert-finetuned-medlog
0
null
transformers
38,073
Entry not found
zoha/wav2vec2-base-librispeech100h-google-colab
6516054bea883ba8b0c5e6b6497f159e0b5acb83
2022-06-13T13:39:58.000Z
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
zoha
null
zoha/wav2vec2-base-librispeech100h-google-colab
0
null
transformers
38,074
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-librispeech100h-google-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-librispeech100h-google-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset (the model name suggests LibriSpeech 100h). It achieves the following results on the evaluation set: - Loss: 0.1156 - Wer: 0.0756 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.6033 | 0.9 | 1600 | 0.4802 | 0.2728 | | 0.1912 | 1.79 | 3200 | 0.1601 | 0.1140 | | 0.1409 | 2.69 | 4800 | 0.1423 | 0.0932 | | 0.108 | 3.59 | 6400 | 0.1260 | 0.0806 | | 0.1045 | 4.48 | 8000 | 0.1156 | 0.0756 | ### Framework versions - Transformers 4.20.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.2.3.dev0 - Tokenizers 0.12.1
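For reference, a minimal sketch of how the hyperparameters listed above map onto the standard 🤗 `TrainingArguments` (hedged: only values recorded in this card are shown; the processor, data collator, and dataset wiring of the original run are not documented here, and the `output_dir` name is an assumption):

```python
from transformers import TrainingArguments, Wav2Vec2ForCTC

# Base checkpoint named in the card; the CTC head is freshly initialized.
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base")

# Hyperparameters as reported under "Training hyperparameters" above.
args = TrainingArguments(
    output_dir="wav2vec2-base-librispeech100h-google-colab",  # assumed name
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=5,
    fp16=True,  # "Native AMP" mixed precision
)
```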
sactisudesa/bert_sp
537eab5ef3cf3a1a1e0cb43d56727c0ec2fc91ce
2022-06-11T18:35:10.000Z
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
false
sactisudesa
null
sactisudesa/bert_sp
0
1
transformers
38,075
Entry not found
florver/modelo_NLI_kvd_vf_5000
705f321a9b0d8a6c5c44555093d7801c97c69450
2022-06-11T19:01:08.000Z
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
false
florver
null
florver/modelo_NLI_kvd_vf_5000
0
null
transformers
38,076
Entry not found
meghazisofiane/opus-mt-en-ar-evaluated-en-to-ar-1000instancesopus-leaningRate2e-05-batchSize8-11epoch-3
4d31ebc6ef97f870cca6d496b9715a9ef3179c3f
2022-06-11T19:25:04.000Z
[ "pytorch", "tensorboard", "marian", "text2text-generation", "dataset:opus100", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
text2text-generation
false
meghazisofiane
null
meghazisofiane/opus-mt-en-ar-evaluated-en-to-ar-1000instancesopus-leaningRate2e-05-batchSize8-11epoch-3
0
null
transformers
38,077
--- license: apache-2.0 tags: - generated_from_trainer datasets: - opus100 metrics: - bleu model-index: - name: opus-mt-en-ar-evaluated-en-to-ar-1000instancesopus-leaningRate2e-05-batchSize8-11epoch-3 results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: opus100 type: opus100 args: ar-en metrics: - name: Bleu type: bleu value: 21.3028 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opus-mt-en-ar-evaluated-en-to-ar-1000instancesopus-leaningRate2e-05-batchSize8-11epoch-3 This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ar](https://huggingface.co/Helsinki-NLP/opus-mt-en-ar) on the opus100 dataset. It achieves the following results on the evaluation set: - Loss: 0.1421 - Bleu: 21.3028 - Meteor: 0.1285 - Gen Len: 9.975 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 11 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Meteor | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:| | 1.0508 | 1.0 | 100 | 0.1413 | 27.9009 | 0.1416 | 8.85 | | 0.1253 | 2.0 | 200 | 0.1372 | 23.11 | 0.1345 | 9.855 | | 0.1017 | 3.0 | 300 | 0.1390 | 21.7885 | 0.1364 | 9.97 | | 0.0868 | 4.0 | 400 | 0.1378 | 21.3889 | 0.1314 | 9.835 | | 0.0754 | 5.0 | 500 | 0.1398 | 22.198 | 0.132 | 9.675 | | 0.0667 | 6.0 | 600 | 0.1396 | 20.8645 | 0.1308 | 10.055 | | 0.0604 | 7.0 | 700 | 0.1408 | 20.289 | 0.1303 | 10.53 | | 0.0553 | 8.0 | 800 | 0.1414 | 21.7023 | 0.1293 | 10.005 | | 0.0518 | 9.0 | 900 | 0.1421 | 21.3028 | 0.1285 | 9.975 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
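The card above does not include a usage snippet; below is a hedged example of querying this checkpoint for EN→AR translation with the pipeline API (assuming the fine-tuned weights load like the base Marian model they derive from):

```python
from transformers import pipeline

# Marian-based EN->AR checkpoint described in the card above.
translator = pipeline(
    "translation",
    model="meghazisofiane/opus-mt-en-ar-evaluated-en-to-ar-1000instancesopus-leaningRate2e-05-batchSize8-11epoch-3",
)
print(translator("How are you today?")[0]["translation_text"])
```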
sactisudesa/bertin_sp
5a41e566deb90c4ef46e927fba6d739d8fba4080
2022-06-12T15:15:01.000Z
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
false
sactisudesa
null
sactisudesa/bertin_sp
0
null
transformers
38,078
Entry not found
florver/modelo_NLI_kvd_vf_8000
2cccf1db8436db52519ace45e4ac1658dd940a0c
2022-06-11T20:37:33.000Z
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
false
florver
null
florver/modelo_NLI_kvd_vf_8000
0
null
transformers
38,079
Entry not found
florver/modelo_NLI_kvd_2_5000
f45c1ecb6db3bfdfb9a5368d3987e8480243b5cf
2022-06-11T21:39:43.000Z
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
false
florver
null
florver/modelo_NLI_kvd_2_5000
0
null
transformers
38,080
Entry not found
huggingtweets/tayplaysgaymes
e9bb8ae02287ca62a40ee2c08b53b0b9bdd53acf
2022-06-12T03:56:41.000Z
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/tayplaysgaymes
0
null
transformers
38,081
--- language: en thumbnail: http://www.huggingtweets.com/tayplaysgaymes/1655006196516/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1144053838459969536/lv3yBmoX_400x400.png&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Tay</div> <div style="text-align: center; font-size: 14px;">@tayplaysgaymes</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Tay. | Data | Tay | | --- | --- | | Tweets downloaded | 3212 | | Retweets | 693 | | Short tweets | 367 | | Tweets kept | 2152 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1hmextiq/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @tayplaysgaymes's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3r0cse8x) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3r0cse8x/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/tayplaysgaymes') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/bosstjanz
d7498ef06699019d58d7d65560f324355a2000f6
2022-06-12T09:27:34.000Z
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/bosstjanz
0
null
transformers
38,082
--- language: en thumbnail: http://www.huggingtweets.com/bosstjanz/1655026050127/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1342130927737176064/SiNG_CxQ_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Zrimškow</div> <div style="text-align: center; font-size: 14px;">@bosstjanz</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Zrimškow. | Data | Zrimškow | | --- | --- | | Tweets downloaded | 3225 | | Retweets | 368 | | Short tweets | 279 | | Tweets kept | 2578 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/23nemiqj/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @bosstjanz's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2pjrymzt) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2pjrymzt/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/bosstjanz') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
nestoralvaro/mt5-base-finetuned-xsum-RAW_data_prep_2021_12_26___t55_403.csv__google_mt5_base
df5bcb6852bb1cd7e9bfcdda3320392e4de8bc9c
2022-06-12T12:25:16.000Z
[ "pytorch", "tensorboard", "mt5", "text2text-generation", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
text2text-generation
false
nestoralvaro
null
nestoralvaro/mt5-base-finetuned-xsum-RAW_data_prep_2021_12_26___t55_403.csv__google_mt5_base
0
null
transformers
38,083
--- license: apache-2.0 tags: - generated_from_trainer metrics: - rouge model-index: - name: mt5-base-finetuned-xsum-RAW_data_prep_2021_12_26___t55_403.csv__google_mt5_base results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-base-finetuned-xsum-RAW_data_prep_2021_12_26___t55_403.csv__google_mt5_base This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: nan - Rouge1: 0.9712 - Rouge2: 0.1329 - Rougel: 0.9638 - Rougelsum: 0.9675 - Gen Len: 6.4489 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | 0.0 | 1.0 | 36479 | nan | 0.9712 | 0.1329 | 0.9638 | 0.9675 | 6.4489 | ### Framework versions - Transformers 4.19.4 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
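A hedged usage sketch for this checkpoint via the summarization pipeline (note the `nan` validation loss reported above, which suggests the run may have diverged, so generations should be sanity-checked before any real use):

```python
from transformers import pipeline

# mT5 seq2seq checkpoint from the card above, exposed as a summarizer.
summarizer = pipeline(
    "summarization",
    model="nestoralvaro/mt5-base-finetuned-xsum-RAW_data_prep_2021_12_26___t55_403.csv__google_mt5_base",
)
article = "Replace this with the article you want summarized."
print(summarizer(article, max_length=32, min_length=5)[0]["summary_text"])
```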
huggingtweets/manfightdragon
3075e0fb053f4f38b9339ca64eafaa4b91d7f26e
2022-06-12T10:26:35.000Z
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/manfightdragon
0
null
transformers
38,084
--- language: en thumbnail: http://www.huggingtweets.com/manfightdragon/1655029573001/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1184073162520031232/V6DOEeLp_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Lance McDonald</div> <div style="text-align: center; font-size: 14px;">@manfightdragon</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Lance McDonald. | Data | Lance McDonald | | --- | --- | | Tweets downloaded | 3249 | | Retweets | 209 | | Short tweets | 214 | | Tweets kept | 2826 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3pc794z5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @manfightdragon's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2t8940p5) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2t8940p5/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/manfightdragon') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/eitapau
4743a65503ddd2f025475762ffd4fde7c00f01ce
2022-06-12T12:49:59.000Z
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/eitapau
0
null
transformers
38,085
--- language: en thumbnail: http://www.huggingtweets.com/eitapau/1655038194341/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1466644335034839043/woyxmPjG_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Eeeeita pau!</div> <div style="text-align: center; font-size: 14px;">@eitapau</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Eeeeita pau!. | Data | Eeeeita pau! | | --- | --- | | Tweets downloaded | 2460 | | Retweets | 322 | | Short tweets | 274 | | Tweets kept | 1864 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3gcai042/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @eitapau's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1r1v0rkr) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1r1v0rkr/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/eitapau') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
CVPR/DualStyleGAN
bd1a2373dadf0c45a5278da1f5b1176d5b7d9129
2022-06-12T15:52:23.000Z
[ "dataset:cartoon", "dataset:caricature", "dataset:anime", "dataset:pixar", "dataset:slamdunk", "dataset:arcane", "dataset:comic", "arxiv:2203.13248", "pytorch", "style-transfer", "face-stylization", "license:mit" ]
null
false
CVPR
null
CVPR/DualStyleGAN
0
5
pytorch
38,086
--- license: mit library_name: pytorch tags: - style-transfer - face-stylization datasets: - cartoon - caricature - anime - pixar - slamdunk - arcane - comic --- ## Model Details This system provides a web demo for the following paper: **Pastiche Master: Exemplar-Based High-Resolution Portrait Style Transfer (CVPR 2022)** - Algorithm developed by: Shuai Yang, Liming Jiang, Ziwei Liu and Chen Change Loy - Web demo developed by: [hysts](https://huggingface.co/hysts) - Resources for more information: - [Project Page](https://www.mmlab-ntu.com/project/dualstylegan/) - [Research Paper](https://arxiv.org/abs/2203.13248) - [GitHub Repo](https://github.com/williamyang1991/DualStyleGAN) **Abstract** > Recent studies on StyleGAN show high performance on artistic portrait generation by transfer learning with limited data. In this paper, we explore more challenging exemplar-based high-resolution portrait style transfer by introducing a novel DualStyleGAN with flexible control of dual styles of the original face domain and the extended artistic portrait domain. Different from StyleGAN, DualStyleGAN provides a natural way of style transfer by characterizing the content and style of a portrait with an intrinsic style path and a new extrinsic style path, respectively. The delicately designed extrinsic style path enables our model to modulate both the color and complex structural styles hierarchically to precisely pastiche the style example. Furthermore, a novel progressive fine-tuning scheme is introduced to smoothly transform the generative space of the model to the target domain, even with the above modifications on the network architecture. Experiments demonstrate the superiority of DualStyleGAN over state-of-the-art methods in high-quality portrait style transfer and flexible style control. ## Citation Information ```bibtex @inproceedings{yang2022Pastiche, author = {Yang, Shuai and Jiang, Liming and Liu, Ziwei and Loy, Chen Change}, title = {Pastiche Master: Exemplar-Based High-Resolution Portrait Style Transfer}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition}, year = {2022} } ```
lindsayng/t5-base-fullwnc-5epoch2-2dc8dc72
ddb53f61cd5f0a17e0cdf46668cfe01cb2a3d4c8
2022-06-12T14:25:58.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
lindsayng
null
lindsayng/t5-base-fullwnc-5epoch2-2dc8dc72
0
null
transformers
38,087
Entry not found
florver/modelo_NLI_kvd_vf_5000_v2
7e9c0e233b8a8024f7b98cc7e2d827fb6d88bd06
2022-06-12T15:21:59.000Z
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
false
florver
null
florver/modelo_NLI_kvd_vf_5000_v2
0
1
transformers
38,088
Entry not found
roshnir/xlmr-finetuned-mlqa-dev-cross_hi-en
02834138dfce3d8a71ce57b307f4dc7364a9b78f
2022-06-12T15:54:06.000Z
[ "pytorch", "xlm-roberta", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
false
roshnir
null
roshnir/xlmr-finetuned-mlqa-dev-cross_hi-en
0
null
transformers
38,089
Entry not found
kijun/mas-kobart-v2
bbeb6f9e9600c361c0c44a0568ad80eed6f0afab
2022-06-12T15:48:22.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "license:mit", "autotrain_compatible" ]
text2text-generation
false
kijun
null
kijun/mas-kobart-v2
0
null
transformers
38,090
--- license: mit ---
simecek/humandna_ALBERT_1epoch
169c494fe99eea53a3ea0918945c790276ca8e84
2022-06-12T16:07:57.000Z
[ "pytorch", "albert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
simecek
null
simecek/humandna_ALBERT_1epoch
0
null
transformers
38,091
Entry not found
jvanz/lenerbr-autoencoder
9ed4d0d603c10d37958ce9590f3807dfab6b9673
2022-06-12T20:58:42.000Z
[ "pytorch", "encoder-decoder", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
jvanz
null
jvanz/lenerbr-autoencoder
0
null
transformers
38,092
Entry not found
meghazisofiane/mbart-large-cc25-en-ar-evaluated-en-to-ar-2000instances-un_multi-leaningRate2e-05-batchSize2
218e3f3c2828a4fbd9fcb1329cca2d09c9cf281d
2022-06-12T21:21:42.000Z
[ "pytorch", "tensorboard", "mbart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
meghazisofiane
null
meghazisofiane/mbart-large-cc25-en-ar-evaluated-en-to-ar-2000instances-un_multi-leaningRate2e-05-batchSize2
0
null
transformers
38,093
Entry not found
huggingtweets/pandershirts
670f597bcabc2290cb7a040b297572636a8ac2d4
2022-06-12T20:14:03.000Z
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/pandershirts
0
null
transformers
38,094
--- language: en thumbnail: http://www.huggingtweets.com/pandershirts/1655064824816/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1535688698993512449/903NKFWz_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Hellvetika</div> <div style="text-align: center; font-size: 14px;">@pandershirts</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Hellvetika. | Data | Hellvetika | | --- | --- | | Tweets downloaded | 3246 | | Retweets | 119 | | Short tweets | 360 | | Tweets kept | 2767 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/kyjr0nr8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @pandershirts's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/k8rb7z0d) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/k8rb7z0d/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/pandershirts') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
simecek/humandna_BERT_1epoch
bfaebcf93fd768f167b7d53507454fa31576464d
2022-06-12T21:15:00.000Z
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
simecek
null
simecek/humandna_BERT_1epoch
0
null
transformers
38,095
Entry not found
sactisudesa/RobertaBNE
fc2a7af926202badf030243dc8a2e73443511578
2022-06-12T23:42:04.000Z
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
false
sactisudesa
null
sactisudesa/RobertaBNE
0
1
transformers
38,096
Entry not found
simecek/humandna_DISTILBERT_1epoch
fb241c6ae7cec57f94d885e416925c04b2c1e5f6
2022-06-12T23:50:24.000Z
[ "pytorch", "distilbert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
simecek
null
simecek/humandna_DISTILBERT_1epoch
0
null
transformers
38,097
Entry not found
huggingtweets/liebdog1224
b8f2218c242fdf5d7baaaa40ba576084ed354099
2022-06-13T01:03:02.000Z
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/liebdog1224
0
null
transformers
38,098
--- language: en thumbnail: http://www.huggingtweets.com/liebdog1224/1655082177490/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1088468936998432769/YexExPjG_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Noah Liebers</div> <div style="text-align: center; font-size: 14px;">@liebdog1224</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Noah Liebers. | Data | Noah Liebers | | --- | --- | | Tweets downloaded | 362 | | Retweets | 210 | | Short tweets | 30 | | Tweets kept | 122 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/36hyn6h4/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @liebdog1224's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2zpm376e) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2zpm376e/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/liebdog1224') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
ryo0634/bert-base-log_linear-0
5bbf403d7201bf0afab3b7213499a7134d60b686
2022-06-13T03:36:50.000Z
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
ryo0634
null
ryo0634/bert-base-log_linear-0
0
null
transformers
38,099
Entry not found