boris committed on
Commit
aaff1e6
1 Parent(s): 6686d43

New model from https://wandb.ai/wandb/huggingtweets/runs/395id7uq

Files changed (5)
  1. README.md +7 -7
  2. config.json +1 -1
  3. pytorch_model.bin +2 -2
  4. tokenizer_config.json +1 -1
  5. training_args.bin +2 -2
README.md CHANGED
@@ -1,6 +1,6 @@
 ---
 language: en
-thumbnail: https://www.huggingtweets.com/ai_hexcrawl/1623749267111/predictions.png
+thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
 tags:
 - huggingtweets
 widget:
@@ -42,20 +42,20 @@ The model was trained on tweets from AI Hexcrawl.
 
 | Data | AI Hexcrawl |
 | --- | --- |
-| Tweets downloaded | 243 |
-| Retweets | 8 |
+| Tweets downloaded | 384 |
+| Retweets | 11 |
 | Short tweets | 0 |
-| Tweets kept | 235 |
+| Tweets kept | 373 |
 
-[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2mm8gyd4/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
+[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2g5e7kn3/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
 
 ## Training procedure
 
 The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ai_hexcrawl's tweets.
 
-Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1b14sdo3) for full transparency and reproducibility.
+Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/395id7uq) for full transparency and reproducibility.
 
-At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1b14sdo3/artifacts) is logged and versioned.
+At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/395id7uq/artifacts) is logged and versioned.
 
 ## How to use
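The diff stops at the "How to use" heading; that section of a huggingtweets model card typically shows the transformers text-generation pipeline. A minimal sketch, assuming the model is published under the Hub id huggingtweets/ai_hexcrawl (the prompt string is illustrative):

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 from the Hugging Face Hub.
generator = pipeline("text-generation", model="huggingtweets/ai_hexcrawl")

# Sample a few candidate tweets; do_sample is passed explicitly so that
# multiple return sequences are allowed regardless of config defaults.
for out in generator("My dream is", num_return_sequences=5, do_sample=True):
    print(out["generated_text"])
```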
config.json CHANGED
@@ -35,7 +35,7 @@
       "top_p": 0.95
     }
   },
-  "transformers_version": "4.6.1",
+  "transformers_version": "4.8.2",
   "use_cache": true,
   "vocab_size": 50257
 }
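The only substantive change to config.json is the transformers_version stamp (4.6.1 to 4.8.2); the nested generation defaults such as "top_p": 0.95 are untouched. As a quick check that avoids downloading the full checkpoint, the config can be inspected on its own (Hub id assumed as above; the nesting suggests top_p lives under task_specific_params):

```python
from transformers import AutoConfig

# Fetches config.json only, not the model weights.
config = AutoConfig.from_pretrained("huggingtweets/ai_hexcrawl")
print(config.transformers_version)   # expected: "4.8.2" after this commit
print(config.task_specific_params)   # generation defaults, incl. "top_p": 0.95
```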
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:92540578fbaa48521b522f710dfe4ddc7a3009748dd0c7f4c6918728c9d65fda
-size 510408315
+oid sha256:6075da1dbf503a5f56b1bf72716503049285d599ba39d4829161ccf0954ab06a
+size 510403817
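pytorch_model.bin is stored via Git LFS, so the repo only versions a three-line pointer (spec version, sha256 oid, byte size), which is why a whole new checkpoint appears as a two-line diff. A minimal sketch of verifying a downloaded copy against the new pointer; the local path is hypothetical:

```python
import hashlib

# oid and size from the new LFS pointer in this commit.
EXPECTED_SHA256 = "6075da1dbf503a5f56b1bf72716503049285d599ba39d4829161ccf0954ab06a"
EXPECTED_SIZE = 510403817

def verify_lfs_object(path: str) -> bool:
    """Hash the file in 1 MiB chunks and compare size and sha256 oid."""
    digest = hashlib.sha256()
    size = 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
            size += len(chunk)
    return size == EXPECTED_SIZE and digest.hexdigest() == EXPECTED_SHA256

print(verify_lfs_object("pytorch_model.bin"))  # True for an intact download
```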
tokenizer_config.json CHANGED
@@ -1 +1 @@
-{"unk_token": "<|endoftext|>", "bos_token": "<|endoftext|>", "eos_token": "<|endoftext|>", "add_prefix_space": false, "model_max_length": 1024, "special_tokens_map_file": null, "name_or_path": "gpt2"}
+{"unk_token": "<|endoftext|>", "bos_token": "<|endoftext|>", "eos_token": "<|endoftext|>", "add_prefix_space": false, "model_max_length": 1024, "special_tokens_map_file": null, "name_or_path": "gpt2", "tokenizer_class": "GPT2Tokenizer"}
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:b4558f51036f480e256427281407559fc4b0eb43c7212023be4fdc8ccb2c3e33
-size 2415
+oid sha256:42e3950b8479152b331465dc5e2217232f46ec73cc8ef9a9b57340bedac3a648
+size 2671
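training_args.bin is likewise an LFS pointer; the underlying object is the pickled transformers TrainingArguments used by the Trainer. A hedged sketch of inspecting it after download; note that pickle executes arbitrary code, so only load files from repos you trust:

```python
import torch

# Unpickle the TrainingArguments saved alongside the model.
# On torch >= 2.6 weights_only=False is required; older torch
# versions accept a plain torch.load(path).
args = torch.load("training_args.bin", weights_only=False)
print(args.learning_rate, args.num_train_epochs)
```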