---
library_name: transformers
license: apache-2.0
base_model: gpt2
tags:
- llama-factory
- full
- diffusion
model-index:
- name: diffugpt-s
  results: []
datasets:
- HuggingFaceFW/fineweb
language:
- en
---

# diffugpt-s

This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb) dataset.

## Model description

Details and model-loading instructions are available at [https://github.com/HKUNLP/DiffuLLaMA](https://github.com/HKUNLP/DiffuLLaMA).
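
The linked repository adapts an autoregressive model (GPT-2) into a diffusion language model. As a rough illustration of the absorbing-state (masking) forward process commonly used in discrete diffusion LMs, here is a minimal sketch. Note that `MASK_ID`, the helper name, and the uniform per-token noising are assumptions for illustration only, not the project's actual training code; see the repository above for the real implementation.

```python
import random

MASK_ID = 50256  # placeholder mask token id (assumption; not necessarily DiffuGPT's choice)

def forward_mask(tokens, t, rng=None):
    """Absorbing-state forward process: each token is independently
    replaced by MASK_ID with probability t, where t=0 leaves the
    sequence clean and t=1 masks everything. The model is then trained
    to reconstruct the original tokens from the partially masked input.
    """
    rng = rng or random.Random(0)
    return [MASK_ID if rng.random() < t else tok for tok in tokens]

tokens = [464, 2068, 7586, 21831]
print(forward_mask(tokens, 0.0))  # t=0: sequence is unchanged
print(forward_mask(tokens, 1.0))  # t=1: every position becomes MASK_ID
```

At inference time the process runs in reverse: starting from a fully masked sequence, the model iteratively predicts and unmasks tokens over several denoising steps.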

### Framework versions

- Transformers 4.44.2
- Pytorch 2.1.1+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1

## Citation

```bibtex
@misc{gong2024scalingdiffusionlanguagemodels,
      title={Scaling Diffusion Language Models via Adaptation from Autoregressive Models}, 
      author={Shansan Gong and Shivam Agarwal and Yizhe Zhang and Jiacheng Ye and Lin Zheng and Mukai Li and Chenxin An and Peilin Zhao and Wei Bi and Jiawei Han and Hao Peng and Lingpeng Kong},
      year={2024},
      eprint={2410.17891},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2410.17891}, 
}
```