---
license: apache-2.0
language: en
---

# BART (large-sized model)

BART model pre-trained on the English language. It was introduced in the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461) by Lewis et al. and first released in [this repository](https://github.com/pytorch/fairseq/tree/master/examples/bart).

Disclaimer: The team releasing BART did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

BART is a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text.

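As a toy illustration of the corruption step, the text-infilling noising scheme replaces a contiguous span of tokens with a single mask token. The sketch below is only illustrative and is not the fairseq implementation, which combines several noising schemes and samples span lengths from a Poisson distribution (λ = 3):

```python
import random

def corrupt_with_infilling(tokens, mask_token="<mask>", span_length=3):
    """Toy version of BART's text-infilling noising: replace one
    contiguous span of tokens with a single mask token."""
    if len(tokens) <= span_length:
        return [mask_token]
    start = random.randrange(len(tokens) - span_length)
    return tokens[:start] + [mask_token] + tokens[start + span_length:]

original = "My friends are cool but they eat too many carbs".split()
corrupted = corrupt_with_infilling(original)
# During pre-training, the model learns to reconstruct `original` from `corrupted`.
print(corrupted)
```
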
BART is particularly effective when fine-tuned for text generation (e.g. summarization, translation) but also works well for comprehension tasks (e.g. text classification, question answering).

## Intended uses & limitations

You can use the raw model for text infilling. However, the model is mostly meant to be fine-tuned on a supervised dataset. See the [model hub](https://huggingface.co/models?search=bart) to look for fine-tuned versions on a task that interests you.

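For instance, a checkpoint fine-tuned for summarization, such as [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) (BART-large fine-tuned on CNN/DailyMail), can be used through the Hugging Face `transformers` pipeline. The snippet below is a minimal sketch, and the article text is only illustrative:

```python
from transformers import pipeline

# facebook/bart-large-cnn is one of the fine-tuned versions of BART-large
# available on the hub (summarization on CNN/DailyMail).
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

article = (
    "The tower is 324 metres tall, about the same height as an 81-storey "
    "building, and the tallest structure in Paris."
)
print(summarizer(article, max_length=40, min_length=5, do_sample=False))
```
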
### How to use

Here is how to use this model in tf_transformers:

```python
from tf_transformers.models import BartModel
from transformers import BartTokenizer

tokenizer = BartTokenizer.from_pretrained('facebook/bart-large')
model = BartModel.from_pretrained('facebook/bart-large')

# tf_transformers is TensorFlow-based, so request TF tensors from the tokenizer.
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")

inputs_tf = {}
inputs_tf["encoder_input_ids"] = inputs["input_ids"]
inputs_tf["encoder_input_mask"] = inputs["attention_mask"]
# For this simple forward pass the encoder tokens are reused as decoder inputs;
# in practice decoder_input_ids come from the (shifted) target sequence.
inputs_tf["decoder_input_ids"] = inputs["input_ids"]
outputs_tf = model(inputs_tf)
```

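The text infilling mentioned under intended uses can also be tried directly with the Hugging Face `transformers` library rather than tf_transformers. A minimal sketch, where the `<mask>` sentence is only an illustrative input:

```python
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")

# `<mask>` marks the span the model should fill in.
text = "UN Chief Says There Is No <mask> in Syria"
batch = tokenizer(text, return_tensors="pt")
generated_ids = model.generate(batch["input_ids"], max_length=20)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True))
```
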
### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-1910-13461,
  author     = {Mike Lewis and
                Yinhan Liu and
                Naman Goyal and
                Marjan Ghazvininejad and
                Abdelrahman Mohamed and
                Omer Levy and
                Veselin Stoyanov and
                Luke Zettlemoyer},
  title      = {{BART:} Denoising Sequence-to-Sequence Pre-training for Natural Language
                Generation, Translation, and Comprehension},
  journal    = {CoRR},
  volume     = {abs/1910.13461},
  year       = {2019},
  url        = {http://arxiv.org/abs/1910.13461},
  eprinttype = {arXiv},
  eprint     = {1910.13461},
  timestamp  = {Thu, 31 Oct 2019 14:02:26 +0100},
  biburl     = {https://dblp.org/rec/journals/corr/abs-1910-13461.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```