---
library_name: keras-hub
---
### Model Overview
BART encoder-decoder network.

This class implements a Transformer-based encoder-decoder model as
described in
["BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension"](https://arxiv.org/abs/1910.13461).

The default constructor gives a fully customizable, randomly initialized BART
model with any number of layers, heads, and embedding dimensions. To load
preset architectures and weights, use the `from_preset` constructor.

Disclaimer: Pre-trained models are provided on an "as is" basis, without
warranties or conditions of any kind. The underlying model is provided by a
third party and subject to a separate license, available
[here](https://github.com/facebookresearch/fairseq/).


__Arguments__


- __vocabulary_size__: int. The size of the token vocabulary.
- __num_layers__: int. The number of transformer encoder layers and
    transformer decoder layers.
- __num_heads__: int. The number of attention heads for each transformer.
    The hidden size must be divisible by the number of attention heads.
- __hidden_dim__: int. The size of the transformer encoding and pooler layers.
- __intermediate_dim__: int. The output dimension of the first Dense layer in
    a two-layer feedforward network for each transformer.
- __dropout__: float. Dropout probability for the Transformer encoder.
- __max_sequence_length__: int. The maximum sequence length this encoder
    can consume. If `None`, `max_sequence_length` defaults to the sequence
    length of the input. This determines the variable shape for positional
    embeddings.
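
As a reference for the arguments above, here is a minimal sketch of building a
randomly initialized model with a custom configuration, assuming the
`keras_hub.models.BartBackbone` class; the dimensions and dummy inputs below
are illustrative assumptions, not recommended values.
```python
import keras_hub
import numpy as np

# Dummy pre-tokenized inputs; a padding mask value of 1 marks a real token.
input_data = {
    "encoder_token_ids": np.ones(shape=(1, 12), dtype="int32"),
    "encoder_padding_mask": np.array([[1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0]]),
    "decoder_token_ids": np.ones(shape=(1, 12), dtype="int32"),
    "decoder_padding_mask": np.array([[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0]]),
}

# Randomly initialized BART backbone with a custom (illustrative) config.
model = keras_hub.models.BartBackbone(
    vocabulary_size=50265,
    num_layers=6,
    num_heads=12,
    hidden_dim=768,
    intermediate_dim=3072,
    max_sequence_length=12,
)
outputs = model(input_data)
```
The backbone returns the final encoder and decoder hidden states, which task
models such as `BartSeq2SeqLM` build on.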

### Example Usage
```python
import keras
import keras_hub
import numpy as np
```

Use `generate()` to do text generation, given an input context.
```python
bart_lm = keras_hub.models.BartSeq2SeqLM.from_preset("bart_large_en")
bart_lm.generate("The quick brown fox", max_length=30)

# Generate with batched inputs.
bart_lm.generate(["The quick brown fox", "The whale"], max_length=30)
```

Compile the `generate()` function with a custom sampler.
```python
bart_lm = keras_hub.models.BartSeq2SeqLM.from_preset("bart_large_en")
bart_lm.compile(sampler="greedy")
bart_lm.generate("The quick brown fox", max_length=30)
```
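
A sampler instance can also be passed in place of a string shortcut. A minimal
sketch, assuming the `keras_hub.samplers.TopKSampler` class (`k=5` is
illustrative):
```python
bart_lm = keras_hub.models.BartSeq2SeqLM.from_preset("bart_large_en")
# Sample from the 5 most likely tokens at each decoding step.
bart_lm.compile(sampler=keras_hub.samplers.TopKSampler(k=5))
bart_lm.generate("The quick brown fox", max_length=30)
```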

Use `generate()` with encoder inputs and an incomplete decoder input (prompt).
```python
bart_lm = keras_hub.models.BartSeq2SeqLM.from_preset("bart_large_en")
bart_lm.generate(
    {
        "encoder_text": "The quick brown fox",
        "decoder_text": "The fast"
    }
)
```

Use `generate()` without preprocessing.
```python
# Preprocessed inputs, with encoder inputs corresponding to
# "The quick brown fox", and the decoder inputs to "The fast". Use
# `"padding_mask"` to indicate values that should not be overridden.
prompt = {
    "encoder_token_ids": np.array([[0, 133, 2119, 6219, 23602, 2, 1, 1]]),
    "encoder_padding_mask": np.array(
        [[True, True, True, True, True, True, False, False]]
    ),
    "decoder_token_ids": np.array([[2, 0, 133, 1769, 2, 1, 1]]),
    "decoder_padding_mask": np.array([[True, True, True, True, False, False]])
}

bart_lm = keras_hub.models.BartSeq2SeqLM.from_preset(
    "bart_large_en",
    preprocessor=None,
)
bart_lm.generate(prompt)
```

Call `fit()` on a single batch.
```python
features = {
    "encoder_text": ["The quick brown fox jumped.", "I forgot my homework."],
    "decoder_text": ["The fast hazel fox leapt.", "I forgot my assignment."]
}
bart_lm = keras_hub.models.BartSeq2SeqLM.from_preset("bart_large_en")
bart_lm.fit(x=features, batch_size=2)
```

Call `fit()` without preprocessing.
```python
x = {
    "encoder_token_ids": np.array([[0, 133, 2119, 2, 1]] * 2),
    "encoder_padding_mask": np.array([[1, 1, 1, 1, 0]] * 2),
    "decoder_token_ids": np.array([[2, 0, 133, 1769, 2]] * 2),
    "decoder_padding_mask": np.array([[1, 1, 1, 1, 1]] * 2),
}
y = np.array([[0, 133, 1769, 2, 1]] * 2)
sw = np.array([[1, 1, 1, 1, 0]] * 2)

bart_lm = keras_hub.models.BartSeq2SeqLM.from_preset(
    "bart_large_en",
    preprocessor=None,
)
bart_lm.fit(x=x, y=y, sample_weight=sw, batch_size=2)
```

### Example Usage with Hugging Face URI

```python
import keras
import keras_hub
import numpy as np
```

Use `generate()` to do text generation, given an input context.
```python
bart_lm = keras_hub.models.BartSeq2SeqLM.from_preset("hf://keras/bart_large_en")
bart_lm.generate("The quick brown fox", max_length=30)

# Generate with batched inputs.
bart_lm.generate(["The quick brown fox", "The whale"], max_length=30)
```

Compile the `generate()` function with a custom sampler.
```python
bart_lm = keras_hub.models.BartSeq2SeqLM.from_preset("hf://keras/bart_large_en")
bart_lm.compile(sampler="greedy")
bart_lm.generate("The quick brown fox", max_length=30)
```

Use `generate()` with encoder inputs and an incomplete decoder input (prompt).
```python
bart_lm = keras_hub.models.BartSeq2SeqLM.from_preset("hf://keras/bart_large_en")
bart_lm.generate(
    {
        "encoder_text": "The quick brown fox",
        "decoder_text": "The fast"
    }
)
```

Use `generate()` without preprocessing.
```python
# Preprocessed inputs, with encoder inputs corresponding to
# "The quick brown fox", and the decoder inputs to "The fast". Use
# `"padding_mask"` to indicate values that should not be overridden.
prompt = {
    "encoder_token_ids": np.array([[0, 133, 2119, 6219, 23602, 2, 1, 1]]),
    "encoder_padding_mask": np.array(
        [[True, True, True, True, True, True, False, False]]
    ),
    "decoder_token_ids": np.array([[2, 0, 133, 1769, 2, 1, 1]]),
    "decoder_padding_mask": np.array([[True, True, True, True, False, False]])
}

bart_lm = keras_hub.models.BartSeq2SeqLM.from_preset(
    "hf://keras/bart_large_en",
    preprocessor=None,
)
bart_lm.generate(prompt)
```

Call `fit()` on a single batch.
```python
features = {
    "encoder_text": ["The quick brown fox jumped.", "I forgot my homework."],
    "decoder_text": ["The fast hazel fox leapt.", "I forgot my assignment."]
}
bart_lm = keras_hub.models.BartSeq2SeqLM.from_preset("hf://keras/bart_large_en")
bart_lm.fit(x=features, batch_size=2)
```

Call `fit()` without preprocessing.
```python
x = {
    "encoder_token_ids": np.array([[0, 133, 2119, 2, 1]] * 2),
    "encoder_padding_mask": np.array([[1, 1, 1, 1, 0]] * 2),
    "decoder_token_ids": np.array([[2, 0, 133, 1769, 2]] * 2),
    "decoder_padding_mask": np.array([[1, 1, 1, 1, 1]] * 2),
}
y = np.array([[0, 133, 1769, 2, 1]] * 2)
sw = np.array([[1, 1, 1, 1, 0]] * 2)

bart_lm = keras_hub.models.BartSeq2SeqLM.from_preset(
    "hf://keras/bart_large_en",
    preprocessor=None,
)
bart_lm.fit(x=x, y=y, sample_weight=sw, batch_size=2)
```